Thread: AI News Discussion Thread
Dude has some awesome round-ups about AI. I'm always most impressed by the medical applications… Sadly, big pharma will find ways to make people pay dearly instead of actually treating them with the new tech.

 

The leaders of the ChatGPT developer OpenAI have called for the regulation of "superintelligent" AIs, arguing that an equivalent to the International Atomic Energy Agency is needed to protect humanity from the risk of accidentally creating something with the power to destroy it.

In a short note published to the company's website, co-founders Greg Brockman and Ilya Sutskever and the chief executive, Sam Altman, call for an international regulator to begin working on how to "inspect systems, require audits, test for compliance with safety standards, [and] place restrictions on degrees of deployment and levels of security" in order to reduce the "existential risk" such systems could pose.

"It's conceivable that within the next 10 years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today's largest corporations," they write. "In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can't just be reactive."

In the shorter term, the trio call for "some degree of coordination" among companies working on the cutting-edge of AI research, in order to ensure the development of ever-more powerful models integrates smoothly with society while prioritising safety. That coordination could come through a government-led project, for instance, or through a collective agreement to limit growth in AI capability.

Researchers have been warning of the potential risks of superintelligence for decades, but as AI development has picked up pace those risks have become more concrete. The US-based Center for AI Safety (CAIS), which works to "reduce societal-scale risks from artificial intelligence", describes eight categories of "catastrophic" and "existential" risk that AI development could pose.

While some worry about a powerful AI completely destroying humanity, accidentally or on purpose, CAIS describes other more pernicious harms. A world where AI systems are voluntarily handed ever more labour could lead to humanity "losing the ability to self-govern and becoming completely dependent on machines", described as "enfeeblement"; and a small group of people controlling powerful systems could "make AI a centralising force", leading to "value lock-in", an eternal caste system between ruled and rulers.

OpenAI's leaders say those risks mean "people around the world should democratically decide on the bounds and defaults for AI systems", but admit that "we don't yet know how to design such a mechanism". However, they say continued development of powerful systems is worth the risk.

"We believe it's going to lead to a much better world than what we can imagine today (we are already seeing early examples of this in areas like education, creative work, and personal productivity)," they write. They warn it could also be dangerous to pause development. "Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it's inherently part of the technological path we are on. Stopping it would require something like a global surveillance regime, and even that isn't guaranteed to work. So we have to get it right."
 
While some worry about a powerful AI completely destroying humanity, accidentally or on purpose, CAIS describes other more pernicious harms. A world where AI systems are voluntarily handed ever more labour could lead to humanity "losing the ability to self-govern and becoming completely dependent on machines", described as "enfeeblement"; and a small group of people controlling powerful systems could "make AI a centralising force", leading to "value lock-in", an eternal caste system between ruled and rulers.

This is what these assholes are really about. "Oh no, what will people do when they don't have a job anymore?!" It's about protecting capitalism. Not having a job in a world where machines are doing those jobs wouldn't be a problem if people were given money unconditionally. Ofc the rich don't want that, and so AI taking over too many jobs would spell trouble for them.

Also fuck OpenAI, because not for one second do I believe them that this isn't also about content censorship.
 
This is what these assholes are really about. "Oh no, what will people do when they don't have a job anymore?!" It's about protecting capitalism. Not having a job in a world where machines are doing those jobs wouldn't be a problem if people were given money unconditionally. Ofc the rich don't want that, and so AI taking over too many jobs would spell trouble for them.

Also fuck OpenAI, because not for one second do I believe them that this isn't also about content censorship.
Maybe they can't boil the frog too quickly or the frog will notice.

But automation is the key to them freeing themselves from their dependence on the masses which they look down upon. With control of msm they can get people to vote for genocide, as people won't know what they're voting for and will only vote for what the programming moves them toward.

Things like the RESTRICT Act to destroy all alternate voices are the key to an echo chamber where they control who gets elected, since even ballot printing can be overcome with enough votes against. (And they did cheat in 2020; just ask how Biden won breaking all records while winning the fewest counties of any modern president. How could so few territories produce such overwhelming numbers? If the general sentiment was in favor of Biden, why did he lose so many Democrat counties?)
 
Don't know if it's been mentioned earlier, but what's the general impression of the Automatic1111 web UI for Stable Diffusion? I'm currently using Easy Diffusion but looking for something with more options.
 
Don't know if it's been mentioned earlier, but what's the general impression of the Automatic1111 web UI for Stable Diffusion? I'm currently using Easy Diffusion but looking for something with more options.

I have both Automatic1111 (A1111) and Vlad Diffusion installed at present. The latter is basically a fork of A1111 that's kept up to date a bit more regularly. Well worth a look:
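If you want to poke at what these front-ends are actually driving, they're all wrappers around the same Stable Diffusion pipelines. A rough sketch with the Hugging Face diffusers library (the model ID and prompt here are just placeholders, and the fp16/"cuda" bits assume an NVIDIA GPU) looks something like this:

```python
# Minimal sketch of what UIs like A1111 / Vlad Diffusion drive under the hood,
# using the Hugging Face diffusers library. Model ID and prompt are placeholders;
# float16 + "cuda" assumes an NVIDIA GPU with enough VRAM.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe(
    "a lighthouse on a cliff at sunset, oil painting",
    num_inference_steps=30,   # the "steps" slider in the web UIs
    guidance_scale=7.5,       # the "CFG scale" slider
).images[0]
image.save("lighthouse.png")
```

Handy mostly for understanding what the UI sliders (steps, CFG scale, etc.) actually map to.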

 
I hope development cycles are cut drastically and we go back to the glory days of the PS2.



I watched this last night when it was being broadcast live and I was not the least bit impressed.

I will outright reject any game that goes that route.

This is going to do gaming no favors, the same way third-party game engines like UE did gaming no favors, at least not until AI reaches a human-like capacity for and appreciation of creativity, unpredictability and empathy. I'm not sure we'll ever get to see that day considering the trajectory we're currently on.

A key line from Jensen's Computex presentation: "The data center is your computer."

He didn't come right out and say it, but he was practically signaling the end of the personal computer. Consoles won't be far behind.
 
I watched this last night when it was being broadcast live and I was not the least bit impressed.

I will outright reject any game that goes that route.

This is going to do gaming no favors, the same way third-party game engines like UE did gaming no favors, at least not until AI reaches a human-like capacity for and appreciation of creativity, unpredictability and empathy. I'm not sure we'll ever get to see that day considering the trajectory we're currently on.

A key line from Jensen's Computex presentation: "The data center is your computer."

He didn't come right out and say it, but he was practically signaling the end of the personal computer. Consoles won't be far behind.

Things are so generic now that I think AI will do a better job than humans at coming up with creative stuff.
 
I hope development cycles are cut drastically and we go back to the glory days of the PS2.



But who controls the AI model? Mandatory quotas for specific ideologies would probably be implemented across the board, and the eventual output would end up the way certain third parties want it to be.

To me, AI models are just massive code libraries; vanilla coding is more my style. The added benefit of less complex code is that it can be redeployed a lot more easily as well. Who knows which programs the AI features will end up tied to.
 
But who controls the AI model? Mandatory quotas for specific ideologies would probably be implemented across the board, and the eventual output would end up the way certain third parties want it to be.

To me, AI models are just massive code libraries; vanilla coding is more my style. The added benefit of less complex code is that it can be redeployed a lot more easily as well. Who knows which programs the AI features will end up tied to.

If they at least trained it on good shit, we would be good to go in terms of creativity, but they will train it to be a wimp AI for sure.
 


Interesting video from Olivio about some stuff that has potential with respect to games
 


Interesting video from Olivio about some stuff that has potential with respect to games


"...where they showed beautiful examples of how AI's used..."



It's no wonder the standards for entertainment (and almost everything else) have fallen so low. There was nothing at all impressive, never mind beautiful, about that gaming example. The dialogue, voice work and animation were stiff and emotionless.

Olivio's video here works like propaganda, extolling the benefits while downplaying and ignoring the negatives. It's happening at scale and will lead to consumers forming opinions about AI's rollout without getting a balanced summary of what it's really bringing to the table.

An example to make the point, putting aside my subjective complaints above: a lot of people have been criticizing online requirements for single-player games and/or games as a service, but how many of them will stop to consider that this AI will require online connectivity and move the GaaS idea forward? Is that trade-off worth it?
 
"...where they showed beautiful examples of how AI's used..."



It's no wonder the standards for entertainment (and almost everything else) have fallen so low. There was nothing at all impressive, never mind beautiful, about that gaming example. The dialogue, voice work and animation were stiff and emotionless.

Olivio's video here works like propaganda, extolling the benefits while downplaying and ignoring the negatives. It's happening at scale and will lead to consumers forming opinions about AI's rollout without getting a balanced summary of what it's really bringing to the table.

An example to make the point, putting aside my subjective complaints above: a lot of people have been criticizing online requirements for single-player games and/or games as a service, but how many of them will stop to consider that this AI will require online connectivity and move the GaaS idea forward? Is that trade-off worth it?

This is literally alpha, proof-of-concept stuff. As someone who's been in on the AI stuff since early 2022 and seen the leaps and bounds it has made in barely a year, what seems stiff today will be completely different in a few months. The genie is out of the bottle on this.
 
Gonna be interesting to see how the courts decide. If I read A Song of Ice and Fire and then take that as inspiration to write my own fantasy novels, then that's obviously not illegal.

Yeah, the legal argument doesn't make sense to me, but I've only read that tweet, so maybe there's more to it. But anyone can buy a copy of a book, share it with others, read it to others, etc. If you were teaching a class, made it required reading, and told students to write more like these authors, you certainly haven't committed any crime.
 


Should be available now on W11 apparently

I receive a newsletter from Microsoft because of some learning tasks, and one of the things it talked about was the importance of treating your AI nicely and speaking to it politely. They said that if you act rudely, the AI will begin to respond in kind. I think that if Windows starts nagging me about updates I'll be giving it a snarky reply or two. 😁

The technology doesn't interest me too much, but it would be fantastic for accessibility and that's pretty good.
 
ChatGPT Vision looks like the next crazy step.

People are doing some crazy things with it: show it a picture of a dashboard and it will write the code for you to build it, highlight certain things in an image and ask questions about them, or just talk to it.
 
ChatGPT Vision looks like the next crazy step.

People are doing some crazy things with it: show it a picture of a dashboard and it will write the code for you to build it, highlight certain things in an image and ask questions about them, or just talk to it.

Yeah, shit is bonkers. The next decade is going to be a real sink-or-swim situation for knowledge workers, with those who get to grips with the tools and start leveraging them in their daily work capitalising, and those who don't finding themselves up the creek without a paddle.
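For anyone curious what "show it a picture and ask questions" looks like from the developer side, here's a rough sketch using OpenAI's Python client. It assumes the gpt-4-vision-preview model name and the v1 SDK, and the image URL and prompt are just placeholders:

```python
# Minimal sketch of an image + question request to the OpenAI chat API.
# Assumes the openai v1 Python client and an OPENAI_API_KEY in the environment;
# the model name, image URL and prompt below are placeholders/assumptions.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What does this dashboard show, and how would you rebuild it in HTML?",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/dashboard.png"},
                },
            ],
        }
    ],
    max_tokens=500,
)

print(response.choices[0].message.content)
```

Same idea as the demos floating around: the image goes in as just another message part alongside the text question.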
 
Pretty cool video about VFX guys playing around with AI Facegen tech with game characters.

They admit that they're just using AI to patch over the existing clips, but at the same time talk about how much better the results would be if it were done at source.

 
Pretty cool video about VFX guys playing around with AI Facegen tech with game characters.

They admit that they're just using AI to patch over the existing clips, but at the same time talk about how much better the results would be if it were done at source.



Cool, now apply to MK1