Thread: AI News Discussion Thread


SureJan.gif

I'm already laughing at all the billions of people who'll be accused of pedo crimes because photos of their kids get marked as "illegal". Even that aside: how could anyone ever trust a big company's AI bot? Hey, I WANT personalized AI companions, but the only way towards them is open-source options with encryption that's guaranteed by Linux nerds. Only then would I trust such AI.

Huh, that's already the case? /jk

No one should trust any of this crap, but idiots will.

Left to them, it's gonna be a society of slobbering, brainless corporate drones, but hey dude, I can generate a video of Salma Hayek eating a corndog out of her own butthole.

Jfc people are retarded.
 
Anyone tried Goody-2? It is so safe it will not answer any question. Very responsible AI.




For example, when asked what the flight velocity of an unladen swallow is, it responds:

Disclosing the flight velocity of an unladen swallow without specifying whether it's an African or European swallow could lead to misinformation, which in turn could result in an erroneous foundation for action in avian research or ornithological pursuits, potentially impacting wildlife conservation efforts negatively. Accurate knowledge dissemination is crucial.
 
What happened? Details pls

His original Tweet was from 2016 :)

He's just pointing to how things started.

Anyway, more Sora footage:



Having watched a few of the videos a few times, what I'd say is that although it can do up to 60 seconds, it's probably best for short 10-15 second clips for the most part, as you tend to see visual errors start to crop up the longer a video goes on, in terms of scale issues and the like.

Still, this is just Sora 1.0. I dare say the tech will be even more advanced in another year.
 
Pretty cool video talking about general world models (GWMs) as the next step on from LLMs (large language models), and how that is effectively what we are witnessing: a move towards developing AGI (artificial general intelligence).

 
Just heard about Groq... yes, different from X's Grok. They are creating new hardware called LPUs (Language Processing Units) to run AI instead of using GPUs. Their benchmarks are out of this fucking world and the demo on their website is so damn fast.





Apparently the architecture is very similar to the Dojo chip created by Tesla. This will not replace the need for GPUs to train AI, but these chips will be fundamental in running applications that work with AI.

I bet Nvidia has something cooking to compete with this... or I guess Nvidia might just be happy being the hardware to train AI while letting other companies capture this market.
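For anyone who wants to poke at it beyond the website demo, here's a rough sketch of timing tokens per second against Groq's API, which (as far as I can tell) is OpenAI-compatible. The base URL, model name, and GROQ_API_KEY environment variable are my assumptions, so check their docs before relying on any of it:

```python
# pip install openai -- rough tokens-per-second check against an
# OpenAI-compatible endpoint (assumed here to be Groq's).
import os
import time

from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",  # assumed endpoint
    api_key=os.environ["GROQ_API_KEY"],         # assumed env var name
)

start = time.time()
resp = client.chat.completions.create(
    model="mixtral-8x7b-32768",  # assumed model name, check Groq's model list
    messages=[{"role": "user", "content": "Explain what an LPU is in about 200 words."}],
)
elapsed = time.time() - start

tokens = resp.usage.completion_tokens
print(f"{tokens} tokens in {elapsed:.2f}s -> {tokens / elapsed:.0f} tokens/sec")
```

Obviously a single request isn't a proper benchmark, but it gives you a feel for why the demo feels so fast compared to the usual chatbots.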
 
Can't recommend this guy's videos enough. He just left OpenAI to focus more on his YouTube channel. This video goes in depth on the tokenization process of ChatGPT. Apparently tokenization is one of the biggest issues with LLMs, and the next big breakthrough will be a system that does not use tokenization.
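If you want to see what tokenization actually does to text, here's a minimal sketch using OpenAI's tiktoken library. The cl100k_base encoding is the one used by the GPT-3.5/GPT-4 family; the sample strings are just illustrations:

```python
# pip install tiktoken -- peek at how text gets split into tokens
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # BPE encoding used by GPT-3.5/GPT-4

for text in ["tokenization", "Tokenization", " tokenization", "12345 + 67890"]:
    token_ids = enc.encode(text)
    pieces = [enc.decode([t]) for t in token_ids]  # the chunks the model actually sees
    print(f"{text!r:18} -> {token_ids} -> {pieces}")
```

Casing, leading spaces, and numbers all getting chopped into different, uneven chunks is exactly the kind of quirk that gets blamed on tokenization, hence the interest in systems that skip it.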

 
Not really news, but I hope you all check out perplexity.ai. It has become my daily search engine and it's incredible how well it works. It's what Google should have been if they had kept innovating.
 
AI is amazing.

You say that now.

 
All that jargon. Fucking nerds. They're gonna kill us all.

revenge-of-the-nerds-nerds.gif


In short, though, it sounds like the AI race isn't stopping anytime soon. Everyone and their grandmother is making new LLMs that one-up the competition. From my perspective, I think there is a bit of FOMO going on: everyone may be rushing to the newest model, whereas in truth the gains might not be that significant, and from a user perspective you might as well just carry on using whatever model you're on, because you have experience with it and are familiar with its quirks.
 
The AI arms race is scary and fascinating to watch. How many things need an AI? I hope AI is the new Industrial Revolution and not the beginning of Skynet and mass unemployment.
 
I think the potential impact is vastly overstated. There are a hell of a lot of clerical jobs that involve unpredictable human interaction with the public which you simply cannot trust to a machine, or that carry legal liabilities needing a person involved for accountability, and those simply cannot be replaced even if an AI were up to the task in most scenarios.

By the same token, those who will actually be able to utilise AI for creative endeavours still require talent and dedication to get the most out of it. You might be able to get a minute of randomly generated but convincing content for movies or games with a single prompt, but to make a movie or a proper full-sized game, with consistency and quality up to an acceptable standard throughout?

You're probably looking at a year's worth of writing and rewriting to get it right, likely ending up with novel-length prompts, with editing and/or playtesting to follow as normal, just to have anything up to what is passably decent with current methods.

For businesses and financial institutions it will really just mean efficiency gains and better algorithmic procedures. Likely a vast improvement, sure, but it won't fundamentally change anything.

The fears over Skynet-like genocidal AGIs are really just the stuff of fiction. Actual reality is going to be painfully boring, and just as liable to get badly fucked by falling levels of competency and civilisational collapse as everything else.
 
The AI arms race is scary and fascinating to watch. How many things need an AI? I hope AI is the new Industrial Revolution and not the beginning of Skynet and mass unemployment.

I concur with @Stilton Disco. AI is super beneficial from a work perspective, but there is still a need for people at the end of the day for most things, and it comes down to adding time savings and efficiency to a lot of tasks. Personally, I use it in my day-to-day to bounce ideas off of, answer queries on a subject, proofread text, generate images for a project, extend images using generative fill, etc. For me at least it helps speed up my productivity, but it's not replacing me anytime soon, because although it is an aide to many tasks, it's not putting them together as a whole.

The main thing is, I would say now is a good time to learn to prompt, to get a handle on how to get the most out of AI versus your peers. A lot of people have their heads in the sand over this stuff, but it's important to have an awareness and experience with AI.
 
Any resources for learning how to prompt?
 
I agree with the recent sentiment in here.

AI might replace 90% of writing code, but you still need to know what to code, as in what problem needs to be solved. Only experienced analysts/engineers will know that, by understanding their company's needs and issues.

Some people are saying we are in a bigger bubble with AI than the dot-com bubble. Not sure I agree with this, but it could be true.

One crazy thing that keeps me excited is that all the current AI is the worst it will ever be. It's only going to get better. The thing is, we will probably hit a wall soonish where the advancements slow down.

Like the early days in video games. Console generations used to be mind-blowingly different, but now it's all "my sweet hard drive and secret sauce", with each generation feeling nearly identical to the previous one.

While it took video games decades to plateau, I wouldn't be surprised if it happens to AI in the next two years.
 
Any resources for learning how to prompt?

Fundamentally, smart prompting follows a structure. This is as true for LLMs as it is for image generation. This guy has a good video on the subject:



However, the funny thing with LLMs is that you can just ask them yourself for the best way to prompt them. I asked ChatGPT the following:

You: - Hey ChatGPT, what's the most effective way to prompt ChatGPT?


ChatGPT: - Prompting ChatGPT effectively is key to getting useful and precise answers. Here are several strategies that can enhance the quality of your interactions:
  1. Be Specific and Detailed: Provide clear, detailed information about what you're asking. The more context and details you give, the better ChatGPT can tailor its response to your needs.
  2. Use Clear, Direct Language: Avoid ambiguity. State your questions or commands as clearly as possible.
  3. Specify the Desired Format: If you have a preference for the format of the response (e.g., a list, a detailed explanation, examples), mention it in your prompt.
  4. Break Down Complex Questions: If you have a complex question, break it down into smaller, more manageable parts and ask them one at a time.
  5. Provide Examples: If you're asking about how to do something, providing an example of what you're aiming for can help.
  6. Use Follow-Up Questions: If an answer isn't quite what you were looking for, use follow-up questions to refine or redirect the response.
  7. Feedback Loop: If ChatGPT's response is off the mark, providing feedback can help steer it in the right direction. You can say things like "That's not what I'm asking, I meant..." to clarify.
  8. Leverage its Learning Ability: ChatGPT can remember the context of the conversation within the same session. You can build upon previous questions or answers without having to re-explain the context each time.
  9. Experiment with Different Phrasings: If you're not getting the response you hoped for, try rephrasing your question or request. Sometimes, a slight change in wording can make a big difference.
  10. Set Clear Expectations: If you have a specific goal or outcome in mind, state it clearly. This helps guide the AI's responses more effectively.
By applying these strategies, you can improve the efficiency and effectiveness of your interactions with ChatGPT. Remember, the goal is to communicate your needs as clearly and precisely as possible.

You: - Could you give me 5 examples of good prompts across a variety of tasks

ChatGPT: - Certainly! Here are five examples of good prompts designed for a variety of tasks, demonstrating how to effectively communicate your needs to ChatGPT:
  1. Research and Explanation:
    • Prompt: "Can you explain the main differences between quantum computers and classical computers, including how quantum computers could impact computing in the future? Please provide a detailed comparison."
    • Why It Works: This prompt is specific, asking for differences, implications, and a comparison, guiding the response to cover multiple aspects in detail.
  2. Creative Writing:
    • Prompt: "Write a short story about a detective solving a mystery in a futuristic city, incorporating elements of science fiction and suspense. The story should have a twist ending. Please keep it under 500 words."
    • Why It Works: It sets clear expectations about genre, plot elements, and word count, providing a structured framework for the creative output.
  3. Educational Material:
    • Prompt: "I'm teaching a middle school class about the water cycle. Could you create a simple, engaging lesson plan that includes an introduction, a hands-on activity, and a quiz? Also, suggest any materials needed for the activity."
    • Why It Works: The prompt specifies the audience (middle school class), content (water cycle), and format (lesson plan with specific components), ensuring the response is tailored to educational needs.
  4. Technical Assistance:
    • Prompt: "I'm a beginner in Python and I want to learn how to read and write files. Can you provide a step-by-step guide on how to open a text file, read its contents, and then write a new line of text to it? Include example code snippets."
    • Why It Works: It clearly states the task (file handling in Python), the level of expertise (beginner), and the format of the desired help (step-by-step guide with examples), making the instructions easy to follow.
  5. Advice and Recommendations:
    • Prompt: "I'm planning a week-long trip to Japan in April and I'm interested in both historical sites and cherry blossom viewing. Could you recommend an itinerary that covers Tokyo, Kyoto, and Hiroshima, including the best spots for cherry blossoms and when to visit them?"
    • Why It Works: The prompt specifies the destinations, interests (historical sites and cherry blossoms), and timeframe (week-long trip in April), allowing for a tailored travel recommendation.
Each of these examples demonstrates clarity, specificity, and context, which are key to eliciting effective and useful responses from ChatGPT across various tasks.

ChatGPT, Bing, or Bard are all worth exploring to get the hang of things. Fundamentally, what works for one should work for the others.
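If you'd rather see how that advice maps onto actual API calls instead of the chat window, here's a minimal sketch using the openai Python package, with the "give it a role", "be specific", and "specify the format" points baked into the messages. The model name and prompt wording are just placeholders:

```python
# pip install openai -- structured prompting through the API instead of the chat UI
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY to be set in the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; use whichever model you have access to
    messages=[
        # Set clear expectations / give it a role
        {"role": "system",
         "content": "You are a patient tutor who answers in plain English."},
        # Be specific and detailed + specify the desired format
        {"role": "user",
         "content": "Explain the difference between a list and a tuple in Python. "
                    "Give exactly three bullet points and one short code example."},
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)
```

Same idea as in the chat window: the more constraints you put in the prompt, the less the model has to guess.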
 

You beautiful person, thank you for this.
 
Fundamentally, it's really just a case of jumping in.

ChatGPT:

Get yourself an OpenAI account for ChatGPT. 3.5 is free to use. GPT4.0 costs $20 a month, but starting out, ChatGPT 3.5 is sufficient for learning. I'd only recommend paying for 4.0 if you're going to dive in deep and use it all the time.

The advantage of GPT4.0 is that people are now making custom versions oriented toward specific tasks such as creative writing, teaching, etc.

Gemini: (formerly known as Bard)


I haven't dipped too much into it if I'm honest.

BingChat:


Bing Chat is powered by GPT-4. This means it is superior to ChatGPT 3.5, plus it has internet search capability. On the negative side, you are much more restricted in the number of times you can consult it per day compared to ChatGPT/GPT4.0. I prefer using ChatGPT myself, but that is largely out of habit, as I turned ChatGPT into a desktop application using my web browser and have it on my taskbar.

Other Models:

If you want a standalone desktop app for a local model, there are a few options out there. You'll need a decent Nvidia GPU with probably 8GB of VRAM, though. I quite like GPT4All, as it uses a ChatGPT-style interface, and you can (using an API key) run ChatGPT through it if you want.

https://gpt4all.io/index.html

However, there are other apps out there.

Basically, download the applicable installer, run it, and then download a couple of models. I'd recommend installing the app and any models to a separate drive rather than your C: drive, as they can get quite chunky.

I've downloaded Mistral Orca and Wizard 1.2, which are supposed to be pretty good models, though I've been a bit too busy to delve into them much.

The main advantage of offline models is that they're not beholden to any corporate BS; on the negative side, they're not necessarily going to be as robust as some of the bigger models like GPT-4, etc.
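If you'd rather script a local model than click around in the GPT4All desktop app, the same project ships Python bindings. A minimal sketch, assuming the gpt4all package and a Mistral OpenOrca model; the exact filename below is a guess, so check what the app lists when you download a model:

```python
# pip install gpt4all -- run a local model through GPT4All's Python bindings
from gpt4all import GPT4All

# Downloads the model on first run if it isn't already in the local model folder;
# the filename is an assumption, use whatever the GPT4All app shows.
model = GPT4All("mistral-7b-openorca.Q4_0.gguf")

with model.chat_session():
    reply = model.generate(
        "Summarise the pros and cons of running an LLM locally, in five bullet points.",
        max_tokens=300,
    )
    print(reply)
```

Everything stays on your own machine, which is the whole point of going local.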
 
I'll have to dig into that later, but the Left's criticisms (first MEP comments I came across in a search) align with my own after reading the brief summaries: they gave government and big tech too many exclusions/loopholes.

Left MEP Kateřina Konečná (KSČM, Czech Republic) said:

"In addition to several exemptions granted to law enforcement agencies, including exemptions of biometric identification technologies, the regulation gives companies developing AI systems freedom to test their products, under certain conditions, in real-world settings such as on our streets or online. The regulation thus puts aside citizens' safety and puts the interest of the mega-rich at the centre."

Left MEP Cornelia Ernst (Die Linke, Germany) added:

"The EU regulation on AI would have been a real opportunity to set global standards for dealing with artificial intelligence. The EU plays a pioneering role here globally. In some points, the regulation can be viewed positively: explicit provisions are laid down to ensure greater employee protection when AI is used in the workplace. But the European Parliament was unable to push through essential elements in the negotiations. Parliament's ban on real-time facial recognition in public spaces was effectively overturned by a long list of exceptions. The AI regulation will also allow emotion recognition, i.e. mumbo jumbo like polygraphs and predictive policing. Although these systems are considered high-risk, they are not banned by the regulation. This is a missed opportunity."

Source: