
AI language systems are ‘quite stupid’

In a recent interview, Nick Clegg, President of Global Affairs at Meta (formerly Facebook), shared his perspective on the current state of artificial intelligence (AI) models. Clegg described today’s AI models as “quite stupid,” implying that they are not as advanced or autonomous as some of the hype surrounding AI might suggest. He believes the technology is nowhere near a stage where AI models could develop full autonomy and think for themselves, and he dismissed the existential-threat warnings issued by some AI experts as premature, since truly autonomous AI systems do not currently exist.

Meta’s decision to release its large language model, Llama 2, as open source has sparked both excitement and concern within the tech community. Open-sourcing the model means it will be freely available to commercial businesses and researchers. While open-source models can benefit from extensive user testing and improvement through community feedback, the model’s power also carries a risk of misuse. Past experience with chatbots has shown instances of manipulation to spread hate speech, generate false information, and give harmful instructions. This raises the question of whether Llama 2’s guardrails are strong enough to prevent such misuse, and what action Meta would take if problems arise.

Meta’s decision to partner with Microsoft to make Llama 2 available via platforms such as Azure is also significant. Microsoft has already invested billions of dollars in ChatGPT creator OpenAI, underscoring its commitment to AI and hinting at the consolidation of AI development under a few major players. This raises questions about competition in the AI industry and whether it is healthy for a handful of large companies to dominate the space.

Llama 2 also stands apart from other AI models such as GPT-4 and the Google LLM powering the Bard chatbot: it is open source, while those models are not freely available for commercial or research use. That distinction adds to the debate over how AI models should be made accessible and applied.

Despite the potential benefits of open-sourcing AI models like Llama 2, concerns about legislation and regulation remain. Dame Wendy Hall, a computer science professor, worries about the industry regulating itself. She compares open-source AI to handing out a template for building a nuclear bomb, suggesting that without proper oversight and control it could be misused.

In response to such concerns, Clegg argues that the open-sourced Llama 2 is safer than AI models that have been open-sourced in the past. He does agree, however, that AI needs to be regulated; the key question is how to ensure the responsible and safe open-sourcing of large language models.

The debate around AI and its regulation is ongoing, and the collaboration between Meta and Microsoft further highlights the stakes involved in the AI industry’s development. While there are concerns about potential misuse, there is also a drive towards open-sourcing AI models to accelerate innovation and foster collaboration. As the AI landscape continues to evolve, striking a balance between openness, competition, and responsible use remains a crucial challenge for the tech community and policymakers alike.
