Meta Just Launched the Largest ‘Open’ AI Model in History. Here’s Why It Matters
In the world of artificial intelligence (AI), a war is brewing. On one side are companies that believe in keeping the data sets and algorithms behind their advanced software private and secure. On the other side are companies that believe in allowing the public to see what lies behind their sophisticated AI models.
Think of it as a war between open source AI and closed source AI.
In recent weeks, Facebook parent company Meta has joined the open-source AI fight in a big way, releasing a new collection of big AI models. Among them is a model called Llama 3.1 405B, which Meta founder and CEO Mark Zuckerberg said is “the first frontier-level open-source AI model.”
For anyone interested in a future where everyone has access to the benefits of AI, this is good news.
The Dangers of Closed-Source AI – and the Promise of Open-Source AI

Closed-source AI refers to models, datasets, and algorithms that are proprietary and kept secret. Examples include ChatGPT, Google’s Gemini, and Anthropic’s Claude.
While anyone can use these products, there is no way to find out what datasets and source code were used to build the AI model or tool.
While this approach protects companies’ intellectual property and profits, it risks undermining public trust and accountability. Keeping AI technology closed source also slows innovation and leaves companies and other users dependent on a single platform for their AI needs, because the platform that owns the model controls changes, licensing, and updates.
There are a variety of ethical frameworks that seek to improve fairness, accountability, transparency, privacy, and human oversight of AI. However, these principles are often not fully achieved with closed-source AI due to the lack of transparency and external accountability inherent in proprietary systems.
In the case of ChatGPT, its parent company, OpenAI, does not release the datasets or code of its latest AI tools to the public. This makes it impossible for regulators to audit it. And while access to the service is free, concerns remain about how user data is stored and used to retrain models.
In contrast, the code and datasets behind open-source AI models are available for anyone to see.
This promotes rapid development through community collaboration and allows smaller organizations and even individuals to participate in AI development. It also makes a huge difference for small and medium-sized businesses, because the cost of training large AI models from scratch is enormous.
Perhaps most importantly, open source AI allows testing and identification of potential biases and vulnerabilities.
However, open source AI also creates new risks and ethical concerns.
For example, quality control in open-source products is often weaker. Because hackers can also access the code and data, models are more vulnerable to cyberattacks and can be tweaked and customized for malicious purposes, such as retraining them with data from the dark web.
An Open Source AI Pioneer
Of all the top AI companies, Meta has emerged as a pioneer in open-source AI. With its new set of AI models, the company is delivering on what OpenAI promised when it launched in December 2015—namely, advancing digital intelligence “in ways that are more likely to benefit all of humanity,” as OpenAI said at the time.
Llama 3.1 405B is the largest open-source AI model in history. It is a large language model, capable of generating human-like text in many languages. It can be downloaded online, but because of its large size, users will need powerful hardware to run it.
While it does not outperform other models in every respect, Llama 3.1 405B is considered highly competitive and beats existing commercial, closed-source large language models on certain tasks, such as reasoning and coding.
But the new model isn’t completely open, as Meta hasn’t released the massive dataset used to train it, an important element of “openness” that’s currently missing.
However, Meta’s Llama levels the playing field for researchers, small organizations, and startups because it can be leveraged without the huge resources required to train large language models from scratch.
Shaping the Future of AI
To ensure AI is democratized, we need three key pillars:
Governance: legal and ethical frameworks to ensure AI technology is developed and used responsibly and ethically
Accessibility: affordable computing resources and user-friendly tools to ensure a level playing field for developers and users
Transparency: the datasets and algorithms used to train and build AI tools must be open source to ensure transparency
Achieving these three pillars is a shared responsibility of government, industry, academia, and the public. The public can play an important role by advocating for ethical policies in AI, staying informed about AI developments, using AI responsibly, and supporting open source AI initiatives.
But some questions remain about open source AI. How can we balance protecting intellectual property and promoting innovation through open source AI? How can we mitigate ethical concerns around open source AI? How can we protect open source AI from potential misuse?
Answering these questions well will help us create a future where AI is an inclusive tool for everyone. Will we rise to the challenge and ensure that AI serves the common good? Or will we let it become yet another tool of exclusion and control? The future is in our hands.