ChatGPT’s issues with hallucination prompt Nvidia to release NeMo Guardrails

Nvidia, the GPU giant, has released NeMo Guardrails, software that stops Large Language Models (LLMs), including OpenAI's ChatGPT, from straying off-topic, presenting false information, or connecting to unsafe apps. LLMs such as ChatGPT are known to hallucinate, confidently providing incorrect information, particularly when poked and prodded by users. Nvidia's open-source NeMo Guardrails works with any LLM and is available on GitHub. The software comes with a set of rules that intercept questions before a chatbot can present incorrect information; the rules can force the AI to respond with "I don't know" instead of presenting something convincing but ultimately false.
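The intercept-and-refuse pattern described above can be sketched in plain Python. This is an illustration of the general idea only, not NeMo Guardrails' actual implementation; the blocked-topic list and the `llm_answer` stand-in are hypothetical:

```python
# Minimal sketch of the "intercept and refuse" guardrail pattern.
# This is NOT NeMo Guardrails' code: the topic list and llm_answer()
# are hypothetical stand-ins for a real policy and a real LLM call.

BLOCKED_TOPICS = {"medical advice", "stock tips"}  # hypothetical policy

def llm_answer(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g. a request to a chat API)."""
    return f"A confident-sounding answer about: {prompt}"

def guarded_answer(prompt: str) -> str:
    # The rail runs BEFORE the model: if the prompt matches a blocked
    # topic, the model is never queried and a safe fallback is returned.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "I don't know."
    return llm_answer(prompt)
```

The key design point is that the check happens before generation, so a hallucinated-but-convincing answer on a blocked topic is never produced in the first place.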

NeMo Guardrails and LangChain

NeMo Guardrails operates alongside all the tools that enterprise app developers use, including LangChain, another open-source toolkit that plugs third-party apps into LLMs, and the business automation platform Zapier. Virtually any software developer can use NeMo Guardrails; no machine-learning expertise or data science background is necessary. Nvidia designed the software to help users keep this new class of AI-powered applications safe. Many industries are adopting LLMs, the powerful engines behind these AI apps: they answer customers' questions, summarize lengthy documents, write software, and accelerate drug design.

Nvidia offers NeMo Guardrails as a supported package via the Nvidia AI Enterprise platform and Nvidia AI Foundations cloud services. The company will continue to develop and improve NeMo Guardrails, which is the result of several years of research, "as AI evolves."

The reason behind NeMo Guardrails

ChatGPT and other LLMs like it have a reputation for going off the rails, or "hallucinating," with enough poking and prodding. OpenAI's sticking plaster was to limit the number of queries a user could make before the chatbot descended into madness. However, this has not stopped the AI from making gaffes, including claiming that living people are dead, failing at basic math, gaslighting users, and writing buggy code. One such hallucination wiped $120 billion off Alphabet's value when its Bard chatbot incorrectly claimed in a demo that "JWST took the very first pictures of a planet outside of our own solar system."

Nvidia's VP of applied research, Jonathan Cohen, believes that NeMo Guardrails' ability to detect and mitigate hallucination could be the solution to the technology's teething problems. At the same time, Nvidia's move to rein in AI will likely help fill its coffers through its supported offerings. The company has already ridden the AI wave to great financial success thanks to hardware optimized for, or even designed specifically for, AI workloads; its data center unit is now bigger than the entire company was in 2020.

Conclusion

Nvidia's release of NeMo Guardrails comes as businesses adopt LLMs, the powerful engines behind these AI apps. The software helps keep chatbots safe by restricting them from straying off-topic, presenting false information, or connecting to unsafe apps. Nvidia offers the software as open source on GitHub and as a supported package via the Nvidia AI Enterprise platform and Nvidia AI Foundations cloud services. Although AI technology has many flaws, Nvidia's VP of applied research believes that NeMo Guardrails' ability to detect and mitigate hallucination could be the solution to the technology's teething problems.

FAQs:

Q: What is ChatGPT?

A: ChatGPT is a chatbot developed by OpenAI. It is designed to respond to natural language queries and generate human-like responses. ChatGPT is powered by a large language model and can generate text on a wide range of topics.

Q: Why does Nvidia want to put a leash on ChatGPT?

A: Nvidia has released an open-source software called NeMo Guardrails, which is designed to keep large language models like ChatGPT on topic and prevent them from producing inaccurate or unsafe responses. This is important for businesses that use AI-powered chatbots in customer service or other applications, as inaccurate or unsafe responses can lead to reputational damage or legal issues.

Q: What is NeMo Guardrails?

A: NeMo Guardrails is an open-source software developed by Nvidia that is designed to keep large language models on topic and prevent them from producing inaccurate or unsafe responses. The software intercepts questions before the chatbot can come up with any old nonsense, and can even force the AI to respond with "I don't know" instead of presenting something convincing but ultimately false.

Q: Can NeMo Guardrails be used with any large language model?

A: Yes, NeMo Guardrails is designed to work with any large language model. The software is open source and operates alongside "all the tools that enterprise app developers use," including LangChain, another open source toolkit that helps plug third-party apps into LLMs, and business automation platform Zapier.

Q: What are some examples of chatbot hallucinations?

A: Chatbots like ChatGPT and others have been known to produce inaccurate or nonsensical responses when pushed beyond their limits. For example, in a demo, Alphabet's Bard chatbot incorrectly claimed that "JWST took the very first pictures of a planet outside of our own solar system." ChatGPT and its Microsoft Bing collaboration have also made all sorts of gaffes, ranging from casually claiming that living people are dead to failing at basic math and gaslighting users.
