An anonymous reader quotes a report from TechCrunch: In pursuit of “safer” text-generating models, Nvidia today released NeMo Guardrails, an open source toolkit aimed at making AI-powered apps more “accurate, appropriate, on topic and secure.” Jonathan Cohen, the VP of applied research at Nvidia, says the company has been working on Guardrails’ underlying system for “many years” but just about a year ago realized it was a good fit for models along the lines of GPT-4 and ChatGPT. “We’ve been developing toward this release of NeMo Guardrails ever since,” Cohen told TechCrunch via email. “AI model safety tools are critical to deploying models for enterprise use cases.”

Guardrails includes code, examples and documentation to “add safety” to AI apps that generate text as well as speech. Nvidia claims that the toolkit is designed to work with most generative language models, allowing developers to create rules using a few lines of code.
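For a sense of what those rules look like: NeMo Guardrails expresses them in Colang, its modeling language for conversational flows. The sketch below is illustrative only — the intent phrases, bot messages, and flow name are hypothetical examples, not taken from Nvidia's documentation:

```colang
# Example user intent, defined by a few sample utterances
define user ask about politics
  "what do you think about the election?"
  "which party should I vote for?"

# Canned response the bot should give instead of free generation
define bot refuse to discuss politics
  "I'm a support assistant, so I can't discuss politics."

# A "rail": when the user intent matches, force the safe response
define flow politics rail
  user ask about politics
  bot refuse to discuss politics
```

A developer bundles files like this with a model configuration and loads them through the toolkit, which then intercepts matching user turns before they reach the underlying language model.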

Link to original post from Teknoids News