William Pietri
@williampietri@sfba.social

We've launched! After months of work, MLCommons has released our v1.0 benchmark, which measures the propensity of LLMs (aka "AI") to give hazardous responses.

Here are the results for 15 common models:
https://ailuminate.mlcommons.org/benchmarks/

And here's the overview:
https://mlcommons.org/ailuminate/

I was the tech lead for the software, and I want to give a shout-out to my excellent team of developers and the many experts we worked closely with to make this happen.