Sponsored by Monday Properties and written by ARLnow, Startup Monday is a weekly column that highlights Arlington-based startups, founders, and local tech news. Monday Properties is proudly featuring 1515 Wilson Blvd in Rosslyn.
A Rosslyn-based startup says it is on a mission to help companies adopt artificial intelligence responsibly.
The company, Trustible, announced in mid-April that it emerged from “stealth” — a quiet period of growth and initial fundraising — with an “oversubscribed” $1.6 million in “pre-seed” funding, tech news outlet Technical.ly D.C. first reported.
That money will go toward hiring employees and improving its compliance tools, which are aimed at helping companies demonstrate they are following emerging government regulations, such as those poised for adoption by the U.S. and the European Union, per a press release.
As this technology rapidly improves, companies worldwide are racing to adopt and adapt to it. In that haste, however, Trustible founders Gerald Kierce and Andrew Gamino-Cheong worry organizations could wind up not complying with government regulations and unleashing harmful applications of AI.
“AI is becoming a foundational tool in our everyday lives — from business applications, to public services, to consumer products,” they wrote in a blog post last month. “Recent advances in AI have dramatically accelerated its adoption across society — unquestionably changing the way humans interact with technology and basic services.”
Companies ramping up their use of AI are entering uncharted waters, however. The founders say these organizations have to answer tricky questions, like whether AI can be biased and who is liable if AI breaks the law or produces results that are not factual. They worry about misuses such as wrongful prosecution, unequal health care and national surveillance.
“With great power comes great responsibility,” they say. “Despite good intentions, organizations deploying AI need the enterprise tools and skills to build Responsible AI practices at scale. Moreover, they don’t feel prepared to meet the requirements of emerging AI regulations.”
That is why demonstrating trust in AI will be key to its successful adoption, say Kierce and Gamino-Cheong.
“Many of the challenges we’ve outlined require interdisciplinary solutions — they are as much of a technical and business problem as they are socio-technical, political, and humanitarian,” per the blog post. “But there is a critical role for a technology solution to accelerate Responsible AI priorities and scale governance programs.”
That is where Trustible comes in. It provides all the minutiae companies need — checklists, documentation tools and reporting capabilities — to adopt AI even as governments work to develop ways to regulate it.
The platform helps organizations define policies, implement and enforce ethical AI practices and prove they comply with regulations, in anticipation of compliance reviews and AI audits.
Already, the U.S. and Europe appear poised to adopt regulations, they say.
In the U.S., the National Institute of Standards and Technology has released a framework the founders believe will inform any pending federal regulations. Meanwhile, the White House has released an “AI Bill of Rights” the founders say serves as a blueprint for institutions looking to develop internal AI policies.