Machine-learning engineer Liran Hason knew that widespread AI adoption would depend on quelling fears of AI going rogue: fabricating information, spewing offensive results, or waging war on humanity.
To keep AI on the straight and narrow, Hason started Aporia, a company whose software gives AI users and developers the ability to see how AI’s “decisions are being made, to get live alerts when a bias has occurred, or when a potential mistake” has cropped up.
Clients then can use Aporia’s tools to find the root cause of problems so they can build walls or filters that shut them out.
At first, Hason thought that using AI was the best way to monitor AI. However, he realized that he would be using a potentially flawed tool to find flaws in a similar tool.
Instead, he chose old-fashioned “deterministic” software, which lacks AI’s flexibility but can enforce fixed standards that set off alarms when they are breached.
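The deterministic approach can be illustrated with a minimal sketch: fixed, human-written rules check each model output against hard thresholds, with no AI involved in the checking. The rule names and threshold values below are purely illustrative assumptions, not Aporia’s actual product logic.

```python
# Minimal sketch of deterministic, rule-based AI monitoring.
# All rules and thresholds here are hypothetical examples.

def check_prediction(pred: dict) -> list:
    """Return a list of alert strings for a single model prediction."""
    alerts = []

    # Rule 1: confidence must be a valid probability.
    if not 0.0 <= pred["confidence"] <= 1.0:
        alerts.append("confidence out of range")
    # Rule 2: low-confidence decisions are flagged for human review.
    elif pred["confidence"] < 0.6:
        alerts.append("low confidence")

    # Rule 3: crude drift check -- input fell outside the range
    # seen in training data.
    if not pred["feature_min"] <= pred["input_value"] <= pred["feature_max"]:
        alerts.append("input outside training range")

    return alerts

# Example: a low-confidence prediction on an out-of-range input
# trips two alarms at once.
example = {"confidence": 0.4, "input_value": 50,
           "feature_min": 0, "feature_max": 10}
print(check_prediction(example))
```

Because every rule is a plain comparison, the monitor behaves identically on identical inputs, which is exactly the predictability the article says AI itself cannot offer.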
Hason expects AI to become ubiquitous quickly, despite worries about its dangers.
He holds a hope that “governments, by that time, have already defined rules and regulations to make sure that companies are not getting wild with this technology.”
TRENDPOST: Creating software that babysits AI and calls Mom and Dad when it misbehaves will become a separate industry that will thrive for many years to come until all of AI’s penchants for misbehavior are identified and permanently eliminated—if that is ever possible.