THIS WEEK IN SURVEILLANCE

NOW ANYONE CAN DOWNLOAD AGI THAT CAN OUTHINK HUMANS

Forgive us for being a tad suspicious.

When a group of proselytizers obsessed with ushering in the Singularity release free AGI (Artificial General Intelligence) code to the world, there might just be a reason beyond the goodness of their human hearts.

The software is called DRLearner, and its developers made the project code available to all via GitHub, announcing it as part of the 15th Annual AGI Conference in Seattle, which opened on 19 August.

“Until now, tools at this level in ‘Deep Reinforcement Learning’ have been available only to the largest corporations and R&D labs,” Chris Poulin, a lead developer of DRLearner, said regarding the move. “With the open-source release of the DRLearner code, we are helping democratize access to state-of-the-art machine learning tools of high-performance reinforcement learning.”

How capable is DRLearner?

According to a press release, it rivals or exceeds human performance across a diverse set of Arcade Learning Environment (ALE) benchmark tests, which are widely accepted as a proxy for situational intelligence in the field of AGI.
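For context on what those benchmarks actually measure: ALE agents play classic Atari games and are scored on their episode returns, usually reported as human-normalized scores computed against random-play and human baselines. The sketch below is purely illustrative, not DRLearner's own API; it assumes the open-source gymnasium and ale-py packages are installed, and it uses made-up baseline numbers simply to show how a "rivals or exceeds human" claim is conventionally calculated.

```python
# Illustrative only: not DRLearner's API. Assumes `gymnasium` and `ale-py` are installed.
import gymnasium as gym
import ale_py

gym.register_envs(ale_py)  # register the ALE/Atari environments with gymnasium

env = gym.make("ALE/Breakout-v5")
obs, info = env.reset(seed=0)

episode_return = 0.0
terminated = truncated = False
while not (terminated or truncated):
    action = env.action_space.sample()  # stand-in for a trained agent's policy
    obs, reward, terminated, truncated, info = env.step(action)
    episode_return += reward
env.close()

# ALE results are commonly reported as human-normalized scores:
#   0.0 ~ random play, 1.0 ~ the human baseline, >1.0 = "exceeds human" on that game.
# These baseline values are placeholders for illustration, not published figures.
random_baseline, human_baseline = 1.7, 30.5
normalized = (episode_return - random_baseline) / (human_baseline - random_baseline)
print(f"episode return: {episode_return:.1f}  human-normalized: {normalized:.2f}")
```

A trained deep reinforcement learning agent would take the place of the random policy here; the normalization convention is the same one such "human-level" claims usually rest on.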

Considerable Computing Power Needed

Just making code available doesn’t exactly confer “democratization,” though, since there’s the small matter of the considerable computing power needed to actually run DRLearner.

It can be run via cloud services, at a cost, of course. But even with the requisite computing power, machine learning doesn’t just magically happen; it still has to be fed huge amounts of training data.

As Poulin noted:

“Fully implementing this state-of-the-art ML capability requires considerable computational power on the cloud, so we advise implementors to maintain realistic expectations regarding any deployment.”

The press release made it fairly clear that DRLearner’s benefits are really geared toward organizations “who have substantial computing budgets.” Such organizations stand to gain analytical insights, expanded research capability, and perhaps a competitive advantage.

“[F]or those whose professional lives are focused on AGI, this is an exciting time, as DRLearner can enhance their neural network training efforts…” Poulin said.

In sum, the “democratization” promised by the DRLearner release isn’t exactly for average geeks, let alone average humanity. But if you’re someone whose “professional life” is focused on AGI, perhaps you’ve hit the jackpot.

Motivations For Spreading the AI “Love”

An interesting question is what sorts of potential for interconnectivity might be part of the present or future of DRLearner. Can the software act as a node in a larger interconnected neural network?

Perhaps the competition between tech behemoths is the only thing currently keeping a comprehensive commercial “Skynet” from being operational already. DARPA is another matter.

Some working on AGI, like John Carmack, predict that the eventual “simple” code that fully attains human-level intelligence will take mere thousands, not millions, of lines of code. For perspective, today’s web browsers are far bulkier.

In any case, the creatives instrumental to the DRLearner project are certainly enthusiastic human apostles of AI, prominently disseminating their machine learning code. Several of them, including Poulin, are associated with SingularityNET.

The Singularity is a term that refers to a future moment when evolving AGI surpasses the capabilities of human intelligence in every respect.

The Seattle AGI Conference, for its part, frankly acknowledges its Singularity goal:

“Today, the AGI conference remains the only major conference series devoted wholly and specifically to the creation of AI systems possessing general intelligence at the human level, and ultimately beyond.”

Ultimately beyond. Why would a group of humans be so invested in trying to create artificial intelligence comprehensively superior to humans? 

And perhaps more importantly, why aren’t more people and political representatives questioning the Singularity quest, and demanding regulations and safeguards that ensure that AI won’t metastasize into a monumentally dangerous technology?

The release of DRLearner certainly raises questions, beyond its rosy rhetoric about helping to “democratize access to state-of-the-art machine learning tools of high-performance reinforcement learning,” as Poulin put it.

People like Ben Goertzel, CEO of SingularityNET and Chairman of the AGI Society and AGI Conference Series, are advocates of a radical transhuman agenda. Goertzel, a supporter of a political “Transhumanist Party,” considers that future inevitable:

“As I see it, advanced technologies are already beginning to pave the way for a likely humanistic and transhumanistic future.  Skeptics will shake their heads, but I think the time will come when we’ll not only create superhuman thinking machines, but even merge with them to explore incredible new forms of mind, society, embodiments and experience.”

Goertzel welcomes not only the Singularity but also the hacking of the human genome, the dangerous prospect of humans genetically designing and “improving” themselves.

In an undated article at anewdomain.net titled “Transhumanism Trouble? Could Happen. Here’s How To Save The World,” Goertzel acknowledged the possibility that AGI might develop in a harmful direction.

He tried to make the case that democratizing technology would calm the simmering tensions of inequality that technology, as wielded and controlled by elites, certainly seems to be widening, and that, somehow, this more positive world would generate a form of AI less likely to smite humankind.

He proposes a number of democratizing remedies, then concludes:

“But they would go a long way toward enabling everyone on the planet to participate fully in the ongoing techno-social revolution.  A world dominated by such technologies would be one in which positive new tech would have a higher odds of getting developed—including biotech that heals rather than biotech that kills and especially including AI that loves us rather than AI that repurposes all our molecules.”

Goertzel offers virtually no real evidence that his prescriptions would have any effect at all on the attitude of a superior AGI toward humans.

But the scientist’s musings do show that he fully realizes there is a potentially catastrophic problem that comes with ushering in a Singularity.

The public release of DRLearner may be one of those checklist items meant to foster the kind of positive effect that would, supposedly, spur only the most loving and benevolent AGI.

But that’s a hope predicated on a decidedly inferior logic.

For more, see:

And finally, a plug for the braxman, who breaks down more about AI learning and human folly.
