OpenAI has introduced a new grant program to fund research on making superintelligent AI systems safe. The company believes superintelligence could arrive within the next decade.

According to the company, these advanced systems will “be capable of complex and creative behaviors that humans cannot fully understand.”

“This leads to the fundamental challenge: how can humans steer and trust AI systems much smarter than them? This is one of the most important unsolved technical problems in the world. But we think it is solvable with a concerted effort. There are many promising approaches and exciting directions, with lots of low-hanging fruit. We think there is an enormous opportunity for the ML research community and individual researchers to make major progress on this problem today,” OpenAI wrote in a blog post.

Today's alignment techniques, which aim to keep AI systems safe and steerable, rely on reinforcement learning from human feedback (RLHF). Because RLHF depends on humans supervising the model's outputs, it may break down in the complex scenarios a superintelligent AI would make possible: a human evaluator cannot realistically review millions of lines of intricate generated code, for example.
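To make that bottleneck concrete, here is a minimal, hypothetical sketch of the reward-modeling step at the heart of RLHF, written in PyTorch. The `RewardModel`, its dimensions, and the random "preference" data are illustrative assumptions, not OpenAI's implementation; the point is the human in the loop, since every training pair requires a labeler who can actually judge which output is better.

```python
# Toy sketch of RLHF reward modeling (illustrative, not OpenAI's code).
# A human labels which of two responses is better; the reward model is
# trained so the preferred response scores higher (pairwise Bradley-Terry
# loss). The whole loop assumes a human who can judge the outputs, which
# is exactly what breaks down for superhuman-scale artifacts.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, embed_dim: int = 16):
        super().__init__()
        # Stand-in for a language-model backbone: maps a response
        # embedding to a single scalar reward.
        self.score = nn.Sequential(
            nn.Linear(embed_dim, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake "human feedback": pairs of response embeddings where a labeler
# preferred the first over the second.
preferred = torch.randn(64, 16)
rejected = torch.randn(64, 16)

for step in range(100):
    r_pref, r_rej = model(preferred), model(rejected)
    # Push the preferred response's reward above the rejected one's.
    loss = -torch.nn.functional.logsigmoid(r_pref - r_rej).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In a full RLHF pipeline, the scalar rewards from such a model are then used to fine-tune the language model with reinforcement learning; the grant program targets what happens when the first step, reliable human judgment, is no longer available.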

The company will award grants of $100,000 to $2 million to academic labs, nonprofits and individual researchers. OpenAI is also launching one-year fellowships worth $150,000 for graduate students, with half funding research and half paid as a stipend.

OpenAI says prior experience working on alignment is not required, and it is prepared to support researchers who are new to the field.
