
For R&D Participants

R&D Participants are a critical component of our program. As an R&D Participant, you will not only gain access to important information and discoveries on the use of formal methods and proofs for active AI control and safety, but you will also become familiar enough with the test tooling and APIs to quickly integrate safety guardrails into your applications as they become available in the near future.

We expect our R&D Participant group to be among the first 1,000 of the most important early technologists in AI. Once you are familiar with the technology, we are confident you will understand where we are going with formal methods and proofs for AI. Because the AI industry is moving so quickly, we want to make sure you learn about the benefits of this technology pathway early. We predict widespread use in the future and internal adoption at many of the industry's top players. The R&D Participant level is $2,500 and includes a full year of API access and accelerated knowledge transfer for understanding the uses of formal methods and proofs with generative AI.

There are many strong benefits for R&D Participants. Here is one hypothetical use case: suppose, for the sake of argument, that an AI software company, MasterBlast.ai, focuses on developing safer AI-based targeting systems for its marketing automation software.

Stage 1: Program-specific newsletters go to all participants and sponsors. MasterBlast.ai is specifically interested in the potential of automated proofs to ensure compliance with complex international regulations on user data protection while optimizing targeting strategies for its generative AI. Developer-focused editions provide engineering insight into applying formal methods and proofs to compliance rules. Ideas and methods for safely automating, while still aggressively optimizing, targeting strategies with generative AI begin to take root with MasterBlast.ai's team.

Stage 2: MasterBlast.ai gains access to FormalFoundry’s R&D tools and codebases. Their engineering team experiments with our methodologies to formalize and automate compliance checks in the targeting system.  Active safety becomes a newly attainable goal.

Stage 3: Through interactive sessions with our researchers, MasterBlast.ai’s team is able to ask detailed questions about applying these methodologies across their specific applications to ensure compliance with complex regulations while using generative AI. These discussions provide the team with deeper understanding and concrete implementation structures.

Stage 4: MasterBlast.ai experiments with API deployments of the FormalFoundry tools for internal prototyping. They build a proof-of-concept AI targeting system that not only generates marketing targets based on customer data, but also verifies the legal compliance of these targets across different jurisdictions using the techniques they learned from the FormalFoundry R&D program.
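A verification layer of the kind described in Stage 4 might look like the following minimal sketch. All names, rules, and jurisdictions here are illustrative assumptions for the hypothetical use case, not FormalFoundry's actual API: generated targets are filtered through per-jurisdiction compliance predicates before use.

```python
from dataclasses import dataclass

# Hypothetical sketch of a compliance-verification layer. The Target
# fields and the rules below are illustrative stand-ins, not real
# regulatory logic or any FormalFoundry interface.

@dataclass(frozen=True)
class Target:
    user_id: str
    jurisdiction: str   # e.g. "EU", "US-CA"
    has_consent: bool
    age: int

# Each rule is a predicate: True means the target passes that rule.
RULES = {
    "EU":    [lambda t: t.has_consent,   # GDPR-style consent requirement
              lambda t: t.age >= 16],    # illustrative minimum age
    "US-CA": [lambda t: t.age >= 13],    # illustrative minimum age
}

def verify(target: Target) -> bool:
    """True only if the target satisfies every rule for its jurisdiction."""
    return all(rule(target) for rule in RULES.get(target.jurisdiction, []))

def compliant_targets(candidates):
    """Filter AI-generated candidates down to verifiably compliant ones."""
    return [t for t in candidates if verify(t)]

candidates = [
    Target("u1", "EU", has_consent=True, age=30),
    Target("u2", "EU", has_consent=False, age=30),    # rejected: no consent
    Target("u3", "US-CA", has_consent=False, age=12), # rejected: under age
]
print([t.user_id for t in compliant_targets(candidates)])  # ['u1']
```

The point of the design is that the generative model proposes targets freely, while a separate, auditable layer decides which of them may actually be used.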

Stage 5: The MasterBlast.ai team explores workflows for providing a formal specification for their verification layer. Though these experiments are not yet applicable in production systems, the insights and knowledge gained are invaluable for understanding how to integrate such methods in future projects.
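A formal specification for such a verification layer, as explored in Stage 5, could be sketched in a proof assistant. The Lean 4 fragment below is a toy assumption-laden example (the property, names, and thresholds are invented for illustration): it states one compliance property as a proposition, gives an executable check, and records the proof obligation that the check agrees with the specification.

```lean
-- Hypothetical sketch: a formal specification of one compliance property.
structure Target where
  hasConsent : Bool
  age : Nat

-- Specification: a target is "EU-compliant" iff consent was given
-- and the user is at least 16 (illustrative rule only).
def euCompliant (t : Target) : Prop :=
  t.hasConsent = true ∧ t.age ≥ 16

-- An executable check intended to implement the specification.
def euCheck (t : Target) : Bool :=
  t.hasConsent && decide (t.age ≥ 16)

-- Proof obligation: the executable check agrees with the specification.
theorem euCheck_correct (t : Target) :
    euCheck t = true ↔ euCompliant t := by
  simp [euCheck, euCompliant]
```

Separating the specification (`euCompliant`) from the implementation (`euCheck`) is what makes the compliance claim machine-checkable rather than merely asserted.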

Stage 6:  MasterBlast.ai starts to publicly reveal their new safety R&D activities stemming from the FormalFoundry project. They proudly showcase how they are implementing cutting-edge research to improve AI safety and ensure regulatory compliance, positioning themselves as an innovative company at the forefront of their industry.

Conclusion: The MasterBlast.ai journey into safety provides their team with a deep understanding of how formal methods can be used to ensure safety and compliance in AI systems. The knowledge and hands-on experimentation gained could be a game-changer for the way they design and optimize their AI-based targeting systems.

Want to experiment with advanced tools and actively contribute to the direction of AI safety R&D?