How to keep AI developing without compromising safety and security
Christina Montgomery and Francesca Rossi

The introduction of generative AI systems into the public domain exposed people all over the world to new technological possibilities, implications, and even consequences many had yet to consider. Thanks to systems like ChatGPT, just about anyone can now use advanced AI models that not only detect patterns, analyze data, and make recommendations, as earlier versions of AI did, but also go beyond that to create new content, generate original chat responses, and more.

A turning point for AI

When ethically designed and responsibly brought to market, generative AI capabilities offer unprecedented opportunities to benefit business and society. They can improve customer service, healthcare systems, and legal services. They can also support and augment human creativity, expedite scientific discovery, and mobilize more effective ways to address climate challenges. We are at a critical inflection point in AI's development, its use, and its potential to accelerate human progress.

However, this huge potential comes with risks, such as the generation of fake content and harmful text, possible privacy leaks, amplification of bias, and a profound lack of transparency into how these systems operate. It is critical, therefore, that we question what AI could mean for the future of the workforce, democracy, creativity, and the overall well-being of humans and our planet.

The need for new AI ethics standards

Some tech leaders recently called for a six-month pause in the training of more powerful AI systems to allow for the creation of new ethics standards. While the intentions and motivations behind the letter were undoubtedly good, it misses a fundamental point: these systems are within our control today, and so are the solutions. Responsible training, together with an ethics-by-design approach across the whole AI pipeline and multi-stakeholder collaboration around AI, can make […]