Putting principles into practice: Adobe’s approach to AI ethics

Image source: your123, Adobe Stock.

Today, companies are increasingly infusing artificial intelligence (AI) and machine learning (ML) into their products to better serve their customers. At Adobe, hundreds of features powered by Adobe Sensei, our AI engine, help businesses deliver more personalized experiences, empower people to take their creativity to the next level, and enable users to achieve in seconds what would otherwise take hours. With over a decade of experience integrating AI and ML into Adobe tools, it’s clear that AI done right can improve the creation, delivery, and optimization of digital experiences at scale.

It’s also clear that today’s consumers see the value of emerging technologies: according to a recent Adobe survey, 70% of consumers say they trust AI to improve their experiences with a brand. But in order to harness AI in the most useful way, we need to recognize the unique challenges this technology brings. AI is only as good as the data it is trained on. And even with good data, you can still end up with biased AI, which can unintentionally discriminate or disparage and make people feel less valued.

At Adobe, we are constantly striving to make AI better for everyone. That’s why we have a comprehensive AI ethics program in place to ensure we develop AI technologies in an ethical, responsible and inclusive way for our customers and communities. As one of the world’s most innovative companies for nearly 40 years, Adobe takes the impact of our technology as seriously as developing the technology itself.

More than just a buzzword

Two years ago, when we decided to create our AI ethics program, we took a deliberate and thoughtful approach. It started with establishing the ethical principles we follow when developing AI-based technologies.

We established an AI Ethics Committee composed of a cross-functional group of Adobe employees with diverse gender, racial, and professional backgrounds, from research engineers to product developers to legal teams. Having a diverse committee is important for evaluating innovations from different angles and can help identify potential issues that a less diverse team might not see. Together, the committee defined Adobe’s AI ethics principles of responsibility, accountability, and transparency.

Responsibility and accountability mean it is no longer enough to provide the best technology in the world for creating digital experiences. We want to make sure our technology is designed to be inclusive and respects our customers, our communities, and our Adobe values. Specifically, that means developing new systems and processes to assess whether our AI is creating harmful biases. Transparency means being open with our customers about how we use AI and providing feedback mechanisms so they can report concerns about our AI-powered tools or our AI practices. We want to involve our customers in the journey and work with our community to responsibly design and implement AI.

Having a concise, simple, and actionable set of principles that align with our specific corporate values is critical to operationalizing them across our engineering organization.

Practice what we preach

So what exactly does “operationalizing our principles” look like? We have created standardized processes from design to development to deployment that include training, testing, and a review process overseen by a diverse AI ethics review board.

As part of this process, engineers perform a multi-part AI ethics review to capture the potential ethical impact of AI-based innovations. For most products and features, the impact analysis shows no major ethical risk. Take, for example, an Adobe font-selection tool that uses AI to help customers choose different typefaces. After an initial evaluation, the product met our approval standards. In other cases, the products are subject to further scrutiny.

Prior to the fall 2020 release, we reviewed the Photoshop Neural Filters feature, which lets users apply non-destructive generative filters that use AI/ML to create elements that weren’t previously in an image. Think: adding color to black-and-white photos, changing someone’s expression from sad to happy, changing their hairstyle, and so on. We knew Neural Filters would allow creators to make compelling adjustments and speed up the image-editing process. But we wanted to make sure they produced results that accurately reflected real human characteristics and didn’t perpetuate harmful biases. Through a rigorous technical review process, we ensured that Neural Filters reflects and respects human characteristics, and the feature has become a major hit among creators around the world.

Collaboration is key

Although different companies may have their own unique AI considerations, the need for ethical AI standards is universal. As we continue to evolve Adobe’s AI ethics program, we are sharing our knowledge and best practices with our industry peers. We have contributed to the Software Alliance (BSA) AI ethics industry code of practice so that other companies can leverage the work that has already been done in this space, and as a founding member of the AI Ethics Partnership, we collaborate with academics, civil society, industry, and media organizations to answer the most important and challenging questions about the future of this emerging technology.

We recognize that developing AI and reviewing it ethically are ongoing processes. As we continue to learn and grow, we will work with our employees, customers, and communities to deliver innovations that reflect our Adobe values and deliver on our commitment to responsible technology development.

Learn more here about Adobe’s AI ethics principles.

This story originally appeared on CNBC.
