AI adoption is on the rise across a wide range of industries, from finance to healthcare and transportation. These advances have largely been enabled by breakthroughs in deep learning, such as the recently released AI-powered chatbot ChatGPT, and by improvements in hardware, such as ML-specific computing infrastructure.
AI-powered tools bring huge benefits in terms of cost savings, efficiency gains, and improved customer experience. However, their increasing use also carries risks that need to be assessed and effectively managed. Serious risks include attackers exploiting weaknesses in AI systems to cause false predictions or to steal private information from the training data. Regulators are becoming increasingly aware of the importance of regulating AI to ensure its responsible and safe adoption.
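To make the "false predictions" risk concrete, here is a minimal, hypothetical sketch of an evasion attack in the spirit of the fast gradient sign method: a small, deliberately crafted perturbation flips the prediction of a toy logistic-regression model. The model, weights, and inputs are invented purely for illustration and are not taken from the BSI study.

```python
import numpy as np

# Hypothetical toy model: logistic regression with fixed weights.
# Purely illustrative; not code from the BSI/Apheris study.
w = np.array([2.0, -1.0])
b = 0.0

def predict_prob(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(x, eps):
    """Fast-gradient-sign-style perturbation: for logistic regression
    the logit's gradient w.r.t. x is simply w, so stepping against the
    sign of w (scaled by the current prediction's sign) pushes x
    toward the opposite class."""
    direction = -np.sign(w) * np.sign(w @ x + b)
    return x + eps * direction

x = np.array([1.0, 0.5])          # benign input, classified as class 1
x_adv = fgsm_perturb(x, eps=1.5)  # small crafted perturbation

print(predict_prob(x) > 0.5)      # original input: predicted class 1
print(predict_prob(x_adv) > 0.5)  # perturbed input: prediction flips
```

Against deep networks the same idea uses the gradient of the loss with respect to the input, but the mechanics are identical: a perturbation invisible in the data distribution is enough to change the model's output.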
Recent examples, such as the forthcoming EU AI Act and the US-EU AI agreement, mark the beginning of a wave of AI regulation.
But creating standards and regulations for AI isn’t easy. Striking the right balance between risk mitigation and the promotion of innovation is key. To create appropriate standards, risks need to be assessed for each AI system separately, taking into account the industry and the specific use cases. Despite progress in AI, there is no common understanding of its security implications. A look at the research community reveals conflicting findings on defence strategies and a lack of consistent benchmarks, which makes it difficult for organizations to navigate the field.
To help shed light on the complex issue of securing AI, Germany’s Federal Office for Information Security (BSI) launched the "Security of AI Systems: Fundamentals" project. The aim of this project is to explore the risks, assessment options and mitigation strategies in relation to AI systems. We are honoured that, as part of the Security of AI Systems project, the BSI has chosen Apheris to conduct the study on ML security. As ML security experts, we have investigated the following three main ML threat areas:
1. Transfer learning
2. Providing ML models to third parties
3. Sharing sensitive data with third parties
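The third threat area connects directly to the training-data leakage mentioned above: a model that has memorized its training records can betray them through its confidence scores, the basis of so-called membership inference attacks. The sketch below is a deliberately simplified, hypothetical illustration of that mechanic (a memorizing nearest-neighbour "model" and an invented confidence threshold); it is not an attack implementation from the study.

```python
import numpy as np

# Illustrative sketch of a confidence-based membership inference test.
# Model, data, and threshold are hypothetical, chosen only to show the
# mechanics; they do not come from the BSI/Apheris study.

rng = np.random.default_rng(0)
train = rng.normal(size=(20, 3))    # records the model was trained on
outside = rng.normal(size=(20, 3))  # records it has never seen

def confidence(model_train, x):
    """A memorizing model (1-nearest-neighbour) is maximally
    confident on points it stored during training."""
    d = np.min(np.linalg.norm(model_train - x, axis=1))
    return np.exp(-d)  # exactly 1.0 on memorized points

def infer_membership(model_train, x, threshold=0.99):
    """Guess 'member' when the model's confidence is suspiciously high."""
    return confidence(model_train, x) > threshold

members = sum(infer_membership(train, x) for x in train)
non_members = sum(infer_membership(train, x) for x in outside)
print(members, non_members)  # the attack separates the two groups
```

Real models do not memorize this crudely, but the same gap between confidence on seen and unseen records is what practical membership inference attacks exploit, which is why sharing models trained on sensitive data is itself a disclosure risk.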
For each threat area, we examined the risk profile of various attack vectors and the corresponding mitigation strategies, and we developed a set of practical recommendations for ML practitioners on how to build secure ML models. Our findings suggest that the attack surface can be significantly reduced by implementing processes that systematically identify and mitigate potential attack vectors: accurately identifying the most threatening vectors determines which mitigation measures need to be implemented. To help ML practitioners put these measures into practice, our ML security experts have developed practical frameworks that show how to implement and maintain different mitigation strategies.
In conclusion, the research conducted by the BSI and Apheris provides valuable insights into how organizations can adopt AI technologies while ensuring robust security measures are in place. By proactively assessing the potential risks associated with their AI, organizations can develop appropriate safeguards to protect their data and IP from malicious use. In light of the coming regulations on the use of AI, we strongly recommend that any organization looking to use AI familiarize itself with these security measures: mastering secure AI adoption in the context of forthcoming regulation will determine organizations’ competitiveness.
Apheris is committed to delivering the most secure federated learning solution in the industry. Our team of developers and security experts has worked diligently to ensure that our platform, and our organization as a whole, meet the highest security standards, including our ISO27001 certification. That’s why we’re happy to contribute our knowledge to the BSI to help them assess the potential risks associated with ML systems.