Challenge objective
The aim is to provide approaches that allow artificial intelligence users, solution providers and other stakeholders to keep control over the confidentiality of their data, information and knowledge.
The issues
Many artificial intelligence solutions require large amounts of data to deliver accurate results. However, companies and organisations in most fields have traditionally been cautious about sharing their data and information, originally to maintain secrecy and, more recently, to comply with data protection regulations. This often limits the datasets available for AI training to sizes far too small to quickly produce reliable solutions based on pre-trained AI models and algorithms. Europe and Wallonia remain particularly strong in B2B segments across application areas such as manufacturing and healthcare, where SMEs collaborate with large companies to deliver innovative solutions. In the age of AI, however, the lack of access to large datasets could significantly affect the commercial viability of these European and Walloon companies. It is therefore crucial to create approaches that enable the rapid development of reliable AI solutions without compromising the confidentiality of data, intellectual property, or user consent regarding the use of data.
Challenges
When companies and organisations start to explore the path of AI, they quickly realise that it brings with it a number of challenges:
- Good-quality datasets are often too small for the development of reliable AI, and data sharing and pooling are difficult to implement in practice, particularly in the healthcare and manufacturing sectors, where Wallonia has world-leading companies;
- Data from different sources is often complex to align, and its interoperability is hard to establish;
- Access to data has become more difficult as organisations and individuals become increasingly aware of the value that can be extracted from their data;
- Beyond the confidentiality of data used during training, it is equally important to protect the input/output data fed into or generated by AI models and algorithms deployed in operations (i.e. to protect input/output data during inference, when trained AI models are used);
- AI models must also be protected against intellectual property theft, regardless of how the models/algorithms are integrated into a solution and the type of hardware infrastructure used to run it.
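To make the first and fourth challenges concrete, one widely discussed family of approaches is federated learning: each party trains on its own private data and only model parameters leave the premises, never the raw records. The following is a minimal sketch of federated averaging (FedAvg) for a one-variable linear model; the two "parties", their data and all names are invented for illustration, not part of any proposed solution.

```python
# Minimal federated averaging (FedAvg) sketch: raw data stays with each
# party; only model weights (w, b) are exchanged and averaged.
# All parties, data and hyperparameters below are hypothetical.

def local_sgd(weights, data, lr=0.1, epochs=20):
    """Train y = w*x + b on one party's private data via plain SGD."""
    w, b = weights
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return (w, b)

def fed_avg(global_weights, parties, rounds=50):
    """Each round: every party trains locally, the server averages weights."""
    w, b = global_weights
    for _ in range(rounds):
        updates = [local_sgd((w, b), data) for data in parties]
        w = sum(u[0] for u in updates) / len(updates)
        b = sum(u[1] for u in updates) / len(updates)
    return (w, b)

# Two parties each hold private samples of the same relation y = 2x + 1.
party_a = [(x, 2 * x + 1) for x in (0.1, 0.4, 0.7)]
party_b = [(x, 2 * x + 1) for x in (0.2, 0.5, 0.9)]

w, b = fed_avg((0.0, 0.0), [party_a, party_b])
print(f"w={w:.2f}, b={b:.2f}")  # converges toward w = 2, b = 1
```

This sketch addresses only the training-data challenge; protecting input/output data during inference and guarding the model itself against extraction would require additional techniques (e.g. trusted execution environments or encrypted inference), which this example does not cover.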