LiveLabs

This section lists the LiveLabs that are active within the project. For more information, please see the individual pages on the website:

LIVE LAB 1: Collaborative robotics and more naturalistic interactions
LIVE LAB 2: Supporting anomaly detection for human and technological tasks with early warning signals in manufacturing environments
LIVE LAB 3: Supporting decision making and alarm management in safety-critical control room scenarios
 
Ethical considerations for collaborative intelligence settings

The methods developed for this research aim to provide a multifaceted view of the ethical and legal considerations to bear in mind when setting up collaborative intelligence workstations. All too often, only a narrow set of views is represented when novel codes of ethics and pieces of legislation are drafted. The four research methods chosen here therefore complement one another in several respects, as they seek to represent a diversity of national contexts; sectors (from representatives of trade unions and EU institutions to professionals from the private sector and academia); roles in the workplace (such as employees, employers and external partners, e.g. advisors and consultants); and stages in the AI lifecycle (from researchers to consumer association representatives).

To best capture these points of view, the research will combine inputs from:

  1. Document Analysis. This includes gathering inputs from a wide range of frameworks crafted with the intention of governing AI research and development. Some of these have already been implemented, while others remain in the early stages of development or contribute to a more theoretical discussion. Examples include the Montreal Declaration for a Responsible Development of Artificial Intelligence (2018), UNESCO’s Recommendation on the Ethics of Artificial Intelligence (2021), the extensive work done by the Council of Europe (CoE) Ad hoc Committee on Artificial Intelligence (CAHAI, 2021), the Ethics Guidelines for Trustworthy AI by the EU High-Level Expert Group on Artificial Intelligence (HLEG AI, 2020), and the Data Ethics Decision Aid (DEDA) by the Utrecht Data School (Franzke et al. 2021).
  2. Focus Groups. Composed of representatives of European small and medium enterprises (SMEs) working in the ICT sector, these groups have been devised to help understand the concerns, challenges, capabilities, unmet needs and views held by European SMEs on emerging ethics codes and regulation.
  3. Semi-structured Interviews. Conducted from a more policy-oriented perspective, these involve professionals whose work includes reflecting on the regulatory and ethical implications of emerging technologies.
  4. Ethnographic Research. This kind of research can often provide unique insights that escape more traditional research methods. Conducting observations and taking detailed fieldnotes during on-site visits to partner organisations offers an entry point to the real-life or simulated environments where data-gathering equipment is tested and deployed. The aim is to understand how this equipment is currently used and thereby provide more tailored insights on how to improve the EU’s ethics and legislative framework on matters related to data, AI and privacy.