
Battaglia, Fiorella and Di Vetta, Giuseppe (2022): Technology to unlock the mind: citizen science and the sandbox approach for a new model of BCI governance. In: 2022 IEEE International Conference on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), Rome, Italy, 26-28 October 2022, pp. 563-567.

Full text not available on 'Open Access LMU'.

Abstract

The benefits and harms of Brain-Computer Interfaces (BCIs) deserve to be explored in depth. The fundamental conceptual, ethical and legal questions associated with BCI applications should be scrutinized, and the resulting insights leveraged to benefit both medicine and society. We will perform this exploration with two focuses: the first looks at the epistemic and ethical impact of the peer production of knowledge (citizen science), and the second at the legal criteria that should inform the introduction of the novel form of regulation envisioned by the sandbox approach [1]. A translational research approach requires fostering the co-creation of knowledge and therefore including the active participation of patients, their families, clinicians, healthy users and the public in the process of regulating the use of BCIs. Citizen science is emerging as an important policy orientation but is still largely unknown [2]. Users are holders of practical knowledge, which a translational approach should emphasize. There is a close connection between the emergence of a new model of BCI governance that takes epistemic injustice into account and the profound implications for science as a discipline, a profession and a practice foreseen by the policy orientation of citizen science [3]. Moreover, treating the user as a merely passive participant amounts to an injustice done to someone specifically in their capacity as a knower [4]. This part of the special session provides a state-of-the-art account of co-creation theory, which is the necessary premise for designing co-creation activities within the framework of the sandboxes. How is the co-creation of knowledge possible? Why does it matter?
These questions are central to the epistemology of co-creation and bear on a number of dimensions (implementation, benchmarking, and regulation), which are the specific themes of this special session. Under the European legal framework, all high-risk AI-informed devices must be tested for legal conformity; this test can often be performed by the provider itself. The Proposal for a Regulation of the European Parliament and of the Council encourages EU member states to create regulatory 'sandboxes', in which firms can try out novel services without fear of being hit by a penalty [5]. During the temporary suspension of the current regulations that would otherwise prevent the use of a BCI, all interested stakeholders are invited to participate in the experiment of testing the devices. This test has two kinds of requirements: the technical constraints that determine the feasibility of the devices, and the norms of the legal regulations. In between lies the role of patients, users, families and the public. Firstly, we will address the understanding of co-creation and give reasons for adopting a co-creation approach that go beyond the immediate evidence of benefit arising from engagement in participatory practices in the production of goods, services and knowledge. Such a theory can both explain and formulate the epistemic and ethical reasons behind these processes, in order to enhance well-functioning practices and avoid possible shortcomings in their implementation, especially in the last step, regulation. Secondly, we will discuss why a 'sandbox' should be considered the most efficient regulatory environment for allowing a real co-creation dynamic in BCI innovation, considering the strict liability regime foreseen by European AI regulation.
As is well known, a regulatory sandbox should be a safe space for both discovery and application, that is, for both BCI innovation and regulation. While the sandbox approach can thus be conceived as improving proactive publicness in tech science, significant criticisms have been raised. In discussing the legal implications of the sandbox regulatory approach, the paper will address many of these criticisms, especially regarding the consistency between the strict liability regime designed by EU regulators and a sandbox approach, and the conditions for legally safe operation. This form of adaptive governance needs further examination. What are the rules for experimentation? How should these rules be characterized? Are the envisioned rules instruments of soft law? Lastly, we will discuss the regulatory learning effect of the sandbox approach: could it be real? Examples will be drawn from FinTech regulation, where the sandbox approach has already been tried; that regulatory experience should be taken into consideration.
