Ethics of artificial intelligence and responsibility: addressing bias and transparency
Accountability is a central challenge in artificial intelligence because it is often unclear who should bear responsibility when an AI system fails or causes harm. The main problems include:
- Transparency: Many artificial intelligence models operate as "black boxes", making their decision-making processes difficult to trace.
- Legal responsibility: It remains ambiguous who bears liability – developers, organizations, or the artificial intelligence system itself.
For example, in the event of an autonomous vehicle accident, responsibility might be attributed to the manufacturers, the developers, or the regulators. This complexity makes the development of frameworks that establish clear accountability an urgent need.