Abstract
The law can be considered an important tool for addressing the risks of using artificial intelligence (AI). AI is defined in a variety of ways depending on the tasks it completes. Because AI leverages computing power to carry out tasks that people typically undertake, it is also frequently referred to as cognitive computing or machine learning. AI uses data perception and synthesis to replicate human thought processes, automate tasks, and make judgments. The use of AI is regulated by many laws and regulations aimed at protecting consumers, users, and society in general. The role of the law in addressing the risks of using AI covers many issues, among them maintaining privacy and security, maintaining fairness, civil and criminal liability, maintaining safety, and regulating the use of AI in business. Artificial intelligence in law firms has proven to be a golden ticket to increased productivity, improved decision-making, and higher competitiveness in the industry. The law sets rules that individuals and organizations must adhere to when using AI and ensures that these standards are strictly applied. Furthermore, the law helps promote transparency and accountability, as organizations must document AI usage processes and clarify how data and algorithms are used. This helps reduce the risk of discrimination and errors that can occur when using AI.
Keywords
Confidentiality, Legal Gaps, Transparency and Accountability
1. Introduction
Artificial intelligence (AI) is one of the most widespread modern technologies at the present time, and it is used in various fields such as medicine, commerce, education, security, government, and many other fields
[1-5]. Despite the many advantages that artificial intelligence provides, it also poses several risks and challenges.
In this context, the law plays an important role in addressing the risks of using artificial intelligence, as it can help define responsibilities related to the design, use, management, and control of various applications of artificial intelligence, ensuring freedom from harm and protecting the rights of individuals and institutions.
Among the matters addressed by laws related to artificial intelligence are the following:
Preserving privacy, which requires setting clear rules to protect personal data and maintaining confidentiality in the use of smart technologies and shared information. Technical designers and developers must adhere to standards of integrity and transparency in the design and operation of intelligent systems and cloud computing. In addition, the law plays an important role in governing the ethics that robots and intelligent systems must adhere to, which includes the ability to recognize legal errors and take action to correct them
[6, 7].
To ensure the quality of applications that use artificial intelligence technologies, standards must be defined to verify their safety and efficiency, and to ensure that they do not negatively impact individuals and communities. The necessary security practices must be followed to protect these applications from hacking and cyber attacks, and their compatibility with international standards and local rules and regulations must be verified.
These guarantees help increase confidence in the use of smart applications and artificial intelligence, and promote innovation and the development of new technologies in a safe and sustainable manner
[9].
2. The Problem of the Study
The problem of studying the role of law in addressing the risks of using artificial intelligence involves several important matters. There is a scarcity of laws and regulations at the international and local levels related to artificial intelligence, which makes it difficult to conduct a comprehensive study of this topic.
This reinforces the need to identify the fundamental issues that need to be covered and analyzed in the study of law and artificial intelligence.
The research problem is: how does the legislator address crimes committed through the use of artificial intelligence? This includes the provisions of criminal liability for artificial intelligence, as well as an analysis of the legal texts related to the subject of the study.
We also find that smart technologies and artificial intelligence are being developed rapidly and continuously, which means that laws and regulations must be updated regularly to keep pace with this development and ensure the effectiveness of the measures taken. It is important that studies are carried out periodically and regularly to keep knowledge of the latest developments in artificial intelligence technology and their impact on the law up to date. There are also many risks and challenges related to the use of smart technologies and artificial intelligence, which requires setting priorities and focusing on the main risks that need to be addressed; it is sometimes difficult to determine liability for these problems and damages. Therefore, it is necessary to establish clear legal mechanisms and procedures to determine responsibility in the event of any problems
[11].
Therefore, comprehensive and multifaceted studies of the role of law in addressing the risks of the use of smart technologies and artificial intelligence must be conducted, focusing on the main risks and ensuring that laws and regulations are regularly updated to keep pace with rapid technical developments.
3. The Importance of Studying
Studying the role of law in addressing the risks of using artificial intelligence is one of the most important studies in this field, because it helps determine preferences and priorities in developing and implementing laws and regulations related to artificial intelligence. The importance of this study includes the following:
The issue of artificial intelligence and its impact on society, the economy, and the environment requires a comprehensive and multifaceted study of the role of law in protecting consumers and communities and realizing the potential benefits of artificial intelligence technologies. This requires providing legal, financial, and regulatory security for companies and organizations that invest in this field, and achieving economic development and environmental sustainability
[12, 13].
For example, artificial intelligence techniques can be used to analyze data related to energy and natural resources and to provide effective recommendations for improving the efficiency of their use and reducing excess consumption. Artificial intelligence techniques can also be used to improve production, distribution, and logistics processes, improving the efficiency of resource use and reducing waste. To ensure sustainable development effectively, laws and regulations related to artificial intelligence must determine preferences and priorities in the use of these technologies and ensure that everyone's needs are met without harming the environment or the future. Therefore, studying the role of law in addressing the risks of using artificial intelligence plays a crucial role in achieving sustainable development in the future.
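As a toy illustration of the kind of data analysis mentioned above (not an actual AI system; the process names and consumption figures are entirely invented for the example), the following minimal Python sketch flags processes whose energy use exceeds a simple baseline and recommends them for an efficiency review.

```python
# Hypothetical monthly energy consumption per process, in kWh (invented values).
consumption = {
    "cooling":    12_400,
    "lighting":    3_100,
    "production": 28_900,
    "logistics":   9_700,
}

# A simple average is used here as the reference point; a real analysis would
# use a proper baseline model.
baseline = sum(consumption.values()) / len(consumption)

for process, kwh in sorted(consumption.items(), key=lambda item: item[1], reverse=True):
    if kwh > baseline:
        excess = kwh - baseline
        print(f"{process}: {kwh} kWh is {excess:.0f} kWh above average; review for efficiency gains")
```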
The importance of this study is also evident in defining artificial intelligence (AI) from a human rights (HR) perspective and in assessing the risks that artificial intelligence poses to human rights
[14]. The study also examines the suitability of the rules of international human rights law for protecting individuals from the risks of using artificial intelligence.
4. Study Methodology
Analysis of the role of law in addressing the risks of using artificial intelligence: This part analyzes the role of law in addressing the risks that may result from the use of artificial intelligence, and highlights the importance of determining responsibilities and appropriate compensation in the event of damages resulting from the use of artificial intelligence.
4.1. Division of Study
The study of the role of law in addressing the risks of using artificial intelligence can be divided into the following sections:
In this part, the concept of artificial intelligence is defined and its importance and applications in public life are explained. It also discusses the risks that can arise from the use of artificial intelligence and the need for legal rules and regulations to address these risks.
The existing laws and regulations in various countries and international organizations related to artificial intelligence are reviewed, and the extent of their application and effectiveness in addressing the risks of using artificial intelligence is explained.
4.2. Legal Gaps
In this part, the legal gaps in existing laws and regulations related to artificial intelligence are analyzed, and suggestions and solutions are provided to enhance the effectiveness of these laws and regulations in addressing the risks of using artificial intelligence.
In this part, a practical case study is conducted to illustrate the importance and effectiveness of laws and regulations in addressing the risks of using artificial intelligence. A case study can be used to show real-life examples of the negative impacts that the use of artificial intelligence can cause in the absence of legal measures to reduce them
[15].
5. Study Topics
A number of different themes can be presented about the role of law in addressing the risks of using artificial intelligence, and among these themes are:
5.1. The First Axis: Risk Analysis
Risk analysis is the process of identifying and evaluating potential risks associated with a particular technology or information system. This analysis aims to determine the extent of the impact that these risks could have on society, the environment, and individual rights
[16].
The law can help analyze the potential risks associated with the use of artificial intelligence and assess the extent of the impact it may have on society, the environment and individual rights, enabling the necessary measures to be taken to reduce these risks
[17].
When it comes to artificial intelligence technology, risk analysis becomes especially important. AI technology may involve potential risks, such as programming errors, over-reliance on the technology, or bias and discrimination emanating from the AI system. From this standpoint, the law can help in analyzing these risks and assessing the extent of the impact they may have on society, the environment, and individual rights.
Risk analysis in this context can include many aspects, such as assessing the impact of the use of artificial intelligence on the labor market and jobs, analyzing its impact on public health and the environment, or assessing its impact on individual rights and privacy. Risk analysis can help in taking the necessary measures to reduce these risks and in determining the actions that must be taken to reduce potential risks. Therefore, the law can play a vital role in maintaining the safety of the use of technology and reducing the negative impact that may occur
[18].
Problems facing risk analysis in addressing the risks of using artificial intelligence
Risk analysis is the process of identifying potential risks, assessing their impact and likelihood of occurrence, and taking action to reduce those risks. The main problems that the risk analysis process may face for the safe use of artificial intelligence are:
1. It may be difficult to identify all the potential risks associated with the use of artificial intelligence due to rapid changes in technology and new developments in the field of artificial intelligence.
2. Risk analysis can interfere with innovation and development in artificial intelligence, as it can limit experimentation and modifications to the technology.
3. Risk analysis specialists may face difficulty in verifying the accuracy of the data they rely on to assess potential risks, and there may be difficulty in accessing the data necessary for analysis.
4. The tests and evaluations necessary for risk analysis can be expensive and require a lot of effort and resources, and this may limit the ability of institutions and governments to fully implement them.
The law can play an important role in alleviating these problems and addressing the risks of using artificial intelligence, by establishing specific laws and regulations that ensure risk analysis and evaluation and taking the necessary measures to control potential risks, in addition to defining the responsibilities of different parties and ensuring transparency and accountability in the use of artificial intelligence
[19].
Proposed solutions for analyzing the risks associated with the use of artificial intelligence.
To analyze the risks associated with the safe use of artificial intelligence, some suggested solutions can be taken, including:
1. Determining the basic rules and guidelines that ensure the safe use of artificial intelligence, by establishing specific laws and regulations that define the responsibilities, limits, and basic requirements of institutions and companies that use the technology.
2. Analyzing risks periodically and updating the necessary assessments to reduce potential risks, by appointing a specialized team that works to analyze risks and develop solutions and procedures to deal with them.
3. Encouraging transparency and accountability in the use of artificial intelligence, by providing the necessary data to analyze and evaluate risks and providing the necessary information to users and consumers, in addition to determining responsibilities and appropriate penalties for parties that violate the applicable rules and regulations.
4. Encouraging cooperation and coordination between governments, institutions and various organizations to exchange experiences and information related to risk analysis and evaluation, and develop the necessary solutions to improve the use of artificial intelligence in a safe and effective way.
5. Providing training and awareness to users and consumers about best practices in using artificial intelligence safely, and providing the necessary tools and techniques to effectively analyze and evaluate risks.
5.2. The Second Axis: Determining Legal Responsibility
The law can help determine legal liability for the wrongful use of artificial intelligence and determine appropriate sanctions and punishments for individuals and companies that intentionally use it in violation of the laws. Determining legal responsibility in addressing the risks of using artificial intelligence is an important and vital topic in the era of technological innovation in which we live. Despite the benefits of artificial intelligence technology, its use in many fields can cause various legal risks and challenges
[20].
Determining legal liability in this regard requires careful consideration of the ethical and legal issues related to the design and use of AI technology, including privacy, intellectual property rights, security and public safety. Solving this issue requires broad collaboration between businesses, communities and governments to ensure that AI technology is used in a safe and effective manner. This can be achieved by developing a legal and ethical framework that provides clear guidance on how AI technology is used and the responsibility of those responsible for it. Other technologies such as machine learning, statistical analysis, and transparent artificial intelligence can also help achieve transparency, accountability, and fairness in the use of technology and mitigate potential risks.
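As one hedged illustration of how the documentation needed for transparency and accountability might look in practice, the Python sketch below records each automated decision together with the model version, the source of the input data, and the person responsible for review. The schema, field names, and example values are assumptions made for this illustration, not requirements drawn from any specific statute.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One entry in an AI decision audit log (illustrative schema)."""
    system_name: str
    model_version: str
    input_data_source: str
    decision: str
    responsible_reviewer: str
    timestamp: str

def log_decision(record: DecisionRecord, path: str = "ai_audit_log.jsonl") -> None:
    # Append the record as one JSON line so it can later be inspected
    # by internal auditors or regulators.
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    system_name="loan-screening-model",            # hypothetical system
    model_version="2.3.1",
    input_data_source="customer_applications_db",  # provenance of the input data
    decision="application referred to human review",
    responsible_reviewer="compliance@example.com",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```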
Legal responsibility in addressing risks using artificial intelligence
Legal responsibilities regarding the use of technology and artificial intelligence are numerous, and can be determined according to different aspects of use and application. Among the basic legal responsibilities are:
1. Developers’ responsibility: Developers bear the primary responsibility for the design and development of smart technologies and their applications, and must ensure that there are no errors or defects that lead to harming individuals or causing financial or moral damage.
2. Users’ responsibility: Users bear responsibility for determining how to use smart technologies and their applications, and must avoid using them in a way that conflicts with the law or ethics.
3. Operators’ responsibility: Operators bear responsibility for using smart technologies and their applications within a specific scope in accordance with laws and legislation, and must apply the necessary security measures to protect information and privacy.
4. Legislators’ responsibility: Legislators bear responsibility for setting laws and legislation that define the limits of the use of smart technologies and their applications, and identifying the risks and ethical issues associated with this use.
5. Monitors’ responsibility: Monitors bear responsibility for monitoring the use of smart technologies and their applications, and ensuring the implementation of laws and legislation related to them.
Challenges facing legal liability in addressing risks using artificial intelligence.
The legal responsibility to address the risks of using artificial intelligence faces many challenges
[21, 22], including:
1. Determining responsibility: It may be difficult to determine who is responsible in the event of an error or damage caused by the use of artificial intelligence, especially if several parties are involved in the process.
2. Transparency and comprehensibility: The processes used in artificial intelligence and the decisions made must be understandable and transparent to everyone, so that individuals can understand what is happening and take the necessary actions in the event of a problem.
3. Data control: Personal data must be protected, individuals must be provided with mechanisms to control the use of their data, and that data must be prevented from being used in illegal or unauthorized ways (a minimal illustrative sketch of such a mechanism follows this list).
4. Training and education: AI workers should be trained on the legal and ethical responsibility to use the technology, and enhance awareness among end users about their rights and responsibilities.
5. Continuous development: Laws and regulations must be reviewed and updated periodically to keep pace with technological developments and new challenges facing the legal responsibility in addressing the risks of using artificial intelligence.
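The following sketch illustrates one possible data-control mechanism of the kind mentioned in point 3 above: dropping fields the AI task does not need and replacing the direct identifier with a one-way pseudonym before processing. The field names, the salt, and the allowed-field list are hypothetical; a real deployment would follow the applicable data protection law rather than this toy example.

```python
import hashlib

ALLOWED_FIELDS = {"age", "region", "transaction_amount"}  # assumed to be the only fields the model needs
SALT = "replace-with-a-secret-salt"                        # placeholder, not a real secret

def pseudonymize(user_id: str) -> str:
    # One-way hash so records can still be linked without exposing the raw identifier.
    return hashlib.sha256((SALT + user_id).encode("utf-8")).hexdigest()[:16]

def minimize(record: dict) -> dict:
    """Keep only the fields the AI system needs and replace the identifier."""
    cleaned = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    cleaned["subject_ref"] = pseudonymize(record["user_id"])
    return cleaned

raw = {"user_id": "U-1001", "name": "Full Name", "age": 34,
       "region": "North", "transaction_amount": 250.0}
print(minimize(raw))  # the name is dropped; user_id is replaced by a pseudonym
```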
5.3. The Third Axis: Developing Artificial Intelligence Technology
The law can help develop AI technology safely and responsibly, setting the legal, ethical and technical standards needed to provide the best results. Artificial intelligence technology is the field of computing and programming development that aims to enable machines and systems with the ability to carry out tasks autonomously without human intervention. This technology relies on algorithms and mathematical models to analyze data, extract knowledge, and make decisions
[23]. These models are trained on data to identify patterns, make predictions, and answer various questions.
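A minimal sketch of what "training a model on data to identify patterns and make predictions" can look like in code, assuming the scikit-learn library is available; the toy data and the choice of logistic regression are purely illustrative and do not represent any particular system discussed in this study.

```python
from sklearn.linear_model import LogisticRegression

# Toy training data: [feature_1, feature_2] -> label (illustrative values only).
X_train = [[0.1, 0.2], [0.3, 0.1], [0.8, 0.9], [0.9, 0.7]]
y_train = [0, 0, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)          # the algorithm learns a pattern from the data
print(model.predict([[0.85, 0.8]]))  # the trained model makes a prediction for a new case
```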
Applications of AI span many fields, including robotics, autonomous vehicles, surgical robots, industrial process control systems, financial services, e-commerce, healthcare, and more. Artificial intelligence technology has witnessed tremendous development in recent years, and helps solve many challenges in various fields. It is expected that its use will increase in the future and bring about radical changes in many industries and fields. However, artificial intelligence developments face many challenges and risks, including the impact on the labor market, privacy, security, racial discrimination, etc., which requires taking the necessary measures to address these risks.
Artificial Intelligence technology includes a set of techniques and tools that allow machines and systems to learn, think, and make decisions independently and accurately, similar to the way humans learn and make decisions
[24]. Artificial intelligence technology relies on complex algorithms that allow machines to learn, adapt to the environment, and solve problems better.
Artificial intelligence technologies include many fields such as machine learning, deep learning, natural language processing, smart robots, and expert systems.
Artificial intelligence technologies are applied in many fields such as medicine, education, industry, financial services, etc.
Artificial intelligence technologies have been developing rapidly in recent years, and are widely used in areas such as big data analysis, image and sound analysis, speech recognition, intelligent medical consultations, and control of autonomous vehicles. However, the use of these technologies raises challenges and risks, such as the potential for error and inaccuracy, the difficulty of ethical control, and privacy and security issues. Therefore, the development of AI technologies requires defining stakeholder responsibilities and establishing strong legislation and policies to protect individuals and communities
[25].
6. Problems of Developing Artificial Intelligence Technology
The development of artificial intelligence technology faces many challenges and problems, including:
1. Lack of available data: Artificial intelligence technology relies on large amounts of data to train smart models. However, there is a paucity of data available in some areas, which hinders the development process.
2. Lack of trust in the system: Artificial intelligence faces the problem of trust in the system, as errors can occur in diagnosis, conclusion, or interactions with users, which leads to a negative impact on trust in the technology.
3. Security and privacy issues: Artificial intelligence technology faces great challenges with regard to security and privacy, as data can be used in illegal ways or the system can be hacked and data stolen.
4. Ethical issues: Artificial intelligence technology raises ethical issues such as bias and discrimination, as well as issues related to personal choice and freedom; these issues must be handled carefully to avoid a negative impact on society (a small illustrative bias check follows this list).
5. Cost: Artificial Intelligence technology can be expensive to develop and operate, and companies and institutions may be unable to bear the huge investment costs related to this technology.
6. Failure to learn: Intelligent models can have trouble testing and learning from mistakes, which negatively affects the quality of their learning.
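As a hedged illustration of how the bias issue in point 4 above could be examined, the sketch below computes a simple demographic-parity gap, i.e. the difference in favourable-decision rates between two groups. The decision data and the 0.1 tolerance are invented for the example and do not reflect any legal standard; real audits would use richer fairness metrics and legally defined thresholds.

```python
def positive_rate(decisions: list[int]) -> float:
    # Share of favourable (1) decisions in a group.
    return sum(decisions) / len(decisions)

# Hypothetical decisions (1 = approved, 0 = rejected) for two groups of applicants.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]
group_b = [1, 0, 0, 1, 0, 0, 1, 0]

gap = abs(positive_rate(group_a) - positive_rate(group_b))
print(f"Demographic parity gap: {gap:.2f}")

TOLERANCE = 0.1  # assumed threshold; an auditor or regulator would set the real one
if gap > TOLERANCE:
    print("Potential disparate impact: the system should be reviewed.")
```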
7. Solutions and Proposals for Developing Artificial Intelligence Technology
There are many solutions and proposals for developing artificial intelligence technology, including:
1. Focusing on deep learning: This is achieved by learning patterns and making predictions from available data, which leads to the development of accurate and effective models.
2. Improving data quality: Artificial intelligence technology depends on data for learning and analysis, and therefore it is important to improve the quality of the data used in this type of applications.
3. Enhancing transparency and accountability: It is important to enhance transparency in artificial intelligence technology operations and clarify how decisions are made, in addition to defining responsibilities and accountability for errors resulting from the use of this technology.
4. Enhancing security and privacy: Companies and organizations responsible for developing artificial intelligence technology must enhance security and privacy for users and protect their personal data from leakage or illegal exploitation.
5. Strengthening international cooperation: It is important to enhance international cooperation and joint work between governments, companies and international organizations to develop artificial intelligence technology in a sustainable and effective manner, while preserving moral values and human rights.
6. Developing appropriate laws and legislation: Appropriate laws and legislation must be developed to protect individuals and communities from the potential risks of artificial intelligence technology.
A comparison of the role of different countries in confronting the risks of using artificial intelligence
The role of countries in addressing the risks of using artificial intelligence varies, and a comparison can be made between different countries in this regard. Here are some points that can be concluded from a comparison between some countries:
1. China: China has one of the largest artificial intelligence industries in the world and is investing heavily in this field. The Chinese government is trying to control the use of technology to combat crime and terrorism, but this raises concerns about privacy and freedom of expression.
2. The United States: The United States has a large sector of companies that work in the field of artificial intelligence and use it in many fields, including defense, security, and health care. The US government is taking measures to regulate the use of technology and ensure privacy and security.
3. The European Union: The European Union focuses on protecting the privacy and basic rights of individuals, and many European countries have strict legislation on data protection. The European Union seeks to develop a unified legal framework for artificial intelligence that ensures control over the use of technology and protection of the basic rights of individuals.
4. Canada: Canada seeks to balance the use of technology in areas such as health, science, security, and protecting privacy and security. Canada has strong data protection and privacy legislation, and is developing a legal framework for artificial intelligence.
Although Canada does not have large technology and artificial intelligence development programs like China and the United States, it is working to create a legal framework that allows for a balance between technology development and maintaining privacy and security
[26]. Canada has strict legislation to protect personal data and privacy, and a Canadian Center for Artificial Intelligence has been established to focus on human and social research and applications. This focus on the ethical and social aspects of artificial intelligence is expected to strengthen Canada's position as a developed country in this field.
Some statistics on the role of law in addressing the risks of using artificial intelligence.
Some statistics regarding the role of law in addressing the risks of using artificial intelligence can be found through scientific reports and research in this field. Here are some important statistics:
According to the report “Modern AI Legislation: A Global Review” issued by the Brickman Center for Law and Technology, only 18 countries had developed AI-related legislation as of 2020. The United States of America leads in the number of such legislative instruments with 11, followed by China with 3.
The World Economic Forum’s “AI Regulatory Guidelines: An International Comparison” report noted that as of 2020, fewer than 30 countries had developed an AI regulatory framework.
A study conducted by Boston Counsel in the United States showed that 62% of American institutions that use artificial intelligence face difficulties in applying legislation and legal regulations to work related to artificial intelligence.
According to the report “Artificial Intelligence in Cybersecurity and Criminal Justice: Challenges and Opportunities” issued by the European Union, determining responsibilities in cases of crimes committed using artificial intelligence is considered a major challenge.
Artificial intelligence poses a major challenge in criminal law, as it requires determining who is responsible for the crime and whether the fault lies with the person using the AI technology or with the system itself.
The aforementioned European Union report indicated that determining liability requires establishing the extent of control the user had over the system and whether there was a failure or negligence in maintaining the integrity of the system and the data used in it. Determining responsibility also requires analyzing the available evidence and evaluating the methods and techniques used in the crime, a process that can be very difficult when complex technologies such as artificial intelligence are involved. Therefore, the law must evolve and adapt to new technologies such as artificial intelligence so that judicial authorities can accurately determine liability and administer justice in cases of crimes committed using this technology.
8. Main Results
There are many studies that discuss the role of law in addressing the risks of using artificial intelligence. Below we present some of the main results of these studies:
1. The law plays an important role in controlling the use of artificial intelligence, as companies and institutions must adhere to local and international legislation and regulations that regulate the use of artificial intelligence and determine the responsibility of the parties involved in the event of risks.
2. The law must be flexible enough to accommodate the evolving use of artificial intelligence and to ensure that legislation does not limit technological development.
3. The law must also set standards for applying artificial intelligence in commercial and industrial processes and services, such as ensuring fairness, protecting privacy and security, and other important matters.
4. Studies show that the law itself is not sufficient to address the risks of using artificial intelligence, as companies, institutions, and users must adopt other practices, such as assessing risks, applying appropriate internal policies, and cooperating with the competent authorities.
5. The ethical and social challenges associated with artificial intelligence are among the main risks, and the law must set ethical standards for the use of artificial intelligence and protect users and communities from any negative effects.
6. The law must work to enhance awareness about the dangers of using artificial intelligence and provide training and guidance to companies, institutions, and users on how to use it safely and effectively.
9. Conclusion
It can be concluded from studies related to the role of law in addressing the risks of the use of artificial intelligence that it is important and necessary to develop legislation and laws that guarantee the safe and effective use of artificial intelligence, and protect the rights, security, and privacy of affected individuals and communities. This requires setting standards and conditions for applying artificial intelligence, and defining appropriate responsibilities and penalties in the event of risks or violations. In addition, the law should promote awareness and provide training and guidance to individuals and organizations on how to use AI safely and effectively.
The study's recommendations on the role of law in addressing the risks of the use of artificial intelligence are very important to determine the steps that must be taken to protect users and communities from the potential risks that the use of technology can cause.
Among the main recommendations that can be achieved in this area are:
1. Developing legislation, laws and regulatory policies to reduce the risks that can be caused by the use of smart technology.
2. Ensuring cooperation between governments and technology companies to achieve the general goals of society and enhance transparency in the use of technology.
3. Encouraging scientific research and studies in this field to determine the potential risks of artificial intelligence and new technical developments and how to deal with them.
Abbreviations
LG: Legal Gaps
AI: Artificial Intelligence
RA: Risk Analysis
Conflicts of Interest
The authors declare no conflicts of interest.
References
[1] Ahmad, S. F., Rahmat, M. K., Mubarik, M. S., Alam, M. M., & Hyder, S. I. (2021). Artificial intelligence and its role in education. Sustainability, 13(22), 12902. https://doi.org/10.3390/su132212902
[2] Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., & Williams, M. D. (2021). Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management, 57, 101994. https://doi.org/10.1016/j.ijinfomgt.2019.08.002
[3] Park, C. W., Seo, S. W., Kang, N., Ko, B., Choi, B. W., Park, C. M., & Yoon, H. J. (2020). Artificial intelligence in health care: current applications and issues. Journal of Korean Medical Science, 35(42). https://doi.org/10.3346/jkms.2020.35.e379
[4] Lee, D., & Yoon, S. N. (2021). Application of artificial intelligence-based technologies in the healthcare industry: Opportunities and challenges. International Journal of Environmental Research and Public Health, 18(1), 271. https://doi.org/10.3390/ijerph18010271
[5] Manickam, P., Mariappan, S. A., Murugesan, S. M., Hansda, S., Kaushik, A., Shinde, R., & Thipperudraswamy, S. P. (2022). Artificial intelligence (AI) and internet of medical things (IoMT) assisted biomedical systems for intelligent healthcare. Biosensors, 12(8), 562. https://doi.org/10.3390/bios12080562
[6] Sumantri, V. K. (2019). Legal responsibility on errors of the artificial intelligence-based robots. Lentera Hukum, 6, 337.
[7] O'Sullivan, S., Nevejans, N., Allen, C., Blyth, A., Leonard, S., Pagallo, U., & Ashrafian, H. (2019). Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery. The International Journal of Medical Robotics and Computer Assisted Surgery, 15(1), e1968.
[8] Mullet, V., Sondi, P., & Ramat, E. (2021). A review of cybersecurity guidelines for manufacturing factories in industry 4.0. IEEE Access, 9, 23235-23263. https://doi.org/10.1109/ACCESS.2021.3056650
[9] Hasan, M. K., Habib, A. A., Shukur, Z., Ibrahim, F., Islam, S., & Razzaque, M. A. (2023). Review on cyber-physical and cyber-security system in smart grid: Standards, protocols, constraints, and recommendations. Journal of Network and Computer Applications, 209, 103540. https://doi.org/10.1016/j.jnca.2022.103540
[10] Gervais, M. (2012). Cyber attacks and the laws of war. Journal of Law & Cyber Warfare, 1(1), 8-98. https://doi.org/10.15779/Z38R66C
[11] Aslan, Ö., Aktuğ, S. S., Ozkan-Okay, M., Yilmaz, A. A., & Akin, E. (2023). A comprehensive review of cyber security vulnerabilities, threats, attacks, and solutions. Electronics, 12(6), 1333.
[12] Behailu, Y. (2023). The impact of artificial intelligence on society. International Research Journal of Modernization in Engineering, Technology and Science, 5(10), 3120-3125.
[13] Nabila, E. A., Santoso, S., Muhtadi, Y., & Tjahjono, B. (2021). Artificial intelligence robots and revolutionizing society in terms of technology, innovation, work and power. IAIC Transactions on Sustainable Digital Innovation (ITSDI), 3(1), 46-52.
[14] Kriebitz, A., & Lütge, C. (2020). Artificial intelligence and human rights: A business ethical assessment. Business and Human Rights Journal, 5(1), 84-104.
[15] Sakka, F., El Maknouzi, M. E. H., & Sadok, H. (2022). Human resource management in the era of artificial intelligence: future HR work practices, anticipated skill set, financial and legal implications. Academy of Strategic Management Journal, 21, 1-14.
[16] Wright, S. A., & Schultz, A. E. (2018). The rising tide of artificial intelligence and business automation: Developing an ethical framework. Business Horizons, 61(6), 823-832.
[17] Abe, O., & Eurallyah, A. J. (2021). Regulating artificial intelligence through a human rights-based approach in Africa. African Journal of Legal Studies, 14(4), 425-448.
[18] Huang, C., Zhang, Z., Mao, B., & Yao, X. (2022). An overview of artificial intelligence ethics. IEEE Transactions on Artificial Intelligence, 4(4), 799-819.
[19] Ashok, M., Madan, R., Joha, A., & Sivarajah, U. (2022). Ethical framework for Artificial Intelligence and Digital technologies. International Journal of Information Management, 62, 102433.
[20] Munoko, I., Brown-Liburd, H. L., & Vasarhelyi, M. (2020). The ethical implications of using artificial intelligence in auditing. Journal of Business Ethics, 167(2), 209-234.
[21] Cobbe, J., & Singh, J. (2021). Artificial intelligence as a service: Legal responsibilities, liabilities, and policy challenges. Computer Law & Security Review, 42, 105573.
[22] Scherer, M. U. (2015). Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harvard Journal of Law & Technology, 29, 353.
[23] Bashayreh, M., Sibai, F. N., & Tabbara, A. (2021). Artificial intelligence and legal liability: towards an international approach of proportional liability based on risk sharing. Information & Communications Technology Law, 30(2), 169-192.
[24] Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2019). Artificial intelligence for decision making in the era of Big Data - evolution, challenges and research agenda. International Journal of Information Management, 48, 63-71.
[25] Shukla Shubhendu, S., & Vijay, J. (2013). Applicability of artificial intelligence in different fields of life. International Journal of Scientific Engineering and Research, 1(1), 28-35.
[26] Huang, M. H., & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service Research, 21(2), 155-172.