By Confidence Mbang

Abstract

The rapid adoption and proliferation of Artificial Intelligence (AI) systems across Nigeria's sectors, such as healthcare, finance, transport, and workplaces, have raised questions about the determination of liability where AI-related harm occurs. Utilising the doctrinal methodology, this article adopts analytical and comparative methods in examining the adequacy of existing common law liability principles in addressing AI-related harms in Nigeria. It evaluates extant legislation relevant to AI, identifying critical gaps that current doctrine fails to bridge. The article finds that Nigerian law is ill-equipped to provide meaningful redress to victims of AI-caused harm. Drawing from selected jurisdictions, such as the European Union and the United Kingdom, the article maps the contours of the liability/responsibility vacuum and calls for urgent scholarly and legislative attention. It is intended as a forerunner contribution – diagnostic in purpose – to a conversation that Nigerian jurisprudence can no longer defer.

Keywords: Artificial Intelligence, Liability, Responsibility, Regulatory Gap, Emerging Technologies.

  1. Introduction

As an operational reality, AI is no longer a distant technological prospect for Nigeria. Algorithmic systems now perform a range of tasks cutting across diverse sectors: credit-scoring evaluation in fintech companies, diagnostic processes in healthcare institutions, decision-making in public administrative institutions, and even workplace functions in employment relations – underscoring AI's cyber-physical impacts. Together with other emerging technologies, AI has transformed public administration, social relations, and commercial practices. Generally, the defining features of AI, such as opacity, autonomy, and distributed agency, have made its regulation complicated.[1] 'Black box' machine learning models limit the ability of the courts to determine fault based on conventional evidence standards. As AI systems become more autonomous and capable of self-learning and decision-making, it becomes difficult to exercise control over them and to impose liability.[2] Moreover, the factors for determining liability in semi-autonomous and autonomous systems differ significantly,[3] as traditional models of responsibility based on human action, intent, and foreseeability – such as liability in tort,[4] contract,[5] administrative law, and criminal law[6] – are no longer adequate to regulate AI and determine its liability regime. These shortcomings are also evident in the Nigerian legal system, where cybersecurity threats and attacks have been amplified by AI systems and applications.[7]

In November 2023, the OECD launched its AI Incidents Monitor, which has recorded over 11,000 incidents and hazards to date – a figure that explains why compensating the harms that occur must be a primary concern if responsible AI and AI safety are to become a reality.[8] This is troubling for the most populous Black nation in Africa, which has positioned AI as a cornerstone of its National Digital Economy Policy and Strategy (NDEPS) 2020–2030. The federal government has, for its part, not relented in developing initiatives and programmes such as the 3 Million Technical Talent (3MTT) programme and AI Trust, amongst others. But beyond this expanding development and deployment, one under-examined issue remains: who is liable, and through what mechanisms can AI liability and responsibility be determined?

This issue is not merely academic but practical. As noted above, the features of AI strain traditional liability principles and expose regulatory lacunae. This article examines those strains. It maps the existing Nigerian legal framework against the distinctive liability challenges posed by AI systems, identifies the gaps that emerge from that mapping, and draws selectively on comparative developments – particularly the European Union's Artificial Intelligence Act and the United Kingdom's evolving regulatory posture – not to prescribe transplanted solutions, but to sharpen the diagnostic picture. The article is expressly descriptive in orientation: its contribution is to name and frame the problem with sufficient clarity that future scholarship, judicial reasoning, and legislative effort may proceed on firmer ground. Nigeria needs this conversation. This article is its forerunner.

  2. Understanding AI and its Liability Implications

Artificial Intelligence broadly refers to computational systems designed to undertake tasks that would ordinarily demand human cognition, such as reasoning, pattern recognition, decision-making, and natural language processing. Within this broad categorisation, machine learning and deep learning neural networks present the most acute liability challenges. Three features of AI create particular difficulties for liability doctrines and existing laws.

First, opacity. AI systems, especially deep neural networks, function as 'black boxes', producing outputs that are untraceable and unexplainable even to their designers, and not readily derivable from the inputted data. This directly undermines the fault-based liability regime, which demands that the claimant prove a breach of the standard of care – an exercise that is practically impossible when the system's reasoning is inscrutable.[9]

Second, autonomy. Unlike conventional software that executes pre-programmed instructions, AI exhibits a distinct degree of behavioural adaptability and independence that may lead to decisions or actions not reasonably anticipated by the designer or deployer.[10]

Third, distributed agency. Agency is distributed across multiple actors in the chain of development and deployment of AI systems. A typical AI deployment in Nigeria involves a developer who creates the underlying model, a vendor who adapts it for a specific application, an operator – such as a hospital or bank – who deploys it in a professional context, and an end-user who relies on its outputs. When harm occurs, the causal contribution of each actor is difficult to isolate, and responsibility may be diluted across the chain in ways that leave victims without a clear defendant. These features constitute the core liability challenge that must be confronted.

  3. The Current AI Regulatory Regime in Nigeria: Existing Policies, AI-related Legislation, and Traditional Common Law Doctrines

This section briefly analyses the existing policies and AI-related legislation in Nigeria. It also discusses the traditional common law doctrines and how they are strained when applied to AI-related harms and risks.

  • Existing Policies and AI-related Legislation in Nigeria

In Africa, Nigeria has been at the forefront of regulating AI and its liability implications. Although there is no specific AI legislation, tangible efforts have been made that signal the nation's interest. The Federal Government, through the Federal Ministry of Communications, Innovation, and Digital Economy (FMCIDE) – expanding on the National AI Policy (NAIP) outlined in its 2023 white paper – released the National AI Strategy (NAIS) 2024 as a central policy document and strategic blueprint designed to make Nigeria a global AI leader by 2030.[11] The NAIS identifies four broad categories of AI risk: economic, ethical, societal, and AI-model risks, and provides a roadmap for developing a robust framework to support the ethical and responsible use of AI.[12]

There is also the NITDA Draft Code of Practice for Artificial Intelligence, a NITDA-led regulatory instrument intended to provide sector-agnostic standards, risk-based obligations, and governance processes for the development, deployment, and use of AI systems in Nigeria. The draft aligns Nigeria's AI governance with national priorities (data protection, fundamental rights, cybersecurity) while proposing conformity, transparency, and accountability measures to mitigate harm from AI.[13]

There are also several legislative bills under review at the National Assembly. For instance, the Control of Usage of Artificial Intelligence Technology Bill (HB 942) seeks to create a unified legal framework for AI licensing and incident reporting, while the Bill for the Establishment of the National Institute for AI and Robotic Studies focuses on creating a national institute for research, standard-setting, and capacity building. This signifies a drift from a policy-driven regulatory pattern to a legislation-driven regulatory regime.

Aside from these, there are sector-specific regulations, guidelines, and policies, such as the Nigeria Data Protection Act General Application and Implementation Directive (NDPA-GAID), 2025, the NBA Guidelines for AI in the Legal Profession, 2024, the Code of Practice for Interactive Computer Service Platforms, and the National Digital Economy Policy and Strategy (NDEPS).

Nigeria has also attempted to regulate AI through reliance on existing legislation, such as the Nigeria Data Protection Act (NDPA), 2023, the Cybercrimes Act, 2015 (as amended, 2024), the Federal Competition and Consumer Protection Act (FCCPA), 2018, the Nigerian Communications Act, 2003, and the National Information Technology Development Agency (NITDA) Act, 2007, amongst others. The application of the foregoing legislation is usually imperfect and, many a time, leaves victims without compensation.

  • Traditional Common Law Doctrines in Nigeria
  1. Negligence

The doctrine of negligence, anchored in the foundational principle that a person must take reasonable care to avoid foreseeable harm to another,[14] is one of the doctrinal vehicles through which AI liability may be addressed in Nigeria. Renowned scholars such as Diamantis argue in its favour as the best model.[15] However, in Nigeria, the immediate difficulty with negligence as applied to AI is the breach inquiry. Establishing that a defendant's conduct fell below the standard of the reasonable person presupposes the defendant's conduct (the defendant here being the developer, deployer, designer, or user) rather than the system's output as the operative cause.

This difficulty is more apparent in fully autonomous systems without any human agency than in semi-autonomous systems, where human involvement can be identified. Yet even in semi-autonomous systems, identifying the precise human decision that constituted the breach is deeply problematic. Did the developer breach a duty by releasing an undertested model? Did the operator breach a duty by deploying it in a high-risk context without adequate oversight? Did the user breach a duty by over-relying on its outputs? Nigerian negligence law offers no principled mechanism for resolving these questions as applied to AI.

  2. Vicarious Liability and Agency

Vicarious liability – which renders an employer or principal liable for the torts of employees or agents acting within the scope of their authority – is another doctrinal vehicle through which operators might be held liable for the deployment of AI systems. While this doctrine is seriously considered even globally,[16] its application to AI is strained and imperfect, for it demands legal personhood and volitional conduct. An AI system is neither an employee nor an agent in any cognisable legal sense. Extending the doctrine by analogy would therefore require substantial judicial innovation that Nigerian courts have shown no appetite to undertake without legislative direction.

  3. Strict Liability

The rule in Rylands v Fletcher,[17] which imposes strict liability for the non-natural use of land and the escape of dangerous things, is recognised in Nigerian jurisprudence.[18] Scholars such as Wendehorst view strict liability as the best model for AI liability, given AI's peculiar features.[19] However, its application in the AI liability context is seriously strained, because the rule requires a physical 'escape' from land, a concept wholly inapt to the deployment of software systems across digital networks. While the basic case for applying the doctrine to high-risk AI systems rests on the rationale that those who introduce extraordinary risks should bear the consequences, there exists no deliberate judicial reasoning or statutory basis for such an extension.

  4. The Breakdown of Nigerian Law: Sectoral Illustrations

As a result of the rapid proliferation of AI across sectors, the doctrinal insufficiencies discussed above are not abstract; they are copiously manifest in the determination of responsibility. For instance, in the financial sector, fintech companies and commercial banks deploy algorithmic credit-scoring models that make or heavily influence lending decisions affecting millions of Nigerians. Where such a model generates a discriminatory or erroneous credit assessment, the affected individual faces formidable obstacles: the opacity of the algorithm makes it impossible to identify the precise input or weighting that produced the adverse outcome; the FCCPA's remedial provisions are ill-suited to algorithmic decision-making; and there is no data subject right to human review of automated decisions equivalent to that provided by the EU's General Data Protection Regulation. Even the extant data protection laws do not address automated decision-making liability with the specificity the problem demands.[20]

In the healthcare sector, AI-assisted diagnostic tools have been adopted in clinical practice, particularly in radiology and pathology. The National Health Act 2014 imposes professional obligations on healthcare providers,[21] and Nigeria's eHealth strategy anticipates expanded use of digital tools.[22] However, where harm occurs as a result of the use of AI systems, existing medical negligence doctrine – calibrated to assess the conduct of the individual practitioner – does not readily accommodate the apportionment of responsibility between the clinician's professional judgment and the algorithm's output. The developer, who may be far away from Nigeria, escapes liability owing to the absence of a comprehensive statutory product liability regime.

The situation is similar in the public administration sector, where AI is being adopted for service delivery, revenue administration, and regulatory compliance. Automated administrative decisions that are adverse to citizens may violate constitutionally guaranteed rights with no mechanism to address them. There is no statutory framework compelling agencies to disclose and explain the use and results of adverse automated decisions or outputs. Notwithstanding the maxim ubi jus ibi remedium,[23] the gap between constitutional rights and their remediation in the AI context remains acutely unaddressed.

Another area where doctrinal insufficiencies have had a massive impact is employment relations across all sectors, whether oil and gas, aviation, or agriculture. The adoption of cyber-physical AI systems, such as robots, in workplaces has altered the liability architecture.[24] Where, for instance, a robot kills a human worker in the same workplace department, who bears the liability? Beyond physical harms, where AI is used for employee monitoring, performance evaluation, and task allocation, leading to claims of intrusive surveillance, loss of privacy, and undue psychological stress, potentially violating statutory duties, how can liability be determined? These concerns remain largely unsettled, as there is no generally accepted doctrinal basis, nor any statutory instrument, governing them.

  5. Comparative Insights: Determining Nigeria’s Preparedness

At this juncture, a brief comparative analysis is necessary for completeness. The essence is not to advocate transplantation but to ground the article's analytical assessment. Given the distinctive nature of Nigeria's legal culture, institutional capacity, and developmental priorities, strict transplantation is neither desirable nor realistic.

This analysis illuminates questions that any serious AI liability framework must address and thereby sharpens the identification of what Nigeria currently lacks. The European Union's Artificial Intelligence Act (EU AI Act), which entered into force in 2024, represents the most comprehensive legislative attempt to date to regulate AI systems across their lifecycle.[25] Its risk-based architecture, which categorises AI applications according to the severity of their risk and imposes graduated obligations accordingly, offers Nigeria a useful model for charting a regulatory pathway. The EU AI Act's designation of certain applications as 'high-risk', including those used in credit assessment, healthcare, and public administration, maps precisely onto the sectors where Nigerian regulatory gaps are most acute.

The Act's requirements for transparency, accountability, human oversight, and technical documentation address opacity, even if not conclusively – a problem that undermines liability attribution in Nigeria. The United Kingdom, on the other hand, adopts a different, more permissive model: a principles-based, sector-focused approach that encourages innovation.[26] The UK's Law Commission has also engaged seriously with questions of liability for autonomous systems, particularly in the context of automated vehicles, offering analytical frameworks for attribution that may have broader applicability.[27] For Nigeria, the UK experience suggests that a principles-based approach relying on existing regulators is viable – but only if those regulators have the institutional capacity, technical expertise, and statutory mandate to exercise meaningful oversight. In the Nigerian context, those preconditions cannot presently be assumed.

Nigeria is also a major stakeholder in the African Union (AU), which birthed the AU's Continental Artificial Intelligence Strategy.[28] Even so, Nigeria's regulatory policies and existing AI-related laws remain imperfect attempts at regulating AI and its liability. The comparative picture, taken as a whole, reveals that Nigeria is not merely behind the legislative frontier; it has yet to identify the questions the frontier requires it to answer.

  6. Gaps Identified

The analysis in the preceding sections permits a clear synthesis. Nigerian law, as presently constituted, fails to provide adequate liability rules for AI-caused harm along three dimensions: doctrinal, structural, and institutional. The doctrinal gap is the most precise of these. Negligence doctrine cannot reliably identify the responsible human actor where harm results from autonomous algorithmic behaviour, and strict liability rules designed for physical escapes are inapt to digital systems. Product liability provisions were not drafted with AI software in mind, and vicarious liability concepts require the legal personality and volitional conduct that AI systems do not possess. The combined effect is that a victim of AI-caused harm in Nigeria faces a doctrinal landscape in which no single cause of action provides a reliable pathway to redress.

The structural gap concerns the absence of disclosure and transparency obligations. In jurisdictions with developed AI governance frameworks, liability is partly backstopped by requirements that developers and deployers document their systems' functioning, disclose their use to affected persons, and provide explanations for adverse automated decisions. Regrettably, none of these obligations appears in binding form in Nigeria. A claimant who suspects that an AI system caused their harm cannot compel disclosure of the system's design, training data, or operational parameters, and in the absence of such information, litigation is practically futile regardless of the doctrinal framework applied.

The institutional gap is far more sobering, as it exposes a further consideration that Nigeria must confront. Even assuming that Nigeria had developed a comprehensive legal framework, its efficacy would depend on courts with the technical literacy to assess AI evidence, regulators with the expertise to supervise compliance, and a legal profession with the capacity to advance and respond to AI-related claims. This is all the more so given the inadequacy of existing institutions – such as the National Information Technology Development Agency (NITDA), the Nigeria Data Protection Commission (NDPC), the National Centre for Artificial Intelligence and Robotics (NCAIR), and the Federal Ministry of Communications, Innovation, and Digital Economy (FMCIDE) – in addressing AI-caused harms.

There is, therefore, a grave need for investment in institutional infrastructure. It is instructive that the doctrinal reform conversation proceed in parallel with institutional capacity-building, not in sequence.

  7. Conclusion

This article has argued that Nigerian law is currently nascent and largely inadequate to address AI harms and risks. The doctrinal vehicles and the statutory/policy regulatory regime, while not without analytical resources, never contemplated the opacity, autonomy, and distributed agency that characterise modern AI systems, and they cannot be stretched or strained to provide reliable redress without significant doctrinal, structural, and institutional advancement. The urgency of addressing these gaps is not just jurisprudential but practical. As AI deployment in Nigeria deepens – across credit, healthcare, public administration, and sectors yet to emerge – the number of persons potentially harmed by algorithmic decisions is bound to rise sharply. Each such person who cannot obtain redress represents a failure not only of the legal system but of the social contract that law is meant to sustain.

Nigeria cannot afford to address this deficit after the harms have accumulated; it must begin the structural and legislative work immediately. The contribution of this article has been deliberately circumscribed. It does not propose a model statute, nor does it resolve the contested normative questions about how liability should ultimately be allocated in a mature AI governance framework. These pressing tasks are reserved for subsequent scholarship and legislative exercise. This article only establishes, and makes bold to declare – like the biblical mene mene tekel upharsin, a warning whose theme also echoes in the Holy Quran – the nature and extent of the problem that Nigerian law faces. The conversation must begin somewhere. It begins here, and this is the forerunner.

References

African Union, ‘Continental Artificial Intelligence Strategy’ (2024).

Lior, A, ‘AI Entities as AI Agents: Artificial Intelligence Liability and the AI Respondeat Superior Analogy’ Mitchell Hamline Law Review (2020) Vol. 46, Issue 5.

Araromi, MA, Oluwabiyi, AA, and Olaleke, AM, ‘Determination of Tort Liability in the Deployment of Artificial Intelligence Technology’ (Pol’y & Globalization 2024) 141 J.L.

Brynjolfsson, E, and McAfee, A, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (WW Norton & Company 2014).

Buiten, M, Streel, AD, and Peitz, M, ‘The law and economics of AI liability’ Computer Law & Security Review (2023) 48 105794.

Diamantis, ME, ‘Reasonable AI: A Negligence Standard’ Vanderbilt Law Review (2025) Vol. 78 Issue 2.

Donoghue v Stevenson [1932] AC 562.

EU AI Act, 2024.

Federal Ministry of Health, ‘Nigeria eHealth Strategy 2015–2020’.

Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015).

Görentaş, MB, ‘Analysis and Evaluation of the United Kingdom’s AI Regulation using Artificial Intelligence Methods’ Selçuk Law Review (2025) Vol. 33, Issue 3, 2051–2082.

Medical and Dental Practitioners Act Cap M8 LFN 2004.

Moch, E, ‘Liability Issues in the Context of Artificial Intelligence: Legal Challenges and Solutions for AI-Supported Decisions’ EAJLE [2024] 7.

NAIS 2024.

National Health Act, 2014.

Nigeria – AI Code of Practice (2025) <https://regulations.ai/regulations/RAI-NG-NA-DCPAIXX-2025> accessed 10 March 2026.

OECD AI Incidents Monitor (AIM) (OECD) <https://oecd.ai/en/incidents> accessed 10 March 2026.

Rylands v Fletcher (1868) LR 3 HL 330.

Sachoulidou, A, ‘AI Systems and Criminal Liability’ Oslo Law Review (2024) II No 1 – 2024.

Shrestha, S, ‘Nature, Nurture, or Neither?: Liability For Automated and Autonomous Artificial Intelligence Torts Based on Human Design and Influences’ (2021).

Silent Disruptors: How AI is Reshaping Nigeria’s Cyber Threats and Security (Wat3rs Cybersecurity, Nigeria, Policy and Governance 2025) <https://taxtech.com.ng/silent-disruptors-how-ai-is-reshaping-nigerias-cyber-threats-security/> accessed 10 March 2026.

Smakman, J, et al, ‘An Autonomy-Based Classification: AI Agents, Liability and Learnings from the UK Automated Vehicles Act’ 2nd Workshop on Regulatable ML at NeurIPS (2024).

Wendehorst, C, ‘Strict Liability for AI and other Emerging Technologies’ JETL (2020) 11(2): 150–180.

Yas, N, et al, ‘Civil Liability and Damage Arising from Artificial Intelligence’ Migration Letters (20) No 6, pp 1171–1187.

*Confidence Mbang, DIL, LL.B, BL, LL.M (Nile University of Nigeria) Abuja. Email address: mbangconfidence714@gmail.com.

[1] Enrico Moch, ‘Liability Issues in the Context of Artificial Intelligence: Legal Challenges and Solutions for AI-Supported Decisions’ EAJLE [2024] 7, 1, 214.

[2] Miriam Buiten, Alexandre de Streel, and Martin Peitz, ‘The law and economics of AI liability’ Computer Law & Security Review (2023) 48 105794, 1.

[3] Sahara Shrestha, ‘Nature, Nurture, or Neither?: Liability For Automated and Autonomous Artificial Intelligence Torts Based on Human Design and Influences’ (2021).

[4] Marcus Ayodeji Araromi, Adeola A. Oluwabiyi and Agboke Martins Olaleke, ‘Determination of Tort Liability in the Deployment of Artificial Intelligence Technology’ (Pol’y & Globalization 2024) 141 J.L. 29.

[5] Nadia Yas, et al, ‘Civil Liability and Damage Arising from Artificial Intelligence’ Migration Letters (20) No 6, pp 1171–1187.

[6] Athina Sachoulidou, ‘AI Systems and Criminal Liability’ Oslo Law Review (2024) II No 1 – 2024, p 1-10.

[7] Experts at Deloitte Nigeria unveiled a sobering forecast in early 2025, emphasising that AI-powered cyberattacks are not only intensifying but also automating the evolution of threats. See Silent Disruptors: How AI is Reshaping Nigeria’s Cyber Threats and Security (Wat3rs Cybersecurity, Nigeria, Policy and Governance 2025) <https://taxtech.com.ng/silent-disruptors-how-ai-is-reshaping-nigerias-cyber-threats-security/> accessed 10 March 2026.

[8] OECD AI Incidents Monitor (AIM) (OECD) <https://oecd.ai/en/incidents> accessed 10 March 2026.

[9] Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015).

[10] Enrico Moch (n1).

[11] NAIS 2024.

[12] It is widely understood to adopt the Identify, Assess, Mitigate, Monitor, and Review procedure stated in the National Institute of Standards and Technology Framework for AI Risk Management for mitigating AI risks.

[13] Nigeria – AI Code of Practice (2025) <https://regulations.ai/regulations/RAI-NG-NA-DCPAIXX-2025> accessed 10 March 2026.

[14] Donoghue v Stevenson [1932] AC 562.

[15] Mihailis E. Diamantis, ‘Reasonable AI: A Negligence Standard’ Vanderbilt Law Review (2025) Vol. 78 Issue 2.

[16] Anat Lior, ‘AI Entities as AI Agents: Artificial Intelligence Liability and the AI Respondeat Superior Analogy’ Mitchell Hamline Law Review (2020) Vol. 46, Issue 5.

[17] (1868) LR 3 HL 330.

[18] ibid

[19] Christiane Wendehorst, ‘Strict Liability for AI and other Emerging Technologies’ JETL (2020) 11(2): 150–180.

[20] Section 37 of the NDPA, 2023 only prohibits subjection to AI decision-making.

[21] National Health Act, 2014, s 20; see also Medical and Dental Practitioners Act Cap M8 LFN 2004.

[22] Federal Ministry of Health, ‘Nigeria eHealth Strategy 2015–2020’.

[23] It literally means ‘where there is a wrong, there is a remedy’.

[24] Erik Brynjolfsson and Andrew McAfee, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (WW Norton & Company 2014).

[25] The EU AI Act, 2024.

[26] M. Burak GÖRENTAŞ, ‘Analysis and Evaluation of the United Kingdom’s AI Regulation using Artificial Intelligence Methods’ Selçuk Law Review (2025) Vol. 33 Issue: 3, 2051 – 2082.

[27] Julia Smakman, et al, ‘An Autonomy-Based Classification: AI Agents, Liability and Learnings from the UK Automated Vehicles Act’ 2nd Workshop on Regulatable ML at NeurIPS (2024).

[28] African Union, ‘Continental Artificial Intelligence Strategy’ (2024).
