Ethical artificial intelligence starts before a model is ever built. How data is obtained, and who grants permission to use it, matter more than the algorithms that make the intelligence work.

Conversations about intelligence tend to center on algorithms, but data access and permissions carry most of the ethical weight. As artificial intelligence takes on a larger role, getting things right from the start means settling questions of access and permission before the first model is assembled.
Discussions of artificial intelligence usually begin with the models: how they are trained, whether they are biased, whether they can explain their outputs. Those questions matter, especially when model-driven decisions affect advice, strategy, and outcomes.

The truth is that many of the biggest problems with artificial intelligence arise before a model is ever chosen. They live in the data those systems draw on.
Before offering a single suggestion, an algorithm inherits a set of assumptions: who is allowed to see information, how that information is grouped, and what counts as relevant. Those assumptions come from decisions made long ago, often without automation in mind, and the algorithm has no way of knowing where they came from.
Ethical risk often enters through access rather than intelligence.
Most organizations did not create their data systems for automated reasoning. They built them to help people do their jobs. Files were shared to save time. Permissions were granted so work would move forward. Temporary access stayed in place because removing it felt risky or inconvenient.
Over time these systems drifted. Shared folders grew. Roles changed. Access rules did not always change with them. As long as systems kept running, these arrangements were treated as acceptable.
When intelligence tools are introduced, they do not question these structures. They rely on the permissions they are given and work within existing boundaries without trying to correct them.
An artificial intelligence system does not know whether access was intentional, outdated, or accidental. It simply follows the rules it inherits. If those rules are broad, the system sees more than intended.
This becomes a problem when systems surface information across datasets. Material can appear in contexts where it does not belong. Outputs can reflect information users did not realize was in scope. The system behaves as designed, but not as expected.
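To make that concrete, here is a minimal sketch in Python of how inherited permissions become model context. Everything in it is hypothetical (the `ACL` table, the repository names, the `visible_documents` helper); the point is that the retrieval layer asks only whether a grant exists, never whether it was intentional.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    repository: str
    text: str

# Hypothetical access-control list: principal -> repositories it may read.
# A broad, long-forgotten grant looks identical to a deliberate one.
ACL = {
    "ai-review-tool": {"active-matters", "archived-matters", "internal-notes"},
}

def visible_documents(principal: str, corpus: list[Document]) -> list[Document]:
    """Return every document the principal's grants allow it to read.

    This mirrors how most AI integrations behave: they filter by the
    permissions they inherit and nothing else. If the grant set is too
    broad, the tool's context is too broad.
    """
    allowed = ACL.get(principal, set())
    return [doc for doc in corpus if doc.repository in allowed]

corpus = [
    Document("d1", "active-matters", "Draft motion for the current case."),
    Document("d2", "archived-matters", "Notes from an unrelated 2019 matter."),
    Document("d3", "internal-notes", "Partner comments never meant for review."),
]

# Everything the grants allow becomes candidate model context,
# including material nobody intended the tool to see.
for doc in visible_documents("ai-review-tool", corpus):
    print(doc.doc_id, doc.repository)
```

Nothing in this sketch is buggy or malicious. The over-broad result comes entirely from the grant table it inherited.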
In legal practice, access decisions are ethical decisions. Confidentiality, privilege, and duty of care depend on controlling who can see sensitive information and when they can see it.
Many artificial intelligence deployments treat access as a technical detail rather than an ethical control. Tools are evaluated based on features while underlying permissions are left untouched.
Imagine a document review tool that summarizes case materials. If it has visibility into archived matters, internal notes, or unrelated client files, its outputs may reflect that exposure. No rule is broken. No alert is triggered. Yet the result is not what practitioners would consider appropriate.
Under the Model Rules of Professional Conduct, the American Bar Association has emphasized that professional competence includes understanding how technology affects confidentiality and client data, including the systems that store and process that information.
Artificial intelligence does not operate independently. It depends on the systems that support it.
Artificial intelligence tools are pattern engines: they look for relationships, rank relevance, and infer context. If access rules are loose or outdated, the system will amplify those conditions at scale.
This is why ethical issues involving artificial intelligence are often difficult to detect. There is no single failure. Instead, small mismatches accumulate over time. A summary includes information that feels out of place. A recommendation references material the user did not realize was visible.
From a governance perspective this is hard to address. Nothing clearly violates policy. The outcome simply feels wrong after the fact.
Ethical controls are operational controls.
The National Institute of Standards and Technology identifies least-privilege access, visibility, and activity logging as essential safeguards for managing risk. These controls are usually discussed in security contexts, but they are just as important for ethical use.

Knowing who has access to data, why that access exists, and how that data is used over time is not only a security concern. It is an ethical one.
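As a rough illustration of the logging half of that guidance, the sketch below records each read by an AI tool as a structured event. The `log_access` helper and its fields are assumptions, not a standard schema; a real deployment would ship these records to an audit store or SIEM.

```python
import json
import logging
from datetime import datetime, timezone

# Structured audit log: every read by the AI tool leaves a record
# a reviewer can later reconcile against the expected scope.
logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_access_audit")

def log_access(principal: str, doc_id: str, repository: str, purpose: str) -> None:
    """Record who read what, from which repository, and why.

    The 'purpose' field is the one security logs usually omit and
    ethical review usually needs.
    """
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "principal": principal,
        "document": doc_id,
        "repository": repository,
        "purpose": purpose,
    }))

log_access("ai-review-tool", "d1", "active-matters", "summarize case materials")
```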
When artificial intelligence is used for research, document review, or internal decision support, access reviews should be ongoing, not one-time checks.
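What "ongoing" can look like in practice is a small routine, run on a schedule, that compares current grants against current roles and flags anything stale. The grant records, role map, and one-year review window below are all hypothetical, but the shape of the check is the point.

```python
from datetime import date, timedelta

# Hypothetical grant records: (principal, repository, date granted).
GRANTS = [
    ("ai-review-tool", "active-matters", date(2025, 6, 1)),
    ("ai-review-tool", "archived-matters", date(2021, 3, 15)),  # long forgotten
    ("paralegal-7", "internal-notes", date(2020, 1, 9)),        # role has changed
]

# What each principal's current role actually requires.
REQUIRED_BY_ROLE = {
    "ai-review-tool": {"active-matters"},
    "paralegal-7": {"active-matters"},
}

REVIEW_WINDOW = timedelta(days=365)

def review_grants(today: date) -> list[str]:
    """Flag grants no current role requires, or that nobody has
    re-confirmed within the review window."""
    findings = []
    for principal, repo, granted_on in GRANTS:
        if repo not in REQUIRED_BY_ROLE.get(principal, set()):
            findings.append(f"{principal}: '{repo}' is not required by the current role")
        elif today - granted_on > REVIEW_WINDOW:
            findings.append(f"{principal}: '{repo}' grant unreviewed for over a year")
    return findings

for finding in review_grants(date(2025, 12, 1)):
    print(finding)
```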
Many organizations approach artificial intelligence governance from the top down. They define principles, publish policies, and request assurances from vendors.
What is often missing is a review of the systems those tools connect to. Ethical intent does not override inherited access. Policies do not change permissions. Vendors can only operate within what they are allowed to see.
If an artificial intelligence system connects to an environment without clear boundaries, model level safeguards will struggle to compensate.
This is why some artificial intelligence deployments stall or quietly disappoint. Leaders sense risk but cannot pinpoint the source. The problem is not the algorithm. It is the foundation. Ethical artificial intelligence programs do not need to begin with new tools or complex audits. They can start with basic questions:
- Which repositories are visible to AI-enabled systems today? (A short reporting sketch follows this list.)
- Do those access rules reflect current roles?
- Are activity logs enabled and reviewed?
- Can the organization explain how data boundaries are enforced?
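The first question rarely needs new tooling. Given any export of grants, a few lines produce the visibility report, as in this sketch (the grant table and system names are hypothetical):

```python
from collections import defaultdict

# Hypothetical grant table: which service accounts can read which repositories.
GRANTS = [
    ("ai-review-tool", "active-matters"),
    ("ai-review-tool", "archived-matters"),
    ("ai-research-assistant", "knowledge-base"),
    ("ai-research-assistant", "client-files"),
]

def visibility_report(grants: list[tuple[str, str]]) -> dict[str, set[str]]:
    """Answer the first question: which repositories are visible to
    each AI-enabled system today?"""
    report: defaultdict[str, set[str]] = defaultdict(set)
    for principal, repo in grants:
        report[principal].add(repo)
    return dict(report)

for system, repos in sorted(visibility_report(GRANTS).items()):
    print(f"{system}: {', '.join(sorted(repos))}")
```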
These questions are operational, but their implications are ethical.
Organizations that address these issues early tend to adopt artificial intelligence with more confidence. Their systems behave more predictably. Governance aligns more closely with reality.
Ethics in artificial intelligence is often treated as a future problem. In practice many of the hardest issues come from decisions made years earlier. Models do not create ethical risk by themselves; they reveal it. Before debating how intelligent systems should behave, organizations need to ensure the environments those systems rely on are worthy of that intelligence.
About the Author: Ariel Perez is the founder of AKAVEIL Technologies. He works with law firms to design and secure cloud systems, with a focus on access controls, identity management, and data governance. His work centers on how permissions, collaboration tools, and system design affect security, confidentiality, and ethical use of technology.