Technology products are not mere tools that exist in isolation and
derive their ethical valence solely from how they are used. Today's
technology affects the lives of its users in deep and meaningful ways,
and anyone who creates or operates a computer system must reckon with
how it impacts people: users, non-users, and the public at
large. Sometimes, this is a compliance imperative: data protection
laws around the world aim to protect citizens' interests in their
data, and other laws, such as anti-discrimination laws, provide
collective rights for protected groups. Often, though, the imperative
for responsibility comes from outside the law. Engineers and product
designers want to build systems that are ethical and do not harm their
users or others. A company's reputation will suffer if its products
are seen as harmful, invasive, or biased. Products that are unfair or
do not meet users' needs will fail in the market or leave business
opportunities untapped. This is especially true of data-driven
technologies such as data analytics, machine learning, and artificial
intelligence, as these technologies can easily reflect and amplify
existing unfairness in society.
Our Values
At Rocky Coast Research, we believe that:
- Anyone who develops or deploys computer systems must determine how
to do so responsibly.
- Technology can be used responsibly to improve the lives of users.
- Responsible uses of technology are consistent with successful
business.
- The long-term technology market will be won by the products that
best reflect human values.
- Supporting fairness, transparency, security, and privacy does
not make technology more expensive or less performant.
- New technical solutions can improve computer system governance
and responsibility without compromising other interests.
Components of Responsibility
Achieving responsibility requires being sensitive to a number of
important values simultaneously. When these values must be traded off
against each other or other values, it is important to do so
consciously and with purpose. Using our industry-leading methodology,
Rocky Coast Research can help you develop programs that turn these
considerations, and any others important to your business, into design
goals, and can audit your existing systems and practices for
consistency with your desired values.
Fairness
Despite popular claims regarding the objective nature of
mathematics and the impartiality of technical tools, computer systems
can behave in an unfair manner. Bias can even rise to the level of
illegal discrimination. Certain tools may also be more or less
bias-prone than others, and the choice of which approach to take to
building a system can be as important as determining what the system
does.
Those affected by computer systems, especially in historically
marginalized communities and vulnerable populations, desire that
systems treat them appropriately and fairly. Yet, there is no single
definition of fairness, and in many cases mathematical definitions of
fairness can be incompatible with each other. This presents a quandary
to organizations fielding information technologies, as those
organizations must determine what constitutes fairness (or at least
how to avoid unfairness) for any particular application and must
develop that requirement into a specification for their system by
engaging stakeholders from many constituencies. Making fair decisions
is often synonymous with making the best-informed decisions, and thus
the most useful decisions, so fairness may be a business
asset. Systems can also affect people who do not interact with them in
ways that implicate fairness. For example, if a software product
increases productivity but the gains in productivity are uneven across
user groups, it may exacerbate inequality.
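As a brief illustration, consider two widely used mathematical
fairness criteria: demographic parity (equal rates of positive
predictions across groups) and equal opportunity (equal true positive
rates across groups). The Python sketch below, using invented toy
data, shows how the same predictions can look fair under one criterion
and unfair under another.

    # Toy illustration with invented data: the same predictions can
    # satisfy one fairness criterion while violating another.

    # Each record: (group, true_label, predicted_label)
    records = [
        ("A", 1, 1), ("A", 1, 1), ("A", 0, 1), ("A", 0, 0),
        ("B", 1, 1), ("B", 1, 0), ("B", 0, 0), ("B", 0, 0),
    ]

    def positive_rate(group):
        # Demographic parity compares this rate across groups.
        rows = [r for r in records if r[0] == group]
        return sum(r[2] for r in rows) / len(rows)

    def true_positive_rate(group):
        # Equal opportunity compares this rate across groups.
        rows = [r for r in records if r[0] == group and r[1] == 1]
        return sum(r[2] for r in rows) / len(rows)

    for g in ("A", "B"):
        print(g, positive_rate(g), true_positive_rate(g))
    # A: positive rate 0.75, true positive rate 1.0
    # B: positive rate 0.25, true positive rate 0.5
    # Adjusting predictions to equalize one rate will generally change
    # the other, so the two criteria pull in different directions.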
Fortunately, cutting-edge technologies and processes can identify
and mitigate the effects of bias in computer systems. This can solve
compliance problems related to discrimination while simultaneously
reducing the reputational risk of products that appear to be
unfair. Rocky Coast Research can help you figure out how to identify
and avoid unfairness using these approaches.
Transparency
Users, whether lay or expert, are also concerned about
understanding the outcomes of computer systems and how they are
reached. How does the system make judgments about them or others? And
how can users or non-users understand what these systems are actually
doing? This information asymmetry at both ends of the IT pipeline
poses challenges for those seeking to interact with and interrogate
such systems. While computer systems are deterministic machines, with
each action reducible to clearly
understandable steps, the size and complexity of state-of-the-art
systems, especially artificial intelligence and machine learning
systems, often impedes a full understanding of what systems are
doing. Further, it can be extremely difficult to determine what rules
a complex computer system is using to make decisions simply by
examining it.
However, understandability can be an effective design goal. One
approach to improving understandability focuses on producing
explanations or interpretations of AI systems and their behaviors,
which are short human-understandable summaries of how and why the
system chose a particular behavior or gave a particular output. But on
their own, explanations may or may not provide the requisite
understanding. Rocky Coast Research can help you understand whether
explainable technologies can solve your problems without losing
performance, and can help you develop processes which complement the
value of explanations to achieve true transparency.
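As a simple sketch of what an explanation can look like (the feature
names and weights below are invented for illustration), a linear
scoring model can report each feature's contribution to its output as
a short human-readable summary:

    # Invented example: per-feature contributions of a linear model,
    # reported as a short summary of how a score was reached.

    weights = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}
    applicant = {"income": 1.2, "debt": 2.0, "years_employed": 0.5}

    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = sum(contributions.values())

    print(f"score = {score:+.2f}")
    for feature, value in sorted(contributions.items(),
                                 key=lambda kv: -abs(kv[1])):
        print(f"  {feature}: {value:+.2f}")
    # Reads as: "debt lowered the score by 1.60; income raised it by
    # 0.72." Explanation methods for complex models aim to approximate
    # this kind of attribution.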
Transparency means more than just being open about software and
data. Mere disclosure not only intrudes on normal operations and
intellectual property; it is often insufficient and unhelpful for the
purposes for which transparency is demanded. Decisions made during the
creation and deployment of computer systems matter as well, as does
the ability to reproduce the actions of a system after the fact. At
Rocky Coast Research, we believe that effective transparency can be
achieved while preserving intellectual property and business
confidentiality.
Accountability
It is often not enough to know how a system makes decisions. For
legal compliance and to engender trust with users, it is important to
be able to justify fully how any behavior of an AI system was
generated. Specifically, accountability wrestles with the question of
whether the behavior of a software system was proper and
intended. While transparency enables introspection into AI systems to
understand how and why they have certain behaviors and to interpret
certain decisions, to truly trust an AI system we must have evidence
that the explanations we receive about it are correct and correspond
to the system in use. This evidence supports accountability, the
property that the system's behavior can be fully justified and that
any action of the system can be reviewed for consistency with the
system's goals.
Evidence of the correct operation of an AI system might come from
something as simple as the generation and logging of metadata as the
system is trained or operated. The evidence necessary to support
review of any particular system will depend on that system's purpose,
deployment context, and who must review the evidence (the evidence
that convinces a system's developer or operator differs from what
convinces a regulator or the public). Ideally, AI systems can be made
accountable to the public by
way of trustworthy audits, careful impact assessments, and ongoing
interrogation.
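As a minimal sketch of what such metadata logging might look like (the
field names here are illustrative, not a standard), each training run
can be recorded with a fingerprint of its data and configuration:

    # Illustrative sketch: record enough metadata during training that
    # a reviewer can later tie a model back to the exact data and
    # configuration that produced it.

    import hashlib
    import json
    import time

    def fingerprint(path):
        # Hash the training data so the exact dataset is identifiable.
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def log_training_run(data_path, config, log_path="training_log.jsonl"):
        record = {
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "data_sha256": fingerprint(data_path),
            "config": config,  # hyperparameters, code version, and so on
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return record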
Reliability and Safety
Responsible systems can be trusted to work well. That
means they have been thoroughly tested and that they support an
argument for why they can be trusted. In safety-critical applications
such as aviation, systems are designed along with a safety
case, which is a body of evidence that should convince a
fair-minded but skeptical expert that the system is trustworthy. For
computer systems in all applications, we believe there should be an
analogous responsibility case that provides evidence of
trustworthiness for both experts and average users. Unless systems are
designed with the ability to supply such evidence, their controllers
should not be surprised when the trustworthiness of those systems is
questioned.
Security
Responsible software design and deployment requires attention to
the security of the information handled by that software. A robust
security program will consider when it is necessary to validate the
security of a piece of software and when to isolate the effects it can
have on the information it can access. Security is increasingly a legal
compliance requirement, as is the ability to identify security
violations so customers can be notified. Security breaches have cost
companies their reputations and have led to the resignations of
several CEOs across many industries.
Additionally, data science and machine learning present new and
difficult challenges for security teams. Many data analysis tools are
not built with security in mind and do not support authentication of
users or authorization of sensitive actions. Data-oriented systems can
suffer different types of attack, such as data poisoning and
adversarial interaction, which are not recognized as security issues
by traditional security tools and frameworks.
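As one simple (and deliberately incomplete) line of defense against
data poisoning, incoming training data can be screened against the
historical distribution before it reaches the model. The sketch below
uses an illustrative threshold; real screening would be tailored to
the data and threat model.

    # Illustrative sketch: flag incoming training values that sit far
    # outside the historical range for human review rather than letting
    # them silently enter the training set. Not a complete defense.

    from statistics import mean, stdev

    def flag_outliers(history, incoming, z_threshold=4.0):
        mu, sigma = mean(history), stdev(history)
        return [x for x in incoming if abs(x - mu) > z_threshold * sigma]

    history = [10.1, 9.8, 10.4, 10.0, 9.9, 10.2]
    print(flag_outliers(history, [10.3, 55.0]))  # flags [55.0]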
The computer security experts at Rocky Coast Research can help you
navigate the modern information security landscape, including helping
you secure your data analysis infrastructure.
Privacy
Beyond complying with data protection laws and privacy policies,
privacy-sensitive design and operation is a core part of using
technology responsibly and going to market effectively. Users are
increasingly wary of giving up their data, for fear that it may be
used in ways they do not expect and that affect them negatively. And
new data protection regimes such as the GDPR mandate
thinking deeply about how privacy fits into all computer systems.
New data-driven technologies such as data science, machine
learning, and artificial intelligence naturally demand large volumes
of data, creating a strong imperative for collecting data in almost
every context. However, new technologies are reducing the need for
such volumes of data, while other technologies are making storing and
querying such volumes of data safer and more private. Rocky Coast
Research can help your organization build a data governance strategy
that is responsive to concerns about values, contractual promises, and
legal obligations while still allowing for the development of advanced
data-driven technologies.
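One example of a technology that makes querying large data volumes
safer is differential privacy, which answers aggregate queries while
masking any individual's contribution. A minimal sketch (the privacy
parameter below is illustrative only):

    # Minimal sketch of a differentially private count: add calibrated
    # noise so that no single record can be confidently inferred from
    # the answer.

    import random

    def dp_count(values, predicate, epsilon=0.5):
        # Smaller epsilon means more noise and stronger privacy.
        true_count = sum(1 for v in values if predicate(v))
        # A count changes by at most 1 per person (sensitivity 1), so
        # Laplace noise with scale 1/epsilon suffices; the difference of
        # two exponential samples below is one Laplace sample.
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        return true_count + noise

    ages = [23, 35, 41, 29, 52, 61, 38]
    print(dp_count(ages, lambda a: a > 40))  # a noisy answer near 3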
Ethics
Hardly a day goes by without a news story about the need for
technology ethics. Applying ethical principles to computer systems
requires thinking carefully about the underlying technology and asking
detailed questions about how it relates to the world. Several
frameworks set out principles with which ethical systems should
align. However, they often offer little guidance on how these ideas
can actually be operationalized.
Rocky Coast Research can review your systems for ethical issues,
coordinate technical and program development with review by
professional ethicists, and help make abstract ethical principles
actionable in the context of your business.
Interested in learning more?
Contact us to discuss the issues your organization faces in thinking
about responsible technology!
Our Services
We work with our clients to achieve responsible software systems
without compromising business imperatives. Find out why software
responsibility is so important in today's technology marketplace and
learn how to avoid common system design pitfalls.
Learn more about our offerings!
Responsibility and the GDPR
Discover why the GDPR requires responsible software and data
practices and what you need to do to comply with this new law. Data
science and machine learning present new challenges not captured by
traditional privacy compliance approaches.
Solve for GDPR Compliance!