Kimberly Nevala

Kimberly Nevala is a strategic advisor at SAS (www.sas.com). She provides counsel on the strategic value and real-world realities of emerging advanced analytics and information trends to companies worldwide, and is currently focused on demystifying the business potential and practical implications of AI and machine learning.

Articles by Kimberly Nevala

Market research, strategic planning, and research and development (R&D) are commonplace components of business operations: all are ways of proactively researching and strategizing for the future. The exception is governance teams, who are far too often recipients of, rather than participants in, strategic planning. As a result, existing policies and practices quickly stagnate or deviate from current usage.

Posted October 10, 2023

Ah, those comprehensive, yet amazingly unclear terms and conditions (T&C). You know the ones. They include a minimum of 10 pages of scrolling text detailing the company's rights and obligations. Of course, the critical bits regarding your data or rights are beyond the point at which even young eyes go blurry.

Posted June 19, 2023

A short treatise on three mistakes organizations commonly make when designing or extending governance programs. Loosely inspired by discussions about (but not written by) ChatGPT. Decision rights—who needs to make what decisions—are the crux of governance. Success is not determined by the seniority of your governance council(s) or how many data stewards you have. Successful governance hinges on understanding how decisions are effectively made and made effective.

Posted February 16, 2023

Data mesh is all the rage. The objective? To eliminate artificial roadblocks and extend the means of data production across the enterprise—thereby expanding the scope of data products the organization generates. And, ultimately, increasing the value and use of data in decision making and operational practice.

Posted December 15, 2022

The information imbalance between purveyors of AI-enabled systems and their oft-unwitting subjects is profound. So much so that leading AI researchers point to this chasm as a critical ethics issue in and of itself. This is largely because public perceptions or, more accurately, misperceptions can enable (however unintentionally) the deployment of insidiously invasive or unsound AI applications.

Posted September 29, 2022

Questioning whether your governance efforts are merely inquisitive? Here are five signs.

Posted May 16, 2022

It is easy to attribute catastrophic outcomes and insidious, unintended side effects to failures of governance. Or, more often, to a lack of governance. In practice, however, all organizations are governed, either formally or informally. Formal governance involves discretely defined accountability and expectations encoded in principles, policies, and processes. Informally—and more influentially—organizations are governed by the behaviors and norms modeled and rewarded by their leadership and peers.

Posted April 01, 2022

Organizations, public and private, are codifying principles, regulations are emerging, and standards are proliferating.

Posted December 22, 2021

In the rush to bring AI and data solutions to bear, don't guess and don't just ask, "Why?"; also ask, "Why not?" Consider why this application might not be a good idea, may not lead to our intended outcome, might not be well-received, and might not safeguard human dignity and liberties.

Posted September 27, 2021

Deploying AI fairly, safely, and responsibly requires clarity about the risks and rewards of an imperfect solution, not the attainment of perfection. An AI algorithm will make mistakes. The error rate may be equal to or lower than that of a human. Regardless, until data perfectly representing every potential state—past, current, and future—exists, even a perfectly prescient algorithm will err. Given that neither perfect data nor perfect algorithms exist, the question isn't whether errors will happen but instead: When, under what conditions, and at what frequency are mistakes likely?

Posted May 26, 2021

After a wild and turbulent 2020, the new year has ushered in a renewed commitment to establishing or improving corporate governance. Yet, positive energy aside, our traditional approach to endorsing governance of data, analytics, or AI remains fraught. As a result, governance initiatives springing from an earnest desire to do right (e.g., responsible AI), as well as the need to not do wrong (e.g., regulatory/compliance), struggle to enlist broad coalitions of the willing.

Posted April 05, 2021

For ethics to take root, sustainable governance practices must be infused into the fabric of an organization's AI ecosystem.

Posted January 18, 2021

Never have charts and graphs been more prominent in the collective public consciousness. The increased focus on data-driven insights has, just as so much in life, been both positive and negative.

Posted September 14, 2020

It is a matter of when, not if, your organization will confront a never-before-seen data source—a source that, if managed improperly, could result in catastrophic consequences to your brand and bottom line. In some cases, that data will be imported from outside your four walls. In others, the data will spring from new business processes or the fertile minds of your employees manipulating existing assets to create altogether new analytic insights.

Posted May 19, 2020

To democratize data and analytics is to make them available to everyone. It is an admirable goal and one with its roots in the earliest days of the self-service movement. If an organization is to truly be data-driven, it follows that all key decisions—from tactical operational priorities to strategic vision—must be data-informed. So where is democratization going wrong?

Posted March 20, 2020

Opportunity and Threat: The Intersection of AI and Data Governance

Posted December 23, 2019
