Data Ethics Emerges as Key Theme at 7th Annual BDA EdCon

The Big Data and Analytics Education Conference (BDA EdCon), hosted by University of Maryland University College on June 3-4, 2019, brought together academics, educators and industry partners to exchange knowledge and ideas around some big themes, including the future of artificial intelligence (AI); the confluence of big data analytics, AI and cognitive computing; and how AI can be integrated into teaching and learning to meet industry demand.

During the two-day event, speakers presented new ideas, demonstrated innovative technical solutions and shared enlightening case studies. Several of them either focused on or circled back to one recurring theme: ethics.

David Cox, director of the MIT-IBM Watson AI Lab, opened the conference with a keynote discussion about the future and potential of AI, but he also voiced concern about AI’s darker side. “Today, there is not a game that machines are not better at,” he said.

Cox also stressed that AI systems can be hacked, with potentially extreme consequences. “We’re into a new era where every system developed by mankind can be hacked by mankind,” he said. “We’ve entered a strange and wonderful world of AI hacking that we’ll need to figure out.”

He wondered aloud about the consequences of a hack on a self-driving car. “Suppose that car should recognize a stop sign as something entirely different? Deep learning systems are amazing and powerful, but they are often not extracting the real structure of what’s there,” Cox said, adding that the future of AI depends on trust and transparency.
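
What Cox described is what machine learning researchers call an adversarial example: a tiny, deliberately crafted perturbation that flips a model’s prediction even though the input looks essentially unchanged. The sketch below is illustrative only, not anything presented at the conference; the linear “classifier,” weights and numbers are all invented, but the fast-gradient trick is the same idea used against deep networks.

```python
import numpy as np

# Toy linear classifier: a positive score means "stop sign".
# Purely illustrative; real attacks target deep networks, but the
# fast-gradient idea below is the same.
rng = np.random.default_rng(0)
w = rng.normal(size=100)            # model weights
x = 0.5 * w / np.linalg.norm(w)     # an input the model labels "stop sign"

def predict(v):
    return "stop sign" if w @ v > 0 else "something entirely different"

print(predict(x))                   # stop sign

# Nudge every component a small step against the gradient of the score.
# For a linear score w @ x, that gradient is simply w.
epsilon = 0.1
x_adv = x - epsilon * np.sign(w)

print(np.max(np.abs(x_adv - x)))    # each component moved by only 0.1
print(predict(x_adv))               # something entirely different
```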

“Policymakers have to be engaged to prevent things like job displacement and bias. We will need to legislate and direct the technology properly because AI can also be used to take away privacy and liberties,” he said.

In a session titled “Ethical and Responsible Use of Data,” Natalie Evans Harris, COO of BrightHive, concurred that biased data can lead to biased results. “If you’re basing your AI on biased data, then you’re not solving a problem, you’re perpetuating a problem,” she said.
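
Her point is easy to demonstrate in code. The sketch below is a hypothetical illustration, not anything BrightHive built: the synthetic “hiring” data, the group penalty and all the numbers are invented. A model trained on historically biased labels learns to penalize group membership itself, so two equally skilled candidates receive different scores.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

# Synthetic "hiring" data: skill is identically distributed across two
# groups, but the historical labels were biased against group 1.
group = rng.integers(0, 2, size=n)
skill = rng.normal(size=n)
biased_label = (skill + rng.normal(scale=0.5, size=n) - 0.8 * group > 0).astype(int)

# Train on the biased history, with group membership visible as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, biased_label)

# Same skill, different group: the model reproduces the historical bias.
test = np.array([[0.5, 0.0], [0.5, 1.0]])
print(model.predict_proba(test)[:, 1])   # group 1 scores markedly lower
```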

So, how do we build trust with AI? According to Evans Harris, we do so through public-private collaboration and investment in a future infrastructure that is open, collaborative, vendor-neutral and standards-based. The Global Data Ethics Project, a framework and set of principles co-created by Evans Harris for data practitioners, endeavors to do just that: to define responsible use of data through a set of established criteria.

Evans Harris said that she does see progress. “Today, we’re starting to see IEEE and other organizations setting up standards for data collection and use.” She added that she remains hopeful that eventually we will see a rating system for data ethics “much like the LEED standard for green buildings.”

Former IBM executive Cortnie Abercrombie offered a crash course on data science ethics. Abercrombie, who founded the organization AI Truth to help business leaders better understand and engage in AI, said she largely based her ethics recommendations on her own experiences at IBM, where she observed “a lot of loose use of data on the client side.”

She advised the audience, made up primarily of data science students and professionals, to recognize red flags and prevent ethical crises before they arise. “Catch and fix issues before they become a brand issue,” Abercrombie said, reminding them that the stakes are high when companies misuse data, with consequences for safety, rights and liberties, jobs, privacy, bias and accountability.

“Pay attention to the use of data and question everything,” she advised. Abercrombie’s checklist of questions includes: What is the upside of the AI solution? What are the stakes? Can anyone be harmed? What are the contractual terms? Who owns the product? Can you withdraw it from the market? How can continual feedback loops be included?

Abercrombie also offered recommendations for combating bias. Notably, she said that formal processes should be established to fully vet AI projects with internal and external review boards, and to evaluate the impact of false negatives and false positives against the goals of a product.
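
That last recommendation can be made concrete with a few lines of code. The sketch below is a hypothetical illustration with invented scores and costs, not a method Abercrombie presented: when a false negative (a missed harm) costs far more than a false positive (a false alarm), the decision threshold that minimizes expected harm moves accordingly.

```python
import numpy as np

# Illustrative only: 1,000 synthetic cases where the model's score
# tracks the true probability of harm.
rng = np.random.default_rng(7)
scores = rng.uniform(size=1000)
truth = rng.uniform(size=1000) < scores   # higher score, likelier harm

def expected_cost(threshold, fn_cost, fp_cost):
    pred = scores >= threshold
    fn = np.sum(truth & ~pred)            # harmful cases the system misses
    fp = np.sum(~truth & pred)            # benign cases it wrongly flags
    return fn * fn_cost + fp * fp_cost

thresholds = np.linspace(0.05, 0.95, 19)
# If a miss is ten times as costly as a false alarm, the cheapest
# threshold lands far below the "obvious" 0.5.
best = min(thresholds, key=lambda t: expected_cost(t, fn_cost=10, fp_cost=1))
print(f"threshold that minimizes expected harm: {best:.2f}")
```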

She concluded by cautioning that AI applications often involve life-and-death stakes, such as self-driving cars or health applications that can result in the denial of insurance. As such, she stressed that processes must be fair and transparent and adhere to diversity best practices.

“For any project, ensure that affected audiences are represented in the decision-making process and always measure AI projects in non-monetary ways,” Abercrombie said.