Janet George, Western Digital | WiDS 2019
About This Video
Janet George, Fellow and Chief Data Officer/Scientist for Big Data and Cognitive Computing at Western Digital, sits down with Lisa Martin at Stanford University for WiDS 2019.
#WiDS2019 #WesternDigital #theCUBE
Q&A: How AI is cultivating a responsible community to better mankind
Artificial intelligence initiatives powered by big data are propelling businesses beyond the capacity of human labor. While AI tech offers an undeniable opportunity for innovation, it has also sparked a debate around potential misuse through the vast reach of programmed biases and other problematic behaviors.
The power of AI can be comprehensively harnessed for good by fostering diverse teams focused on ethical solutions and working in tandem with policymakers to ensure responsible scale, according to Janet George (pictured), fellow and chief data officer at WD, a Western Digital Company.
George spoke with Lisa Martin (@LisaMartinTV), host of theCUBE, SiliconANGLE Media's mobile livestreaming studio, during the Stanford Women in Data Science event in Stanford, California. They discussed the range of possibilities in AI and how WD is leveraging the technology toward sustainability.
[Editor's note: The following answers have been condensed for clarity.]
Tell us about Western Digital's continued sponsorship and what makes this important to you.
George: Western Digital has recently transformed itself … and we are a data-driven … data-infrastructure company. This momentum of AI is a foundational shift in the way we do business. Businesses are realizing that they're going to be in two categories, the 'have' and the 'have not.' In order to be in the have category, you have to embrace AI … data … [and] scale. You have to transform yourself to put yourself in a competitive position. That's why Western Digital is here.
How has Western Digital transformed to harness AI for good?
George: We are not just a company that focuses on business for AI. One of the initiatives we are doing is AI for Good and … Data for Good … working with the UN. We've been focusing on trying to figure out the data that impacts climate change: collecting data and providing infrastructure to store massive amounts of species data in the environment that we've never actually collected before. Climate change is a huge area for us, education … [and] diversity. We're using all of these areas as a launching pad for Data for Good and trying to use data … and AI to better mankind.
Now we have the data to put out massively predictive models that can help us understand what the change would look like 25 years from now and take corrective action. We know carbon emissions are causing very significant damage to our environment, and there's something we can do about it. Data is helping us do that. We have the infrastructure and economies of scale. We can build massive platforms that can store this data, and then we can analyze this data at scale. We have enough technology now to adapt to our ecosystem … and be better in the next 10 years.
What are your thoughts on data scientists taking something like a Hippocratic Oath to start owning accountability for the data that they're working with?
George: We need a diversity of data scientists to have multiple models that are completely diverse, and we have to be very responsible when we start to create. Creators have to be responsible for their creation. Where we get into tricky areas is when you are the human creator of an AI model, and the AI model has then self-created because it has self-learned. Who owns the copyright when AI becomes the creator? For the group of people responsible for creating the environment and the models, the question becomes how we protect the authors, the users, the producers, and the new creators of the original piece of art.
You can use the creation for good or bad. The creation recreates itself, like AI learning on its own with massive amounts of data, after an original data scientist has created the model. Laws have to change; policies have to change. Innovation has to continue, and at the same time, we have to be responsible about what we innovate.
Where are we as a society in starting to understand the different principles and practices that have to be implemented in order for proper management of data to enable innovation?
George: We're debating the issues. We're coming together as a community. We're having discussions with experts. What are we seeing as the longevity of that AI model in a business setting, in a non-business setting? How does the AI perform? We are now able to see the sustained performance of the AI model.
Watch the complete video interview below, and be sure to check out more of SiliconANGLE's and theCUBE's coverage of the Stanford Women in Data Science event.