Podcast: Bedtime rituals, recycling clothes and data bias in AI

September podcast

In this edition: How getting ready for bed is hard-wired, how clothing dyes can be recycled, and what we can do about data bias in AI.

Download the complete podcast (mp3)

News: Getting ready for bed – When mice are sleepy, they make a safe nest, and now researchers have discovered the brain wiring that controls this instinct both in mice and likely in ourselves.

Making fashion greener – We talk to the people behind DyeRecycle – an Imperial startup that uses a new chemical process to recover dyes and colours from waste textiles, vastly reducing the industry's water and energy use. The team recently secured an H&M Foundation Global Change Award.

Data bias in AI – We listen in to the Science Actually podcast as they chat about data bias in AI – asking whether we can eliminate biases, how much we should blame big tech, and what we can do about the issue.

(27 September 2023)

Transcript

Gareth Mitchell:               Hello everyone. I'm Gareth Mitchell with a quick news update in a sec. And then we're talking not fast fashion, but friendlier fashion when it comes to the environment as an Imperial startup wins the fashion industry's Nobel Prize for its novel approach. And biases in data, it's one of the big problems in AI. Should we blame big tech?

Dr Mark Kennedy:            Do I think that people when they built these algorithms and systems said, "I know what, wouldn't it be great if we could figure out how to create a bunch of hapless addicts to content that would promulgate a series of competing narratives about the way that society would look or should look that would end up tearing at the fabric of society and undermining the prospects of democracy?" No, I don't think that's what people set out to do, but it is what happened.

Gareth Mitchell:               All right. Well let's make a start and I hope you're well everybody. Thanks for being here and all that. So Hayley joins us right at the top of the podcast. Prime position, Hayley. My goodness. And can we just talk about probably the cutest story we've done on the podcast for some time about how we and our very good friends, the mice, know when it's bedtime. It's such a nice story.

Hayley Dunning:               Yes, this is the most adorable story I've ever had the privilege to write about. But it's also very interesting and important. So obviously, we all sleep, but how do we know when to get ready for bed? Researchers looked at mice because mice can be caught by predators, so they need to find somewhere safe and warm and cozy to go to bed. They obviously build little nests or burrows, but what is the brain wiring that makes them do that? So in this very sweet study, they kept some mice awake for a while by giving them Lego bricks or other toys every half an hour for five hours, so they didn't nap, and then observed their behavior when they finally did start to build their nests and go to sleep. They also looked at the brain wiring responsible for that.

                                             And what they found is that it's controlled by certain parts of the brain that are actually really important for survival. They're quite basic: hardwired instructions that you should prepare for bed this way. And that same brain wiring is more than likely responsible for the same behavior in humans as well.

Gareth Mitchell:               So it means when we just think, "Oh, it's bedtime now," really all we want to do is go and make a nest to get away from predators. Is that what this research is saying?

Hayley Dunning:               Yeah, it's saying that this is a hardwired thing. And I think we all subconsciously know that when we start to get sleepy, we start to get that: ‘Oh, I'd quite like to brush my teeth now and snuggle down and maybe get a book.’ But also, in our modern world, a lot of us do ignore or override those urges. We stay up late, we watch another Netflix episode, we take our phones to bed. We don't have what the experts call good sleep hygiene, which is making sure that our nest is perfect for us. So this does suggest that perhaps we should pay more attention to those urges.

Gareth Mitchell:               Oh, there you go. Well, for what it's worth, the last two nights, I've been going to bed really early and sleeping so well. Just solid eight hours right through to the morning. So maybe I've been-

Hayley Dunning:               So you've built a perfect nest.

Gareth Mitchell:               Absolutely. Bringing out my inner mouse there, definitely. But enough about me and my boring sleep hygiene habits. Hayley, thank you very much indeed for that. I enjoyed that. We now meet the Imperial-based startup that has big plans to make the fashion industry more environmentally friendly. Fashion is the second most polluting industry, after petrochemicals, responsible for 8% of CO2 emissions and 20% of wastewater. So enter DyeRecycle: they use textile waste to source and recycle dyes and colors, reducing energy use and cutting down on water. DyeRecycle’s process uses 85% fewer chemicals, 66% less water, and generates 75% fewer carbon emissions. It's all thanks to a chemical process developed here at Imperial College, and the industry is taking notice.

                                             Earlier this year, DyeRecycle secured an H&M Foundation Global Change Award considered to be the Nobel Prize for fashion. I recently caught up with DyeRecycle's Chief Scientific Officer, Professor Jason Hallett of Imperial's Chemical Engineering Department and CEO, Dr Aida Rafat. She's also a Royal Academy of Engineering fellow. Our chat began with fashion's water problem.

Dr Aida Rafat:                   Wet processing, which is basically the process where we colour and dye our clothing, is considered to be one of the most significantly polluting segments of the industry. Actually, about 50% of the industry's entire greenhouse gas emissions are considered to come from the wet processing part. And then there are incredibly large volumes of textile waste being generated, and at the moment the rate at which this textile waste is being recycled is incredibly low. Only about 1-2% of the entire world's textile waste is currently being recycled back into textiles, which basically means a lot of it ends up in landfill, incineration or downcycling. What we found is essentially two problems that are seemingly not connected and not related, but what we're trying to do is to connect those two problems together.

Gareth Mitchell:               Which brings us to your startup, DyeRecycle, and the process it's based on, which drastically reduces the amount of water and chemicals used, through some science, some technology, some chemistry that's been developed here at Imperial College. So tell me about this process.

Professor Jason...:           Yeah, I mean, as Aida mentioned, we're quite motivated by trying to clean up the textile industry, which is in an absolutely terrible state. We don't do well with plastics in general, but we're especially bad with clothing. And the biggest reason for that is because it's all colored, so it's difficult to take it back around that loop – you can really only make black polyester because of all the dyes that are present. And so our big aim was to do this without using any water and to preserve those colors. We needed to recycle the fiber, so it had to be quite a gentle process that didn't damage the fibers, so they could be reused afterwards. And then we really wanted to be able to reuse those colors.

                                             So we utilized a low-toxicity solvent. It's a liquid salt, or an ionic liquid, that's capable of swelling polyester fibers and removing the dyes from the polyester. And it can do this without damaging the fibers, so you can have an intact white polyester that comes out the other end ready for reforming into fabric.

Gareth Mitchell:               Just give us a sense of how effective this process is.

Dr Aida Rafat:                   So at the moment, we can achieve quite high decolorization for the polyester, anything between 85 and 95%. And at the same time, we were able to extract so many different colors, which essentially allows us to recycle them. And this is one of the most exciting value propositions that we have: the fact that we're recycling existing colors.

Gareth Mitchell:               So where are you at with this? I mean, is it just a lab thing at the moment or have you scaled this up in any way?

Dr Aida Rafat:                   It is mostly in the lab. We're looking to scale it up, and that's essentially what the H&M Global Change Award will make possible.

Gareth Mitchell:               Yeah. Well, let's talk about the H&M Global Change Award because it is a big deal. But with that has come some cash, hasn't it? So presumably, you're not investing it in a fancy car and driving off into the sunset. And how is that cash being used and then the mentoring around it to help accelerate your business?

Dr Aida Rafat:                   Yeah. So one of our key aims at the moment is to expand the team in order for us to be able to do a scale-up, and also to do small pilots with brands and retailers who are interested in seeing prototypes of the technology, so that we are looking closely at all the scale-up challenges that we could face.

Gareth Mitchell:               Yeah. Because there's already a very highly developed fashion manufacturing infrastructure, isn't there? And you're not going to replace all that overnight, but you've been thinking about that, haven't you?

Dr Aida Rafat:                   Yes, of course. So we don't anticipate that we'll need any specialized equipment in order to scale up the technology, which is something that is incredibly important. And from everything that we know so far, it looks like we really don't need any.

Gareth Mitchell:               So where do you see this going? This is a real job interview question here, right? But where do you see the business being in two or three years' time?

Professor Jason...:           Well, I mean this is where I'm supposed to say we're going to have completely taken over the entire textile industry.

Gareth Mitchell:               Correct answer.

Professor Jason...:           It's not true. What would I like to see? I would like to see one dye house. So I'd like to see us operating our own dye shop in a couple of years. And I keep joking with Aida, it's going to be Dyed by Aida. We're going to have our own.

Dr Aida Rafat:                   I think our own dye house is something. But obviously, in order to tackle the big problem, integrating with existing dye houses or existing manufacturers is something that we will be looking at as well, in order to really tackle the textile waste problem, in both pre- and post-consumer waste.

Professor Jason...:           Yeah, I always joke that with new technologies, you want to be disruptive in only one dimension. So we've developed a new dyeing process, or a new circular process for dyeing. But that means that absolutely everything else we do has to integrate with the industry perfectly. Aida mentioned that we're using the same dyes; we can do the same color matching, so we can provide a service that is extremely close to what the industry is used to.

Gareth Mitchell:               Jason Hallett and Aida Rafat of DyeRecycle, recipients of the prestigious H&M Foundation Global Change Award. Well now, for this month's pick of the pods from the Imperial College podcast directory. This time, it's Science Actually, supported by the Data Science Institute at Imperial. The pod’s all about tackling widely held misconceptions when straightforward answers are not always easy to come by. In the second season, one topic has been biases in AI: one episode looks at how biases impact legal, clinical, and educational practices, and a follow-up episode asks whether we can eliminate biases, how much we should blame big tech, and what we can do about the issue. Presenter Dr Ovidiu Serban from the Data Observatory team has two guests. First up, Dr Mark Kennedy, an associate professor at the Business School and co-director of the Data Science Institute.

Dr Mark Kennedy:            Well, I don't think you're going to find many people who would say that businesses should just be lazy or ignore these problems. On the other hand, the idea that business is the party we could turn to and say, "Can you please clean this all up?" is, I think, probably unrealistic. So I suppose I have a nuanced two-part message here, which is: businesses need to work very hard on this stuff, but businesses alone are not capable of eliminating biases from the systems that they build. The reason I say that business alone isn't enough – and I'm not saying business doesn't have to work hard – is that if there isn't some social consensus, developed either organically by social movements or codified into law, then business becomes an actor in an arena where you're just not sure what the rules are.

                                             And definitely, there are times where business should go ahead and take a position and lead and lean into these fights. But I do not think that business alone is the arbiter of these fights. I think that, in fact, there are many parties to them and that businesses should approach this process not thinking they control it by themselves.

Dr Ovidiu Serban:             And staying on the same line of thought, how do we make sure that businesses do not abuse biases, even if this is part of their modeling strategy?

Dr Mark Kennedy:            Again... I'm going to sound like such a nerd here, but with abusing a bias, we have to be pretty clear about what we think that is. I personally would say that there's quite a bit of abusing of biases in the digital economy these days. A lot of the online content that we consume is curated and selected for us by algorithms that generally are designed to optimize for engagement. And so that's one of the reasons why we've seen such a flowering of alternative facts, conspiracy theories, and very questionable, clickbaity content that really ends up in a lot of people's feeds. I see some of this stuff, and it's not that I click a lot on it, but the algorithm says, "Okay, this is the kind of stuff that people really seem to jump on." Why is that? It's because any flavor of this stuff ends up being something that tells people what they want to hear or gives people a feeling that keeps them coming back.

                                             And a couple of the feelings that do that quite strongly are anger and fear. And the reason for that is that you get a dopamine hit from that. And so, it can be properly addictive. I asked this question of Anna Lembke at Stanford and she said, "Yeah, absolutely. That is something that will be addictive for people." So there is abuse already in the system. Why? Because the business is there to build a system that works for customers following some simple principles. Do I think that people, when they built these algorithms and systems said, "I know what, wouldn't it be great if we could figure out how to create a bunch of hapless addicts to content that would promulgate a series of competing narratives about the way that society would look or should look that would end up tearing at the fabric of society and undermining the prospects of democracy?"

                                             No, I don't think that's what people set out to do, but it is what happened, or is happening. And at least, this is what people are debating. You may say, "Mark, here you're overdramatizing." But there are certainly a lot of people who are very concerned about this. So how do we stop that abuse? On what the individual can do – it's a bit like prescribing how you could be healthy – individuals can be responsible for a healthy degree of change that does accumulate and aggregate to make a difference. I think leaving it to individuals alone, though, is probably not enough. I would suggest also that there's a role for activists and people who think about policy, whether in government or as social movement organizers: it's worth saying, "Hey, we should bring some awareness to these issues and suggest the kinds of things that we ought to be considering in terms of policy."

                                             And then in government, people who are elected officials or who are in professional positions looking over policy should, I think, be having a look at this. And the more that we debate it and think about it, the more likely we will be to limit the abuses that are there. So I think it would be a mistake to say, "I have found the bogeyman for all of this and it is tech company X, Y, Z." I also think it would be a mistake to say, "Oh, poor tech. Lay off. Give them a break." I think we have to apply pressure on this stuff. But again, it goes back to my first point, which is that it's about having a societal consensus on these things. Change from one party alone is really hard. So I think we have to engage that process whereby we set social norms that will prevent abuse. And by the way, I would suggest that stiff penalties for abuse might not be a bad thing.
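
To make the dynamic Kennedy describes a little more concrete, here is a small toy sketch in Python. To be clear, this is not any real platform's algorithm: the scoring model, the "emotion boost" values and the feed items below are all invented, purely to illustrate how ranking content by predicted engagement alone can push anger- and fear-laden items above more relevant material.

    # Toy sketch: a feed ranker that optimises purely for predicted
    # engagement. All numbers and items are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Item:
        title: str
        emotion: str          # dominant emotional tone of the content
        base_interest: float  # how relevant the item is to this user (0-1)

    # Toy premise: anger and fear reliably boost clicks, so a ranker
    # trained purely on engagement learns to favour them over merit.
    EMOTION_BOOST = {"anger": 0.35, "fear": 0.30, "joy": 0.10, "neutral": 0.0}

    def predicted_engagement(item: Item) -> float:
        """Score = relevance plus the learned 'outrage bonus'."""
        return item.base_interest + EMOTION_BOOST.get(item.emotion, 0.0)

    feed = [
        Item("Local council publishes annual budget", "neutral", 0.60),
        Item("You won't BELIEVE what they're hiding", "fear", 0.45),
        Item("Ten photos of very happy dogs", "joy", 0.50),
        Item("THEY are destroying everything you love", "anger", 0.40),
    ]

    # Ranking purely by predicted engagement puts the emotionally
    # charged items above the more relevant neutral one.
    for item in sorted(feed, key=predicted_engagement, reverse=True):
        print(f"{predicted_engagement(item):.2f}  {item.title}")

In this toy feed, the fear and anger items come out on top despite being the least relevant, which is the pattern Kennedy is pointing at.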

Gareth Mitchell:               Mark Kennedy of the Data Science Institute and of the Imperial College Business School, raising the prospect of regulation there talking to the Science Actually podcast. Presenter Ovidiu also spoke to Professor Francesca Toni of the Department of Computing. Now, one of Professor Toni's titles is J.P. Morgan research chair in argumentation-based interactive explainable AI. And it's that term explainable AI that underpinned the conversation.

Prof Francesca Toni:        Yes. Actually, the fact that AI is biased is one of the main pushes towards the need for explainability and explainable AI. So a lot of AI nowadays is based upon data. And of course, data is collected by humans, it's about human behavior, and it incorporates any biases that humans may have. The resulting AI is by definition somewhat biased. To make sure that we can trust AI, we need to understand what it does. So explainability is all about leveraging abstractions of models that allow us to understand what the models are doing and why they are computing some outputs, thus unearthing any biases in the data. And some of the interesting work in explainability is trying to then leverage explanations to somewhat mitigate bias by assessing the model, using the information in the explanations.

Dr Ovidiu Serban:             But assuming that this is not a matter of cost, can we even eliminate the biases in our data?

Prof Francesca Toni:        I think that will be very difficult, and it'd be hard to prove that the data we use to build models is completely bias-free. Biases have a subtle way of getting through, a bit like water into a crack. Eliminating the bias in data would require a massive exercise whereby we try to prompt the humans involved in the data collection to go beside and beyond the data. There are lots of techniques that people use for curating data so that it is less biased, but in my mind, that will never fully eliminate bias. And even assuming that bias is the only problem you want to sort out by means of explainability – which it is not – fixing the data to eliminate the bias will not eliminate the need for explainability, in my mind.
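
As a rough, purely illustrative companion to Professor Toni's point about unearthing biases, here is a minimal sketch. It is not her group's argumentation-based method: it applies a generic permutation-style probe to an invented model, with hypothetical features and weights, to show how even a simple explanation technique can flag a model's hidden reliance on a proxy feature.

    # Minimal sketch: probing an invented model for hidden bias.
    # The model, weights and feature names are all hypothetical.
    import random

    random.seed(0)

    # Hypothetical screening model fit on biased historical data:
    # besides the legitimate 'skill' feature, it quietly leans on
    # 'postcode', standing in here for a proxy of a protected attribute.
    def model(skill: float, postcode: float) -> float:
        return 0.7 * skill + 0.4 * postcode  # illustrative learned weights

    applicants = [(random.random(), random.random()) for _ in range(1000)]

    # Permutation-style probe: scramble one feature across applicants and
    # measure how much individual predictions move. Large movement means
    # heavy reliance on that feature; an unbiased model should score
    # close to zero on 'postcode'.
    for name, idx in (("skill", 0), ("postcode", 1)):
        shuffled = [row[idx] for row in applicants]
        random.shuffle(shuffled)
        total = 0.0
        for row, v in zip(applicants, shuffled):
            perturbed = (v, row[1]) if idx == 0 else (row[0], v)
            total += abs(model(*row) - model(*perturbed))
        print(f"reliance on {name}: {total / len(applicants):.3f}")

Scrambling a feature and watching how far the predictions move is one of the simplest explanation tools; the non-zero shift it reports for 'postcode' is exactly the kind of data bias Toni says explainability can surface.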

Gareth Mitchell:               Francesca Toni of the computing department on the Science Actually podcast. Now, that's just a taster, but the conversation continued with how Professor Toni's group's approach differs from other research. It's a fascinating listen. To hear that episode in full, and other editions, you can find Science Actually in our Imperial College podcast directory, on the Be Inspired pages, or a quick search will get you there. And that's it for this edition. It's been great having you along, as ever. We're on all your favorite podcatchers, and we even chop the Imperial College podcast into chapters if you don't want to listen to the whole thing. I'm Gareth Mitchell saying a big thank you for listening, and I'll be back in October. Wow, a new academic year. See you then.