What Microsoft Copilot Could Mean for the Future of Work
April 4, 2023


This week, we discuss Microsoft's announcement of Copilot, a generative AI assistant for Microsoft 365 apps and services, and its implications for the digital workspace.

Show Links
Follow us on Twitter: @thedwwpodcast 

Email us: podcast@digitalworkspace.works 

Visit us: www.digitalworkspace.works 

Subscribe to the podcast: click here
YouTube channel: click here

★ Support this podcast on Patreon ★

Transcript

Ryan Purvis 0:00
Hello, and welcome to the Digital Workspace Works Podcast. I'm Ryan Purvis, your host, supported by producer Heather Bicknell. In this series, you'll hear stories and opinions from experts in the field, stories from the front lines: the problems they face, how they solved them, and the areas they're focused on, from technology, people, and processes to the approaches they took, that will help you to get to grips with the digital workspace inner workings.

Heather Bicknell 0:21
Hey, hey. How are you? Yeah, good, thanks. Well, I was thinking this week that we've been talking so much about generative AI lately, we should probably take a break from it. And then of course, Microsoft announced Copilot, they did their big launch, and more news is coming out about people trying Google Bard. So I feel like we can't not talk about it. It's potentially one of the biggest technology advancements to happen in productivity tech in, you know, a long time.

Ryan Purvis 1:07
I'll give you a different point of view on it, which is one that I hadn't even thought of. If you're in the space and you don't know what ChatGPT is, you'd be laughed at, because everyone in tech knows. And even if you're not using it much, you've probably tried it once or twice. Some people, admittedly, are talking to people in different groups, trying to figure out the use cases for it, versus people like me, who use it all the time; I'm actually so reliant on it that I'm nervous not to have it, because it's been such a boon for me. But the perspective to give you is: what happens if you're not in this space, and you still don't understand all this stuff? What are you doing? I'm thinking about people I've chatted to who are still struggling with using Excel and PowerPoint and sending email, all those things we take for granted. And now you're going to have this other thing where you can just tell it you want a 10-slide presentation based on your Word document, and it's going to generate it for you. I mean, that's like magic.

Heather Bicknell 2:18
It is magic. I mean, Microsoft made this point: I think the average user uses, I don't know if it was 5% or 20%, of PowerPoint's capabilities. But if the AI is doing it for you, you don't need to know how to navigate the user experience or how to use the more complex parts, the power behind tools like PowerPoint or Excel. That skills gap is lessened if the AI can help you with it.

Ryan Purvis 2:50
Yeah, and I mean, that's the good thing for barrier to entry. The bad thing is, when it switches off, or it's not available, what are people going to do, because they don't understand how it works? So I think there's a level of: it's great to have these tools, but it's still important to understand the basics first. Same as you don't just get on a bike and ride it; you still have to have some training wheels, and learn your balance and all those things, so that you can do it properly. But here's a distinction. We're in Mossel Bay, which is like the old age home of South Africa, guys in their 70s, 80s, 90s. On average, none of them are even aware of WhatsApp, to a large extent. So chatting to them about how I use it, saying I just typed this question and here's this paragraph I got back, they can't believe that it's a machine doing that. They believe it because you tell them, but they can't fathom how it works. And that's the danger in some respects: you're going to go talk to a "person" at a bank that is now actually a chatbot whose language is indistinguishable from a human, because I think GPT would pass the Turing test. The Turing test today doesn't even factor in, because the questions and the answers are so good. The only thing it doesn't do is spontaneity, which will be difficult for an AI to deal with, and it doesn't ask you questions back. It tries to answer what you've asked, rather than asking why you asked that question, and maybe interpreting the question with your body language and all that kind of stuff. But yeah, it's a scary step forward in some respects, because I think you leave a lot of people behind.

Heather Bicknell 4:43
Yeah, I think you bring up a really good point about whether people will even realise that they're talking with AI anymore. I think that's where some of the AI ethics piece comes in: how do we use AI responsibly, how do we control it on the internet? You think about things like deepfake images, or the ability to make a video recording of a politician basically saying whatever at this point, and you can just spread that on Facebook or wherever, and people will believe it if they're not aware of how far technology has come, if they're not thinking critically. So there's definitely a danger there, for sure.

Ryan Purvis 5:29
Yeah, I mean, some of the images that have been generated... I've seen a couple of things on Facebook and LinkedIn, fake people being generated for modelling. If you think about the sort of memes and all that kind of stuff that we created, someone sat and made those; now something's going to generate them. So the volume of data being created will increase exponentially now, because you don't have to wait for someone to draw it. You can tell the AI to draw it, it can generate it, it can come back to you. It doesn't have to be perfect, but it's enough that you will probably accept it as is. Yeah, that's frightening. And this version four that's coming out of GPT, I was talking to someone about it this morning. He said he'd written a piece of code one way, gave the same code to the engine and asked it to give it back in a different format, and it turned the whole thing around in 13 seconds. That would have been a one to two week thing for a junior or mid-level dev, and then you'd run into quality problems; this thing wrote it perfectly. It just completely changes so many things. It's scary. The number of startups that are starting just because this thing exists is also crazy, because you can just do it now.

Heather Bicknell 6:44
And now we're definitely going to go through this period where a lot of startups are trying to incorporate it into their technology, or people are starting new startups around it. It's so early, it's really hard to predict: is this really going to give the transformation we expect? How should we as workers even approach it? Because I feel like there is a job security risk there as well, in terms of, maybe companies don't need to downsize, but maybe they don't need to hire as readily if people can produce exponentially more output with the use of these tools. So ultimately, it could unlock something like a four-day workweek, and we could all use tools like this to be a little bit more productive and still have that opportunity to get something like a better work-life balance. Or it could not; it could mean that we're just doing more and more and more and not really benefiting from it at large as a working class, I guess.

Ryan Purvis 7:50
Well, I mean, I think those personal rules still have to apply. You still have to learn, if you don't know how to do it yet, how to say no to extra work, how to plan your day, how to find balance, how to work in a healthy way. Those things hold regardless of what technology is available. This goes back to the Industrial Revolution and all those shifts where technology came in and changed things, from the horse and carriage to cars, to trains, to factories, to whatever. As you went along, there was a need to manage yourself, and obviously, as things modernised, human rights became more front and centre and all those things. Having an AI that can help you do your job, your copywriting, to write some text: why not have it generated by an AI while you fact-check and reword certain things, so it's more your way of saying something, but it's just generating what you need? Because most of the stuff, especially if you're putting stuff in the public domain, is in their language model, so you can tell it to write it like you would write it. In fact, the test I still want to do, which I was thinking about in the shower this morning, is to ask it if it knows who I am, and if it could write content in my voice, and see if that would work. Because I've got public content out there, you've probably got content out there; it would be quite funny if it could do that. Because then you could be rough and ready, just put some stuff down and ask it to rewrite it in your voice, and all of a sudden you're actually generating your own content, just in a highly efficient way. And I think it will open the door to four-day workweeks or more results-orientated stuff.
And I think the biggest risk is the lack of skilled people. You know, I told you the story about the guy that drives me; he's got no skills and all the rest of it. That gap just gets bigger and bigger now, because you need to have some level of knowledge, some sort of skill, to understand what you're seeing when the text comes back to you from one of these engines in order to use it. If you don't have the fundamentals first, then that gap becomes even harder to close. But hopefully that's not the case, and hopefully this opens the door to people where they can ask for complicated topics to be explained simply, because now there's something that can do it for them. And if it's a multilingual model, having something in English explained in another language might also be helpful. So yeah, the potential is there; I just hope it's a good one.

Heather Bicknell 10:26
Definitely. I do think there's a funny potential for a sort of circular analysis that happens with all of this. We use tools like Microsoft Copilot to generate large amounts of text, right, and then we send it out in an email, and then someone receives the email, and they use Copilot to summarise the key bullet points, and on and on. So it's like you're only ever getting the regurgitation of the summary that you started with, is how it could work. And I don't know what some of the implications of that are. It's almost like a game of telephone, right? Is some of the context going to degrade as people, for the sake of efficiency, continue to try to never have to engage with large bodies of text?
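(Editor's aside: the "game of telephone" round-trip described here can be sketched in a few lines of Python. The summarize() function below is a hypothetical stand-in for an AI summariser that simply keeps the first half of the sentences; real models lose context in subtler ways, but the repeated summarise-and-forward loop has the same shape.)

```python
def summarize(text: str) -> str:
    """Toy stand-in for an AI summariser: keep only the first half of the sentences."""
    sentences = [s for s in text.split(". ") if s]
    keep = max(1, len(sentences) // 2)
    return ". ".join(sentences[:keep]).rstrip(".") + "."

email = ("The release slips to Friday. QA found a login bug. "
         "Docs still need a review pass.")
round1 = summarize(email)    # the recipient summarises the email
round2 = summarize(round1)   # their forward gets summarised again, and so on
print(round1)                # two of the three points are already gone
```

Each pass discards detail the next reader never sees, which is exactly the context degradation being discussed.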

Ryan Purvis 11:20
It's a very good point; I hadn't even thought about that as an example, but it's such a doable thing. So I'm on a few WhatsApp groups that have just been created, and people are writing text, and you're reading the text. When you're reading it, you're obviously thinking contextually about the person you're speaking to. But you don't know these people, because they're new, so you also don't know where they're coming from when they make a comment. So you can potentially misinterpret something. You could ask for clarification, but if you started to use something else that's doing that interpretation for you, to summarise it, that could definitely go in the wrong direction. I think that's one of the biggest problems with these sorts of engines. There's this concept of hallucinations, for the inaccuracies you're going to get out of these generative engines, where it sounds so confident, but actually, below the surface, it's not there. And that could be a problem. Say you write an email, it gets interpreted, the recipient gets the summary, and the summary is incorrect relative to the original content. How are you going to know, unless you check the original document? Which means you've doubled your time instead of halving it on that item. So I guess it's horses for courses; don't use it everywhere. But I still find it hugely valuable. I've got 10 or 20 things in a day that I've put on the list that get populated for me through the GPT engine, so when I get it, it's saved the thinking time, per se, because I can now just review it and put it in the right format. I mean, I built that OKR generator, if you saw it, because I always do my OKRs, always, always, always, and I've got a whole prompt that does everything for me.
And I thought, well, that's probably something that people will use, because one of the hardest things to do is to write an objective and the key results when you're starting a new project. Once you've done a few, then it's easy; you know where you're going, et cetera. But just to get going? I use it all the time. I've shared the link, and I'll send it to you if you haven't got it. But I think that's huge, just to have that thing generated for you. It gives you that freedom back in your day. And yes, you could take on more work if you wanted to, but then also think back to the situation you're in. I mean, if you're doing more work in less time... I'm at the beach at the moment, it's a really windy day, but I can take the kids to the beach this afternoon, because I finished my work, because I got through it three, four, ten times faster.
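(Editor's aside: the OKR generator Ryan describes is, at its core, a reusable prompt. A minimal sketch of that pattern in Python; the prompt wording and function name are illustrative guesses, not the actual tool from the episode, and the model call itself is left out.)

```python
def build_okr_prompt(project: str, quarter: str, n_key_results: int = 3) -> str:
    """Assemble a reusable prompt asking a model to draft one Objective and
    measurable Key Results for a new project. Hypothetical example wording."""
    return (
        f"You are an experienced programme manager. Draft one Objective and "
        f"{n_key_results} measurable Key Results for the project below, to be "
        f"delivered by the end of {quarter}. Make every Key Result quantifiable.\n\n"
        f"Project: {project}"
    )

# The returned string would be sent to a chat model; here we just build it.
prompt = build_okr_prompt("Migrate reporting to the new data platform", "Q3 2023")
print(prompt)
```

Keeping the prompt in a function like this is what makes it reusable across projects, which is the time saving being described.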

Heather Bicknell 13:48
Yeah. I mean, it's definitely something we're all going to have to figure out: how do we make it work, how do we take advantage of it in the best way? Assuming that the beta Microsoft is in with those few select companies now for Copilot goes well, assuming organisations are willing to adopt it, and the security and, I guess, ethics concerns don't prohibit that use in business. But I feel like the potential for unlocking so much productivity is there, so I feel like it will get adopted, and we'll have to see. I mean, it's basically like what happened when we got personal computers, right? It's the same sort of leap forward, I think.

Ryan Purvis 14:34
Yeah, in some respects. And this goes back to my point about skilled and unskilled people. With the advent of the personal computer, initially there was a financial constraint and a size constraint, because the first computers were so big and so expensive, and as they miniaturised and modernised, they became cheaper and more accessible. This is already a fairly cheap thing for the end user, if you think about it, because ChatGPT is free; okay, it's not always available if you're on the free tier, but you can pay 20 bucks a month, in dollars I think, or pounds, and then you have premium access and it's available all the time. You can go through the APIs, which you pay a very small amount for. So the accessibility of that capability is easy. I mean, it's four cups of coffee a month, or maybe three in the UK at the moment. But if you had to go build your own language model, you'd never have the money as an individual; you'd have to be in a big corporate, and even then it's a huge investment. The thing that I'm worried about is OpenAI as an organisation. We've only ever talked about the Microsoft investment, but their ethos and what they're about, I don't think anyone's actually dug into it. I know I haven't. But I know when Elon Musk was involved, he was all for openness, transparency, and all that kind of stuff, and he separated away from them at some point because that wasn't the case. And that is the challenge: if you've only got a few models doing this, you've got Bard, you've got ChatGPT or whatever it is, there's going to be a need for regulation, especially on privacy and all that kind of stuff. And I think that's the only part that's not clear. You mentioned the ethical stuff; I think that's the other thing.
And again, say you're asking questions around something that might be illegal, and you're asking to see if it is illegal. That could potentially be used against you. I'll use a really bad example of this, but it's quite a serious thing: there's a sort of thought police going on there. There was a YouTube clip I was watching of a lady who was arrested in the UK. She was 150 metres from an abortion clinic, and the police arrested her because she was in the vicinity, under some law that you can't be within that range of the clinic. Basically, she was arrested for thinking about having an abortion. I can't remember exactly, but she had to go to court, and she won the case, et cetera. It was quite a serious thing. Now, that was something happening inside her head. If you're typing something into a language model, asking, say, is this illegal to do, and someone picks up on that, and it's a serious offence, let's say some terrorist attack, whatever it is, but you're just asking the question because you're curious, not because you're actually planning it, versus someone who actually has malicious intent: how would that be handled? Would you be flagged to your local authorities because you've asked that question, or would they just let it go through, because it's just a question? I don't know what the answer is, but it was something I was thinking about.

Heather Bicknell 17:56
Yeah, there's a lot of potential for these things to go in scary directions, I guess. I mean, it's sort of an extension of the internet; obviously, there's a tonne of horrible content out there about how to do terrible things. But if the AI can help you cut to the chase faster, or generate new potential ways for things to go south... yeah.

Ryan Purvis 18:23
Yeah, look, I think it comes down to personal skills and knowledge, expertise, understanding how all this stuff works. If you don't know what you're doing, if you don't know how this stuff works, and then you're using it, you're at far more risk than someone who has a bit more understanding of how it works. I'll give you another silly example. When I was at one of the banks, we had to block Google Translate, and I thought, well, that's a bit of a strange thing to block, because it's quite a useful tool that I use all the time. And someone said, yeah, but you're missing the point. If someone is using Google Translate to translate a contract or a clause for an agreement, and they've just taken the English version and translated it into Arabic or Mandarin or one of those languages people don't usually know a lot about, they won't know if the translated version is actually legally the same thing. You need to go through an approved translator, who's probably not only a lawyer but also a native speaker, as opposed to someone who's picked up the language through learning. I hadn't really thought about that kind of stuff, but it's kind of the same thing here. You're going to have these channels where you can use these tools with a level of comfort, sort of a green zone, maybe an amber zone, but then for the amber and red zones, you'll have to use an approved service that's gone through some rigour and some regulation, whatever it is.

Heather Bicknell 19:51
Yeah, I mean, I wonder if we'll get to the place where we have something like a Copilot for financial services. I still think that, whether it's professions like legal or certain industries that are more regulated and have greater risk potential, they're not going to be early adopters.

Ryan Purvis 20:18
Yeah, without a doubt. I can tell you that's happening already; they'll be built, there's no doubt about it. If you just look at cybersecurity and information security, there are two frameworks you need to go over: ISO 27001 and the NIST one. Those are all based on controls, and risk assessments, and all that kind of stuff. If you just apply those concepts to those areas, you've already got something that can be leveraged as an assistant: any content that comes through, any question, goes through that same model. Because the foundation is already there, you don't have to reinvent it. So I think it's imminent.

Heather Bicknell 20:56
We'll keep watching it. It's wild to watch this develop, but it's certainly interesting. We will try to mix it up a bit as well, but it just seems like the most relevant thing happening right now; it's hard not to talk about it.

Ryan Purvis 21:11
Yeah. Cool, Heather. Well, I'll catch you later. Have a good day.

Heather Bicknell 21:14
You too. Okay. Bye. Bye.

Ryan Purvis 21:21
Thank you for listening to today's episode. And a big thank you to our producer and editor, Heather, for your hard work on this episode. Please subscribe to the series and rate us on iTunes or the Google Play Store. Follow us on Twitter at @thedwwpodcast. The show notes and transcripts will be available on our website, https://www.digitalworkspace.works/. Please also visit our website and subscribe to our newsletter. And lastly, if you found this episode useful, please share it with your friends or colleagues.