Snapchat’s new AI gets one star in a “who would have seen that coming” backlash. Microsoft says we should buy fewer PCs. And where do AI companies get the data to train their new large language models?

These stories and more on Hashtag Trending, for Tuesday, April 25th. I’m your host, Jim Love, CIO of IT World Canada and TechNewsDay in the US.

In a world gone crazy with AI, with every company scrambling to add some AI component to its services, Snapchat’s new My AI feature is a warning that just because a product has AI, it won’t necessarily be a raging success.

Snapchat debuted its My AI feature using OpenAI’s GPT technology and got a lot of positive press and attention. ChatGPT is the fastest-growing application in history, reaching 100 million monthly active users in only two months. To put that in perspective, TikTok took nine months to hit that mark even as a viral sensation, and Instagram took two and a half years.

And app stores and browser extension forums saw downloads at such a rate that our own security podcast, Cybersecurity Today, was warning people to be careful in case, in the mad rush, they downloaded malware.

With that level of popularity, how could this fail? Well, it did. Following the rollout of My AI, Snapchat’s average review in the US App Store fell to 1.67, with three quarters – 75 per cent – of reviews being one-star. That’s according to Sensor Tower, a firm that tracks these ratings. In Q1 of 2023, Snapchat’s average review was 3.05, with only 35 per cent being one-star. That’s a first warning – this is a critical audience to start with – but the difference between 35 per cent and 75 per cent one-star ratings is a chasm.

Apptopia, another firm that tracks audience sentiment, said that Snapchat’s impact score dropped to minus 9.2 on a scale that runs from minus 10 to plus 10. It notes that Snapchat received three times its usual number of one-star ratings on April 20th, the day after the My AI release was announced.

That’s pretty much the equivalent of: you could tie a pork chop around its neck and the dog still wouldn’t play with it.

Even the five-star ratings, which also spiked, didn’t include favourable comments about the AI. Many called it crap or said it should be removed.

So what went wrong? For one thing, it was forced onto users, occupying their screen real estate without their permission. The only way to get rid of it was to pay more for a Snapchat+ subscription. 

Some found the AI feature to be, as one poster put it, “creepy.” Some worried about their data and privacy. Others worried about their location being tracked. And there were reports in the national press that raised eyebrows. The Washington Post reported that when the bot was told the user was 13, it answered a question about how to set the mood for having sex for the first time.

How did Snapchat get it so wrong? Perhaps the key to this lies in the company’s response to the criticism, as reported in TechCrunch.

The company said that it’s constantly iterating on Snapchat’s features based on community feedback – but it didn’t commit to discontinuing the feature, or even suggest that it might, despite the massive negative response. In fact, Snapchat’s spokesperson reportedly said that if users don’t like the feature, they don’t have to use it.


That says it all.

Sources include: TechCrunch

==

“Can modern work applications and endpoints abate end user computing greenhouse gas emissions and drive climate action?” The answer is yes, if companies take appropriate action. That includes “identifying and procuring devices with a low carbon footprint; keeping devices for longer periods of time to slow demand; using devices in the most energy-efficient manner during the use phase.”

That amounts to buying fewer PCs and using the ones you have responsibly.

It’s a sentiment echoed by many organizations trying to cope with the fact that global IT is on track to account for up to 8 per cent of energy usage by the end of this decade, and it’s one of the fastest-growing users of electricity. And one of the biggest factors in IT’s carbon footprint isn’t energy usage at all – it’s the manufacturing of computers and phones.

Some studies have shown that up to 80 per cent of the carbon footprint of a device is incurred before it is plugged in for the first time.

So strategies for purchasing fewer PCs – extending the life of current devices to as much as eight years, abolishing desktops in favour of BYOD, especially laptops, which use far less energy than desktops with large monitors – are exactly what you’d expect to hear in a sustainability strategy.

But would you expect to hear this from Microsoft? Well, you will – at least if you read its latest document, a research paper called PX3, authored by Justin Sutton-Parker and distributed under the Microsoft logo.

The report looks at all aspects of carbon emission reduction and uses a number of models to examine different acquisition and usage policies.

In some models, extending the lifespan of devices from three or five years to as many as eight has a major impact. Power-utilization strategies – smaller laptops, and moving more processing into highly efficient cloud data centers – have a smaller impact, but as the report points out, with over four billion computer users in the world, even a seemingly small saving per device would add up enormously.
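To see why lifespan dominates, here’s a rough back-of-the-envelope sketch for the text edition of this podcast. The lifecycle figure and the 80/20 embodied-versus-use split are illustrative assumptions for the arithmetic, not numbers taken from the PX3 report:

```python
# Back-of-the-envelope annualized footprint. All figures are illustrative
# assumptions, not numbers from the PX3 report.
LIFECYCLE_KG_CO2E = 300.0  # assumed total lifecycle footprint of one laptop
EMBODIED_SHARE = 0.80      # share incurred before first power-on (the "80 per cent" above)
USE_KG_PER_YEAR = LIFECYCLE_KG_CO2E * (1 - EMBODIED_SHARE) / 4  # use phase, over a 4-year baseline

def annual_footprint(lifespan_years: float) -> float:
    """Embodied carbon amortized over the device's life, plus yearly use-phase emissions."""
    return LIFECYCLE_KG_CO2E * EMBODIED_SHARE / lifespan_years + USE_KG_PER_YEAR

for years in (3, 5, 8):
    print(f"{years}-year lifespan: ~{annual_footprint(years):.0f} kg CO2e per device per year")
```

Holding use-phase emissions steady, stretching the lifespan from three years to eight cuts the annual footprint roughly in half – which is exactly the lever the report’s lifespan models pull.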

What’s in it for Microsoft? Well, if they switch users to cloud data centers and continue with their cloud licensing models, they will continue to dominate the corporate market where they are strongest. 

Or maybe it’s just the right thing to do – avoiding what the report calls “the pathway of apathy,” represented by the “on premises policy,” where business as usual continues.

Just a reminder that corporate listeners in Canada who want to find out more about how their company can reduce carbon emissions from IT can go to the Digital Governance Council and look for the sustainability pledge.

If there’s an equivalent body in the US that listeners want to recommend, contact me and I’ll put the link in the text version of this podcast.

Sources include:  Windows Frontline research and The Register

It’s called a reverse ATM. Instead of taking out money, you put it in the ATM and get a value card for the store that you are shopping at. 

It’s a way for all manner of businesses to go cashless without falling afoul of laws being passed to ban them from doing so – laws aimed, at least partly, at protecting those who don’t have bank accounts or credit cards. You might think that isn’t many people, but an estimated 4.5 per cent of people in the US fall into this category. That’s the lowest rate in more than a decade, but it’s still a large number of people. In Canada, cash is still used for about 15 per cent of transactions.

But handling cash is a hassle – it increases the risk of theft, requires counting, balancing and making deposits, and brings worries about counterfeit currency.

Most of the time there is no cost to use these ATMs, and the merchant pays an interchange fee comparable to Visa’s or Mastercard’s. The cards can be single-use, or they can be Visa or Mastercard prepaid cards usable in multiple locations.

In the near future, people may not need a physical card at all – the most obvious replacement being a virtual card on a smartphone.

The machines do have a cost to the merchant – starting at about $6,000, plus additional fees for servicing and cash removal – though there is potential to recoup part of this through advertising.

But the critical point is that, for someone you’ve just told “their money’s no good here” – and you’re not buying a round – the experience of this transaction had better be stellar.

Sources include: Axios, FDIC and Bank of Canada

The Washington Post reveals the “secret list of websites” used to train AI models like ChatGPT.

Chatbots are not intelligent, per se. They don’t yet understand what they say, although having seen a “raw” demonstration of ChatGPT-4, we might not be able to say that in the future.

But right now, those chatbots are mimicking our speech, drawing on the results of an incredible ingestion of data. But where do they get that data?

The tech companies that make these bots tend to be pretty secretive about where that data comes from. That could be because they feel it’s a competitive advantage. It might also be because, at some point, someone is going to raise the ugly question of who really owns the data these AI engines are trained on.

But the Washington Post broke through part of this secrecy, working with the Allen Institute for AI and a web analytics company called Similarweb.

They used Google’s C4 dataset, a “massive snapshot of the contents of 15 million websites” that has been used to instruct AI models from Google and Facebook. OpenAI does not disclose what datasets it used to train the models behind ChatGPT.

Then they went through and ranked the sites based on the number of “tokens” that appeared from each in the dataset. A token is a sequence of characters that represents a “unit of meaning” – the fundamental building block of AI’s prediction engines.
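For readers of the text edition, here’s a minimal sketch of what a token looks like in practice, using OpenAI’s open-source tiktoken library. This is purely illustrative – the Post’s analysis counted tokens in Google’s C4 dataset, which uses its own tokenization:

```python
# pip install tiktoken
import tiktoken

# Load a tokenizer used by recent OpenAI models (an illustrative choice).
enc = tiktoken.get_encoding("cl100k_base")

text = "Chatbots are not intelligent, per se."
token_ids = enc.encode(text)

print(f"{len(token_ids)} tokens")
# Decode each ID individually to see the sub-word pieces the model actually consumes.
print([enc.decode([t]) for t in token_ids])
```

Run it on any sentence and you’ll see words split into smaller pieces – which is why a site’s token count, not its page count, measures how much it contributed to a training set.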

The three biggest sites were patents.google.com at number one, Wikipedia.org at number two, and scribd.com – a subscription-only digital library – at number three. Also high on the list, according to the Post, is b-ok.org, a large collection of pirated books that has since been seized by the U.S. Department of Justice. Starting to see why companies might not want you to focus on how they’ve trained their AI?

The Post also points to some sites that might raise questions about what these AIs are learning. Wowhead.com, a World of Warcraft player forum, is one of them – and what could possibly be bad about learning from that source?

There are other sources such as coloradovoters.info and flvoters.com – state voter registration databases. These are public, but given the Cambridge Analytica scandal and political tensions in the US, how voter information is used is worth questioning.

Overall, the Post reports that business and industrial sites made up the biggest category, at about 16 per cent, led by sites like fool.com, which provides investment advice.

Kickstarter.com and patreon.com, both of which fund creative projects, were also in the mix. That raises the issue of how much of a business’s or artist’s information is being shared without any compensation to the owner or creator.

And publishing sites, not surprisingly, form a large share of the involuntary contributors – half of the top ten sites were news outlets, including the New York Times, the Guardian, Forbes, HuffPost and the Washington Post. Our own itworldcanada.com has also contributed, as have other blogs. My own, all too often ignored, blog changethegame.ca also featured in the mix.

This may lead to the greatest issue of all – and the elephant in the room – who owns this training material?  The Post pointed out that the copyright symbol appears more than 200 million times in the data set.

Want to know if you are part of someone’s training set? The Post has created a tool where you can search by URL to find out what’s included. There’s a link to it in the text version of this podcast at itworldcanada.com

Sources include: Washington Post

And Twitter’s new verified status stumbles. 

The gang who can’t tweet straight strikes again. As social media consultant Matt Navarra told the BBC, the decision to remove legacy checkmarks was “a big mistake, possibly Elon’s biggest Twitter mistake so far.”

Twitter – rebuffed by many celebrities and offending large media outlets from NPR to the CBC – just can’t seem to get its new pay-for-verification scheme off the ground.

In addition, the BBC reported that a fake Disney account somehow got verified and was tweeting vile content before it was shut down. 

Few people seem to really understand what the new three layers of verification are.

There are reports that although Musk has blocked the leaders of the #BlockTheBlue campaign, he’s also given their accounts verified blue badges just to tick them off. Pun intended.

TechCrunch reported that “multiple top accounts (with more than 1 million followers) got their verification marks back. However, many of them, including writer Neil Gaiman, footballer Riyad Mahrez, musician Lil Nas X, actress Janel Parrish Long and British TV presenter Richard Osman said that they didn’t pay for the blue badge.”

Insider reported that Twitter is adding verified checkmarks to the accounts of dead celebrities, making them look like paying Twitter Blue subscribers. Kobe Bryant, Norm Macdonald, Anthony Bourdain, Chadwick Boseman and even Michael Jackson all appear on the platform as Twitter Blue subscribers.

Elvis’s Twitter account does not have a blue check, but then – is he really dead?

That’s the top tech news for today. Hashtag Trending goes to air five days a week with the daily top tech news stories, and we have a special weekend edition featuring an in-depth interview with an expert on some tech development that’s making the news.

Follow us on Apple, Google, Spotify or wherever you get your podcasts. Links to all the stories we’ve covered can be found in the text edition of this podcast at itworldcanada.com/podcasts

We love your comments. You can find me on LinkedIn, on Twitter, or on our Mastodon site technews.social as @therealjimlove. Or if that’s too much, just leave a comment under the text version at itworldcanada.com/podcasts

I’m your host, Jim Love, have a Terrific Tuesday.
