• Welcome to the Cricket Web forums, one of the biggest forums in the world dedicated to cricket.


Jeff Bezos

GIMH

Norwood's on Fire
Yeah sorry for having a laugh at that vcs. **** knows I get it wrong often enough. Was just funny in the context of that sentence.
Exactly. Wouldn't have said anything if it was a run-of-the-mill apostrophe error.
 

Uppercut

Well-known member
Bloke is probably too clever to have an Alexa tbh. They are a privacy and data protection nightmare which, having presumably had a hand in designing them, he will know all about.
I don't have much faith in his privacy protection chops right now tbh.
 

nightprowler10

Global Moderator
Seriously **** those things. If I ever stay in a hotel that has one I am drowning the bastard in the bath.
Is the concern that it's always listening and potentially recording/uploading everything you say? I ask because I've seen this mentioned before and it makes no sense to me. If you watch your network traffic surely you'd see if it's constantly uploading data or only when you say the trigger word. It's no different to any smart phone that gets triggered by 'ok google' or the like.
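The "watch your network traffic" idea above can actually be sketched in a few lines. This is a hypothetical illustration, not real capture code: it assumes you already have per-packet records (timestamp, device, bytes) exported from your router, and simply totals each device's uploads per minute, so a device that only uploads after a trigger word shows isolated bursts while one streaming constantly shows traffic in every minute.

```python
# Sketch only: aggregate per-device upload bytes into one-minute buckets.
# "echo" and "cam" and the packet records below are made-up example data.
from collections import defaultdict

def upload_profile(packets):
    """packets: iterable of (timestamp_sec, src_device, bytes_sent).

    Returns {(device, minute_index): total_bytes} so you can see in
    which minutes each device actually sent data upstream.
    """
    per_minute = defaultdict(int)
    for ts, src, nbytes in packets:
        per_minute[(src, int(ts // 60))] += nbytes
    return dict(per_minute)

# Hypothetical capture: "echo" bursts once (as if triggered), while
# "cam" uploads steadily in every minute of the capture window.
capture = [
    (5, "echo", 48_000), (12, "echo", 52_000),   # one burst in minute 0
    (30, "cam", 20_000), (90, "cam", 20_000),
    (150, "cam", 20_000), (210, "cam", 20_000),  # steady, minutes 0-3
]
profile = upload_profile(capture)
echo_minutes = sorted(m for (dev, m) in profile if dev == "echo")
cam_minutes = sorted(m for (dev, m) in profile if dev == "cam")
print(echo_minutes)  # burst in a single minute
print(cam_minutes)   # traffic in every minute
```

On real hardware you'd get the packet records from your router's traffic stats or a capture tool rather than a hard-coded list; the bucketing logic is the same.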
 

sledger

Spanish_Vicente
Is the concern that it's always listening and potentially recording/uploading everything you say? I ask because I've seen this mentioned before and it makes no sense to me. If you watch your network traffic surely you'd see if it's constantly uploading data or only when you say the trigger word. It's no different to any smart phone that gets triggered by 'ok google' or the like.
https://medium.com/swlh/alexa-play-...-time-amazon-is-listening-to-you-a556df19613f
 

Spark

Global Moderator
It's pretty horrifying but that article goes way over its skis IMO, and draws conclusions that are not at all supported by its cited evidence.
 

nightprowler10

Global Moderator
Yeah, I'd like to do more research on this myself because my home could really benefit from Alexa or Google home. Like I say if it is constantly uploading my voice data it is going to be through my WiFi, which should show up if I monitor my WiFi usage. This is why I wanted to see their privacy page explaining how Alexa works since their response to the article made it seem like you could monitor what Alexa stores and delete stored data on yourself.
 

sledger

Spanish_Vicente
Haha tbh I didn't get as far as its actual conclusions. Its findings, regardless of whatever conclusions it actually draws, are the illuminating thing for mine.
 

Spark

Global Moderator
Haha tbh I didn't get as far as its actual conclusions. Its findings, regardless of whatever conclusions it actually draws, are the illuminating thing for mine.
Yeah the findings - that Alexa stores its recordings, and its processes are so shoddy that it could send them to someone else by mistake - are pretty bad. The article gets ridiculous though, and presents that as Alexa listening to you 24/7 including when not activated with the wake word, when it's pretty obvious that these are the recordings of the person asking Alexa for stuff, and thus were appropriately recorded (if not appropriately stored).

Like, unless you're into some particularly weird ****, there's nothing in those findings to suggest that it'll record you having ***, for instance

Yeah, I'd like to do more research on this myself because my home could really benefit from Alexa or Google home. Like I say if it is constantly uploading my voice data it is going to be through my WiFi, which should show up if I monitor my WiFi usage. This is why I wanted to see their privacy page explaining how Alexa works since their response to the article made it seem like you could monitor what Alexa stores and delete stored data on yourself.
I mean, it is monitoring for the activation phrase. When it hears it, it will start "listening" properly - with visual indicators - and it will send a recording of what you say to Google/Amazon's central servers for analysis. Those servers will then come back with their algorithm-based analysis of whatever you were trying to say and recommend an answer of some sort.

The listening for the wake word is entirely local. The working out what you said afterwards is all cloud-based. You can test this with Google Assistant on a relatively modern Android phone: when your internet doesn't work, it'll activate by voice, but after that it won't be able to understand anything you say.

There's nothing particularly untoward about the fact that it's making recordings per se, that's the only way it can work because this stuff is much too sophisticated to be driven locally with the sort of responsiveness required. You could even argue that there isn't anything particularly shocking about those recordings being stored, for algorithm-training purposes since that really is how these things work, although that's much much more of a grey area. The lack of robust protections around said data is dubious as hell, though.
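The local-wake-word / cloud-processing split described above can be shown as a toy loop. This is an assumed sketch of the architecture, not Amazon's or Google's actual code: a trivial string match stands in for the on-device wake-word model, and a stub function stands in for the network round trip, with a list tracking exactly which audio ever "leaves" the device.

```python
# Toy model of the split: wake-word matching is purely local; only the
# frame captured *after* a match is handed to the (stubbed) cloud.
sent_to_cloud = []  # records everything that "leaves" the device

def local_wake_word_check(frame: str) -> bool:
    # Real devices run a small on-device model; a string match stands in.
    return "alexa" in frame.lower()

def cloud_transcribe(audio: str) -> str:
    # Stand-in for the round trip to the vendor's servers.
    sent_to_cloud.append(audio)
    return f"understood: {audio}"

def device_loop(frames):
    responses = []
    listening = False
    for frame in frames:
        if not listening:
            # Local check only: nothing is uploaded at this stage.
            if local_wake_word_check(frame):
                listening = True
        else:
            # Only now does audio go to the cloud for analysis.
            responses.append(cloud_transcribe(frame))
            listening = False
    return responses

out = device_loop(["idle chatter", "alexa", "what's the weather", "more chatter"])
print(out)             # → ["understood: what's the weather"]
print(sent_to_cloud)   # only the post-wake frame left the device
```

The point of the sketch is that "idle chatter" never reaches `cloud_transcribe` at all, which matches the behaviour of Assistant failing after the wake word when the internet is down.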
 

nightprowler10

Global Moderator
What you say is exactly how I understood it as well, and it's really the only way it makes sense for Alexa to work. As far as privacy goes, I imagine being who I am and where I am, I'm being monitored by others anyway via some method or another. I like to think the NSA has a person assigned to me and that person's name is Alan. So once in a while I'll google 'hi Alan' or 'merry xmas Alan'. I wonder if he holds similar views on New Year's greetings as GIMH. If so, Happy New Year Alan.
 

sledger

Spanish_Vicente
Yeah but in addition to the possible creepiness, and the possibility of your data being handled inappropriately, there are concerns to be had about the implications of what happens when Amazon does exactly what it says it will do with those data.

Who knows what algorithmic analyses those data are subject to, and what the conclusions and consequences of such analyses might be.
 

Spark

Global Moderator
Yeah but in addition to the possible creepiness, and the possibility of your data being handled inappropriately, there are concerns to be had about the implications of what happens when Amazon does exactly what it says it will do with those data.

Who knows what algorithmic analyses those data are subject to, and what the conclusions and consequences of such analyses might be.
I assume, an exceptionally powerful deep learning neural network running on hardware that puts even most academic supercomputers to shame. Google and Amazon have poured gigantic resources into the field, and one assumes they're doing things which are more profitable and sophisticated long-term than wiping the floor with chess engines.

But yeah, the philosophical implications of the tuning of these neural networks, how we should interpret what they spit out, and the extent to which they're "organic" and spontaneous and insensitive to their initial conditions is a badly under-researched topic in the field. It probably needs people with a humanities background but computer science training, and they're pretty thin on the ground. It feels like we haven't even worked out the right questions to ask yet, let alone made any progress on good answers.
 

StephenZA

Well-known member
That work will be done after the fact, not before. We shall (like many technological innovations) look at the effect on society once it is part of society.
 

Spark

Global Moderator
That work will be done after the fact, not before. We shall (like many technological innovations) look at the effect on society once it is part of society.
Indeed, and we've already seen early signs of this for e.g. facial recognition algorithms which break hilariously/disturbingly when confronted with a black person, because no one had ever bothered to train the neural network on anyone other than white people.
 

StephenZA

Well-known member
Indeed, and we've already seen early signs of this for e.g. facial recognition algorithms which break hilariously/disturbingly when confronted with a black person, because no one had ever bothered to train the neural network on anyone other than white people.
Well let's be honest, this would be a different discussion if it had only been trained on black people...
 

sledger

Spanish_Vicente
I assume, an exceptionally powerful deep learning neural network running on hardware that puts even most academic supercomputers to shame. Google and Amazon have poured gigantic resources into the field, and one assumes they're doing things which are more profitable and sophisticated long-term than wiping the floor with chess engines.

But yeah, the philosophical implications of the tuning of these neural networks, how we should interpret what they spit out, and the extent to which they're "organic" and spontaneous and insensitive to their initial conditions is a badly under-researched topic in the field. It probably needs people with a humanities background but computer science training, and they're pretty thin on the ground. It feels like we haven't even worked out the right questions to ask yet, let alone made any progress on good answers.
Yes indeed.

Quite a lot of work has already been done to show how data acquired through Amazon's website, social networking sites etc. have been used as a means of determining people's creditworthiness and similar (i.e. potentially life-altering decisions), though, so it is vital that whatever processes personal data are subject to are transparent, and that the decisions/determinations made pursuant to them can be challenged.
 

Daemon

Well-known member
Are there instances where data sourced from such means have been used to a significant degree? I imagine a lot of these new AML/KYC software companies will be tempted. Stuff like Reuters' World-Check has already been crawling public information for ages, so it can't be too much of a leap from there.
 

sledger

Spanish_Vicente
Are there instances where data sourced from such means have been used to a significant degree? I imagine a lot of these new AML/KYC software companies will be tempted. Stuff like Reuters' World-Check has already been crawling public information for ages, so it can't be too much of a leap from there.
Most of the notable incidents of this sort to date have involved police forces making predictive determinations about people based on dodgy AI, iirc. But if you search for "automated decision making" on Google you'll find plenty of stuff explaining why it's dodgy af.
 