GIMH
Norwood's on Fire
Exactly. Wouldn’t have said anything if it was a run-of-the-mill apostrophe error.

Yeah, sorry for having a laugh at that. **** knows I get it wrong often enough. It was just funny in the context of that sentence.
I don't have much faith in his privacy protection chops right now tbh.

Bloke is probably too clever to have an Alexa tbh. They are a privacy and data protection nightmare, which he'll know all about, as he presumably had a hand in designing them.
Is the concern that it's always listening and potentially recording/uploading everything you say? I ask because I've seen this mentioned before and it makes no sense to me. If you watch your network traffic, surely you'd see if it's constantly uploading data or only when you say the trigger word. It's no different to any smartphone that gets triggered by 'ok google' or the like.

Seriously **** those things. If I ever stay in a hotel that has one, I am drowning the bastard in the bath.
https://medium.com/swlh/alexa-play-...-time-amazon-is-listening-to-you-a556df19613f

Is the concern that it's always listening and potentially recording/uploading everything you say? I ask because I've seen this mentioned before and it makes no sense to me. If you watch your network traffic, surely you'd see if it's constantly uploading data or only when you say the trigger word. It's no different to any smartphone that gets triggered by 'ok google' or the like.
Holy crap. That German example is horrifying. Hilariously, the Alexa privacy page that they link in their official statement, as well as their FAQ page, leads to a 404 error.
Yeah the findings - that Alexa stores its recordings, and its processes are so shoddy that it could send them to someone else by mistake - are pretty bad. The article gets ridiculous though, and presents that as Alexa listening to you 24/7, including when not activated with the wake word, when it's pretty obvious that these are the recordings of the person asking Alexa for stuff, and thus were appropriately recorded (if not appropriately stored).

Haha, tbh I didn't get as far as its actual conclusions. Its findings, regardless of whatever conclusions it actually draws, are the illuminating thing for mine.
I mean, it is monitoring for the activation phrase. When it hears it, it will start "listening" properly - with visual indicators - and it will send a recording of what you say to Google/Amazon's central servers for analysis. Those servers will then come back with their algorithm-based analysis of whatever you were trying to say and recommend an answer of some sort.

Yeah, I'd like to do more research on this myself, because my home could really benefit from an Alexa or Google Home. Like I say, if it is constantly uploading my voice data, it will be going through my WiFi, which should show up if I monitor my WiFi usage. This is why I wanted to see their privacy page explaining how Alexa works, since their response to the article made it seem like you could monitor what Alexa stores and delete stored data on yourself.
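For what it's worth, the traffic-monitoring idea above can be sketched in a few lines. This is a minimal, hypothetical example: it assumes you've already exported per-packet upload records from your router or a packet capture (the record format, hostnames, and the 90% threshold are all made up for illustration). A device streaming audio constantly would show traffic in nearly every time bucket; one that only uploads after the wake word would show occasional bursts.

```python
from collections import defaultdict

# Hypothetical input: a list of (timestamp_seconds, destination_host,
# bytes_uploaded) records, e.g. exported from a router or tcpdump summary.

def upload_profile(records, bucket_seconds=60):
    """Tally uploaded bytes per destination into fixed time buckets."""
    buckets = defaultdict(lambda: defaultdict(int))
    for ts, dest, nbytes in records:
        buckets[dest][int(ts // bucket_seconds)] += nbytes
    return {dest: dict(b) for dest, b in buckets.items()}

def looks_constant(profile_for_dest, total_buckets, active_fraction=0.9):
    """Flag a destination as 'uploading constantly' if it has traffic in
    almost every bucket of the observation window. The 0.9 cutoff is an
    arbitrary example, not a calibrated figure."""
    return len(profile_for_dest) / total_buckets >= active_fraction

# Example: traffic in every minute of a 20-minute window reads as
# constant; two isolated bursts in the same window do not.
constant = [(t * 60, "cloud.example", 5000) for t in range(20)]
bursty = [(0, "echo.example", 8000), (600, "echo.example", 9000)]
```

Obviously a real check would also need to separate keep-alive chatter from actual audio-sized uploads, but as the poster says, a sustained upload stream would be hard to hide from this kind of accounting.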
Unless you're shagging someone named Alexa.

Like, unless you're into some particularly weird ****, there's nothing in those findings to suggest that it'll record you having ***, for instance.
I assume, an exceptionally powerful deep learning neural network running on hardware that puts even most academic supercomputers to shame. Google and Amazon have poured gigantic resources into the field, and one assumes they're doing things which are more profitable and sophisticated long-term than wiping the floor with chess engines.

Yeah, but in addition to the possible creepiness, and the possibility of your data being handled inappropriately, there are concerns to be had about the implications of what happens when Amazon does exactly what it says it will do with those data.
Who knows what algorithmic analyses those data are subject to, and what the conclusions and consequences of such analyses might be.
Indeed, and we've already seen early signs of this for e.g. facial recognition algorithms which break hilariously/disturbingly when confronted with a black person, because no one had ever bothered to train the neural network on anyone other than white people.

That work will be done after the fact, not before. We shall (like many technological innovations) look at the effect on society once it is part of society.
Well, let's be honest, this would be a different discussion if it had only been trained on black people...

Indeed, and we've already seen early signs of this for e.g. facial recognition algorithms which break hilariously/disturbingly when confronted with a black person, because no one had ever bothered to train the neural network on anyone other than white people.
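The training-data bias point above is easy to demonstrate with a toy model. This is a deliberately simplified sketch (synthetic numbers, a nearest-centroid "recognizer" standing in for a real neural network, and hypothetical names): a classifier trained only on samples from one population is forced to shoehorn anyone outside that population into one of its known categories, confidently and wrongly.

```python
import math

# Toy "face recognizer": each person is a centroid in feature space,
# learned only from the training samples we bothered to collect.

def centroid(samples):
    """Average a list of equal-length feature tuples."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(len(samples[0])))

def classify(x, centroids):
    """Assign x to the nearest trained identity. Note there is no
    'unknown' option: anything unlike the training data still gets
    forced into one of the learned categories."""
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

# Training set drawn entirely from one group (feature values are made up):
centroids = {
    "alice": centroid([(1.0, 1.0), (1.2, 0.9)]),
    "bob": centroid([(2.0, 2.0), (2.1, 1.9)]),
}
```

A sample from the population the model was trained on lands near the right centroid; a sample far outside that population, like `(10.0, 10.0)`, still gets confidently labelled as whichever trained identity happens to be nearest. Real systems fail in messier ways, but the underlying cause - the training distribution not covering the people the system meets - is the same.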
Yes indeed.

I assume, an exceptionally powerful deep learning neural network running on hardware that puts even most academic supercomputers to shame. Google and Amazon have poured gigantic resources into the field, and one assumes they're doing things which are more profitable and sophisticated long-term than wiping the floor with chess engines.
But yeah, the philosophical implications of the tuning of these neural networks - how we should interpret what they spit out, and the extent to which they're "organic", spontaneous and insensitive to their initial conditions - are a badly under-researched topic in the field. It probably needs people with a humanities background but computer science training, and they're pretty thin on the ground. It feels like we haven't even worked out the right questions to ask yet, let alone made any progress on good answers.
Most of the notable incidents of this sort to date have involved police forces making predictive determinations about people based on dodgy AI, iirc. But if you look for "automated decision making" on Google, you'll find plenty of stuff explaining why it's dodgy af.

Are there instances where data sourced from such means have been used to a significant degree? I imagine a lot of these new AML/KYC software companies will be tempted. Stuff like Reuters' World-Check has already been crawling for public information for ages, so it can't be too much of a leap from there.