Posted in Shoddy Security

Major security holes found in Google’s Nest

Don’t use “smart home” technology. Just don’t.

After last week’s heated debate about whether Google Nest owners should be able to turn off their webcam’s recording LED, this week they have something more conventional to worry about – security flaws.

The list of vulnerabilities recently discovered by Cisco Talos researchers relates to one model, the Nest Cam IQ Indoor camera.

As $249 webcams go, this one has plenty of features, including a 4K resolution sensor, facial recognition, noise and echo cancellation, and Google Assistant integration to control other Nest products.

There are eight CVE-level vulnerabilities in total: five relating to the Weave protocol binary built into the camera (used during setup), and three in the OpenWeave interface (the open source version of Weave).

Some of these vulnerabilities allow the device to be hijacked outright.

Google claims it’s patching the affected hardware, but cautions that updates may take a while to roll out.

Meanwhile, lots of Nest users are still angry about Google’s decision to cripple the toggle for the Nest cam’s LED status light.

Posted in Shoddy Security

Flaw discovered in Google’s Bluetooth Titan security key, prompting a recall

Embarrassing, but not surprising, considering Google’s shoddy record on security.

Google today disclosed a security bug in its Bluetooth Titan Security Key that could allow an attacker in close physical proximity to circumvent the security the key is supposed to provide. The company says the bug is due to a “misconfiguration in the Titan Security Keys’ Bluetooth pairing protocols” and that even the faulty keys still protect against phishing attacks. Still, the company is providing a free replacement key to all existing users.

Google’s recent introduction of Titan was its latest entry into a product category it has no need to be in. Google is increasingly selling hardware of various kinds, from phones (like the Pixel) to so-called “smart home” gadgets (its Nest line of products) and even Google Clips, an always-on camera.

YubiKey maker Yubico already makes high-quality security keys with wireless functionality (using NFC rather than Bluetooth, because NFC is more secure). There was no need for Titan, especially given that it’s an inferior product.

And yet, since Google bigwigs have this ridiculous desire to compete in pretty much every product category, they went ahead and made the Monster of Mountain View a competitor to Yubico. Perhaps now they’ll reconsider that decision.

Posted in Shoddy Security

Google tries to strengthen security with over-the-air Android updates… but manufacturers can opt out

With its latest version of Android, Google is signaling it’s going to try to copy what Apple has done with respect to keeping control over mobile software updates pushed to end users. But its policy has a loophole big enough to throw a galaxy through.

Devices shipping with Android Q will receive over-the-air security patches without having to go through device manufacturers.

[…]

Devices updating to Android Q will not get over-the-air security updates, and some manufacturers can opt out altogether, according to The Verge, which first reported the news, rendering the feature effectively useless. The new feature will also not be backported to earlier versions of Android.

According to distribution data, close to half of all Android users are still on Android 5.0 Lollipop or earlier; it could take years for Android Q to reach a comparable usage share.

Nice try, Google, but you still haven’t solved your Android device fragmentation and abandonment problem.

Posted in Shoddy Security

Android ecosystem of pre-installed apps is a privacy and security mess

We’re shocked, shocked, shocked to… oh wait, actually, no, we’re not shocked at all by this:

An academic study that analyzed 82,501 apps that were pre-installed on 1,742 Android smartphones sold by 214 vendors concluded that users are woefully unaware of the huge security and privacy-related threats that come from pre-installed applications.

Researchers found that many of these pre-installed apps have access to very intrusive permissions out of the box, collect and send data about users to advertisers, and have security flaws that often remain unpatched.

On top of this, many pre-installed apps (also referred to as bloatware) can’t be removed, and also use third-party libraries that secretly collect user data from within benign-looking and innocently-named applications.

The study is by far one of the most complex endeavors of its kind, encompassing analysis of device firmware, app behavior, and the internet traffic the apps generated.

Android has been repeatedly shown to be a security nightmare. What’s particularly ironic and absurd is that many Android device manufacturers lock the bootloader to prevent rooting, which stops savvy users from getting rid of the bloatware and keeping their devices current.

And thanks to the demise of Windows Phone and BB10 (the latter of which heavily emphasized security), the only practical alternative is iOS. While iOS is superior to Android, it’s a shame that there’s no other game in town anymore. We appear to be stuck with a duopoly for the foreseeable future.

Posted in Shoddy Security

New Android adware found in 200 apps on Google Play

These issues just keep recurring… and recurring… and recurring…

Security researchers have found a new kind of mobile adware hidden in hundreds of Android apps that have been downloaded more than 150 million times from Google Play.

The malware, masquerading as an ad-serving platform and dubbed SimBad by researchers at security firm Check Point, infected more than 200 apps which, likely unbeknownst to the app developers, would open a backdoor to install additional malware as a way to outsmart Google’s app store scanning. Once installed, the downloaded malware also removes the app icon and persists in the background, loading each time the device boots up.

A list of the bad apps is available here.

Google has been pulling down these bad apps, but unfortunately, they will remain on the devices of anyone who installed them unless the user takes action to get rid of them. That’s what is so distressing about all of this. Google has failed to create a system for effectively vetting and screening apps before they appear on Google Play. And it seems no matter how many times security researchers find problems, Google isn’t embarrassed enough to change its ways.

Posted in Shoddy Security

Use Chromecast, get hacked

Another Google offering that is NOT secure.

Hackers have hijacked thousands of exposed Chromecast streaming devices to warn users of the latest security flaw to affect the device. But other security researchers say that the bug — if left unfixed — could be used for more disruptive attacks.

The culprits, known as Hacker Giraffe and J3ws3r, have become the latest to figure out how to trick Google’s media streamer into playing any YouTube video they want — including videos that are custom-made. This time around, the hackers forced the affected Chromecasts to display a pop-up notice that’s viewable on the connected TV, warning the user that their misconfigured router is exposing their Chromecast and smart TV to hackers like themselves.

This is not the first Chromecast exploit, either.

Bishop Fox, a security consultancy firm, first found a hijack bug in 2014, not long after the Chromecast debuted. The researchers found that they could conduct a “deauth” attack that disconnects the Chromecast from the Wi-Fi network it was connected to, causing it to revert to its out-of-the-box state, waiting for a device to tell it where to connect and what to stream. That’s when it can be hijacked and forced to stream whatever the hijacker wants. All of this can be done in an instant — as they did — with a touch of a button on a custom-built handheld remote.
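The reason “deauth” attacks are so easy is that, until 802.11w introduced protected management frames, deauthentication frames carried no authentication at all. As an illustration only, here is a sketch of the frame layout in Python (hypothetical MAC addresses; this merely constructs the bytes and transmits nothing):

```python
import struct

def deauth_frame(dest, source, bssid, reason=7):
    """Lay out an 802.11 deauthentication frame (management frame, subtype 12).

    Note that no field anywhere in the frame is signed or keyed: pre-802.11w,
    any nearby radio that spoofs the BSSID can knock a client (such as a
    Chromecast) off its network."""
    mac = lambda s: bytes.fromhex(s.replace(":", ""))
    frame_control = b"\xc0\x00"       # type: management, subtype: deauthentication
    duration = b"\x00\x00"
    seq_ctrl = b"\x00\x00"
    body = struct.pack("<H", reason)  # reason 7: class 3 frame from nonassociated station
    return (frame_control + duration + mac(dest) + mac(source)
            + mac(bssid) + seq_ctrl + body)

# Hypothetical addresses, for illustration only.
frame = deauth_frame("ff:ff:ff:ff:ff:ff", "aa:bb:cc:dd:ee:ff", "aa:bb:cc:dd:ee:ff")
print(len(frame))  # → 26 (a complete, entirely unauthenticated frame)
```

Twenty-six spoofable bytes are all it takes to force the device back into setup mode, which is why the bug has persisted across years of research.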

Two years later, U.K. cybersecurity firm Pen Test Partners discovered that the Chromecast was still vulnerable to “deauth” attacks, making it easy to play content on a neighbor’s Chromecast in just a few minutes.

Google claims it’s trying to fix the deauth bug. But this is a four-year-old exploit; they have had years to fix it, and they have failed.

The moral of the story: don’t use Chromecast or other woefully insecure Google products.

Posted in Shoddy Security

Google Plus suffers another security breach

Oops.

Google has now admitted that Google Plus has suffered another security failure, allowing the personal information of 52 million users to be accessed by third-party apps and developers without permission.

So even if you had your profile information – such as your name, email address, occupation, and so on – set as “not public”, the information could be accessed by unauthorized parties.

According to Google, the flaw was introduced through a software update in November and was spotted less than a week later. The search giant says that it has seen no evidence that any app developers were aware of the flaw or misused it.

“With the discovery of this new bug, we have decided to expedite the shut-down of all Google+ APIs; this will occur within the next 90 days. In addition, we have also decided to accelerate the sunsetting of consumer Google+ from August 2019 to April 2019,” a contrite David Thacker wrote.

Posted in Shoddy Security

Google concealed a “software glitch” in Google+ that exposed data of half a million people

Irresponsibility is their policy:

Google exposed the private data of hundreds of thousands of users of the Google+ social network and then opted not to disclose the issue this past spring, in part because of fears that doing so would draw regulatory scrutiny and cause reputational damage, according to people briefed on the incident and documents reviewed by The Wall Street Journal.

As part of its response to the incident, the Alphabet Inc. unit plans to announce a sweeping set of data privacy measures that include permanently shutting down all consumer functionality of Google+, the people said. The move effectively puts the final nail in the coffin of a product that was launched in 2011 to challenge Facebook Inc. and is widely seen as one of Google’s biggest failures.

A software glitch in the social site gave outside developers potential access to private Google+ profile data between 2015 and March 2018, when internal investigators discovered and fixed the issue, according to the documents and people briefed on the incident. A memo reviewed by the Journal prepared by Google’s legal and policy staff and shared with senior executives warned that disclosing the incident would likely trigger “immediate regulatory interest” and invite comparisons to Facebook’s leak of user information to data firm Cambridge Analytica.

This revelation raises the question: what other dirty laundry is the Monster of Mountain View hiding?

Google executives have clearly relished watching Facebook take incoming fire in the press on a near constant basis this year. It’s no wonder they didn’t want to come clean about their own failings. But if they truly lived by their internal motto of “don’t be evil”, then they would have disclosed this glitch in the interest of transparency. How they expected to keep it a secret indefinitely is anyone’s guess.

It’s good that Google+ is shutting down. But the company must not be allowed to wash its hands of this incident and walk away. There should be consequences.

The European Union and the United States government should launch immediate investigations into this matter and find out what other secrets Google may be keeping from its users and stockholders.


Posted in Shoddy Security

Crooks infiltrate Google Play with malware in QR reading utilities

Google fails again… surprise, surprise:

SophosLabs just alerted us to a malware family that had infiltrated Google Play by presenting itself as a bunch of handy utilities.

Sophos detects this malware as Andr/HiddnAd-AJ, and the name gives you an inkling of what the rogue apps do: blast you with ads, but only after lying low for a while to lull you into a false sense of security.

We reported the offending apps to Google, and they’ve now been pulled from the Play Store, but not before some of them attracted more than 500,000 downloads.

The subterfuge used by the developers to keep Google’s “Play Protect” app-vetting process sweet seems surprisingly simple.

Prefer Android to iOS? Use F-Droid to get apps, NOT Google Play. F-Droid builds every app it distributes from publicly auditable source code, making it a far safer source.

Posted in Shoddy Security, War on Privacy

Google admits tracking users’ location even when location services are disabled

Big Brother is watching you. Even if you’ve told Big Brother Google you don’t want to be tracked.

Many people realize that smartphones track their locations. But what if you actively turn off location services, haven’t used any apps, and haven’t even inserted a carrier SIM card?

Even if you take all of those precautions, phones running Android software gather data about your location and send it back to Google when they’re connected to the internet, a Quartz investigation has revealed.

Since the beginning of 2017, Android phones have been collecting the addresses of nearby cellular towers—even when location services are disabled—and sending that data back to Google. The result is that Google, the unit of Alphabet behind Android, has access to data about individuals’ locations and their movements that go far beyond a reasonable consumer expectation of privacy.

Quartz observed the data collection occur and contacted Google, which confirmed the practice.

When confronted, Google claimed that the tracking was happening in part to improve message delivery, which Quartz rightly deemed to be a completely bogus explanation.

It is not clear how cell-tower addresses, transmitted as a data string that identifies a specific cell tower, could have been used to improve message delivery. But the privacy implications of the covert location-sharing practice are plain. While information about a single cell tower can only offer an approximation of where a mobile device actually is, multiple towers can be used to triangulate its location to within about a quarter-mile radius, or to a more exact pinpoint in urban areas, where cell towers are closer together.
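To illustrate why even coarse tower data is so revealing: given range estimates to three towers, a device’s position falls out of simple trilateration. A minimal Python sketch (the tower coordinates and ranges are hypothetical; Google’s actual processing is not public):

```python
import math

def trilaterate(towers, dists):
    """Estimate an (x, y) position from three tower positions and range estimates.

    Subtracting the three circle equations pairwise linearizes them into a
    2x2 linear system, which is solved here with Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = towers
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        raise ValueError("towers are collinear; position is ambiguous")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical tower positions (km) and ranges inferred from signal strength.
towers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [5.0, math.sqrt(65.0), math.sqrt(45.0)]
print(trilaterate(towers, dists))  # → (3.0, 4.0)
```

Real-world range estimates are noisy, which is why the article’s quarter-mile figure is about right in rural areas while dense urban tower grids pin the position down much more tightly.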

The practice is troubling for people who’d prefer they weren’t tracked, especially for those such as law-enforcement officials or victims of domestic abuse who turn off location services thinking they’re fully concealing their whereabouts. Although the data sent to Google is encrypted, it could potentially be sent to a third party if the phone had been compromised with spyware or other methods of hacking. Each phone has a unique ID number, with which the location data can be associated.

Read the whole thing.