On the switch from Apple to Google


The other day I casually mentioned to a friend that I was abandoning Apple for Google when it comes to which phone, laptop and accessories I use. Before hanging up on me, he asked one question.


But why?

How could I jump ship on the eve of one of Apple’s most anticipated new hardware releases, the iPhone X?

How could I abandon the platform when we’re into revision two of the iPad Pro ( my favorite product in years ), when the Mac comes in a range of powerful and increasingly attractive form factors, at a time when Apple has seemingly finally figured out how to do services well, and just as the Watch comes into its own with Series 3 and LTE?

On the software side, while iOS 11 does have some confoundingly ridiculous usability and design inconsistencies, overall it is a terrific update, as are macOS High Sierra and watchOS 4… all extremely reliable.

Seriously Apple, much love, but notifications in iOS 11 and how users interact with them are atrocious, just sayin. For more on this, see — https://www.fastcodesign.com/90150025/the-iphone-x-is-a-user-experience-nightmare

Apple is doing what Apple does well: building top-notch hardware that is safe, secure and resoundingly successful.

I had to ask myself though, is that enough?

Reliable. Safe. Secure. Boring?


On the flip side, my platform transition is not about Google suddenly entering a golden age of doing no wrong. While their new-ish focus on hardware is bearing fruit, let’s all admit there is still plenty of work to be done. The latest hardware launches have had their fair share of problems; search for reviews of the screen quality on the Pixel 2 XL, the high cost of the Pixelbook, how privacy concerns forced a feature removal on the Google Home Mini, or how Android 8 Oreo still has “quirks”, and you’ll get an idea of where things are in Google land.

Hardware and software issues aside, my move to a Google-centric life comes down to one core idea: the power of machine learning and AI, and the philosophical difference ( what feels like a massive one ) in how Google is providing access to these technological advances.

Computers should adapt to how people live, we’re in a unique moment of time, where we can bring a unique combination of hardware, software and AI
— Sundar Pichai, Google CEO

Before I continue, quick definitions for AI and Machine Learning, both of which are referenced below; I am not convinced most folks understand the difference.

Machine Learning ( or sometimes just ML ) is the concept of using algorithms to identify patterns and / or make predictions based on a data set.

Artificial Intelligence ( or AI ) is the automation of activities we generally attribute to human thinking and rationality, such as problem-solving, decision making and learning.

So we’re clear, true AI doesn’t exist, at least not in the practical sense…yet.

The majority of what we refer to as AI is really just advanced Machine Learning with extensive behavioral algorithms that adapt as they collect more data; while these systems are improving, they are not yet learning on their own. The industry has a bad habit of using the two terms interchangeably. I am going to stick with Machine Learning, with apologies for the quotes I’m planning to include and their use of AI.
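
To make that a little more concrete, here is a minimal sketch of machine learning in the everyday sense: hand an algorithm a data set, let it find the pattern, then ask it for a prediction. The library ( scikit-learn ) and the commute-time numbers are purely illustrative assumptions, not anything Google or Apple actually ships.

```python
# Minimal machine learning sketch: learn a pattern from a data set, then predict.
# Assumes scikit-learn is installed; the data below is invented for illustration.
from sklearn.linear_model import LinearRegression

# Past observations: [hour of day, is_it_raining] -> commute time in minutes
X = [[8, 0], [8, 1], [9, 0], [9, 1], [17, 0], [17, 1]]
y = [25, 40, 22, 35, 30, 50]

model = LinearRegression()
model.fit(X, y)  # "learn" the relationship hidden in the data set

# Predict tomorrow's rainy 5pm commute from the learned pattern
print(model.predict([[17, 1]]))
```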

Machine Learning is everywhere these days, affecting us in ways big and small; Netflix recommendations are made better, Siri and Alexa understand more of what we say, Tesla cars stop before they crash into things, Amazon recommends more products and Facebook suggests people we may know.
Google-specific examples include displaying traffic delays in Maps, preventing spam in Gmail, taking better portrait photos on phones and automating responses / quick replies in its various messaging apps.

For me, the advantage of Machine Learning is in getting the tools I use to perform as much of the heavy lifting as possible: abundant automation, deeper context, greater access, data shared between platforms and, in the end, having what I need when I need it.

Apple ( and for that matter, everyone in Silicon Valley ) is investing heavily in and embedding ML into their products as well, though slowly and with far more restrictions, limited by what makes the platform such a good choice for the average user: absolute privacy.

Apple wants to be the digital privacy guardian of our time, and it bases each decision it makes across hardware and software in the context of protecting its users. Google, on the other hand, while by no means playing fast and loose with our personal data, is significantly more open to sharing access, and its immense data sets mean it is making progress substantially faster.

What Google is able to do with its vast computational power and somewhat less fervent approach to personal data protection is nothing short of astounding. Combine this technological sophistication with truly powerful, well-built hardware and you have the stage set for the next step in mobile / computer evolution, and the reason behind my switching platforms.

Some things I’m able to do on Google hardware and software:

1. Google Lens

Google Lens is a search engine for your camera, giving you actionable insight just by taking a photo.

A photo of a business card turns into searchable digital data. Snapping a photo of a movie poster or a concert flyer brings up showtimes, adds the event to my calendar and provides a way to buy tickets. When I want reviews of a restaurant or coffee shop, a quick photo of the outside brings up all of the data from Google Maps, and pictures of landmarks highlight historical details and contextual information I might want to know. These are just a few examples of what Lens has done for me in the past week alone.

Yes, I could search for all of the same information by typing searches into Google directly, but there is something so satisfying ( not to mention faster ) about snapping a photo and letting Lens do all the work.

2. Google Voice Translate

Some years ago I moved to Italy, and while I had every intention of learning to speak Italian, a combination of a cottage in the remote countryside and every Italian I met wanting to practice English meant it didn’t happen.

We have a trip to Tuscany planned for February and, with still no ability to speak the language, we’ll be using the Assistant inside Google’s Pixel Buds to perform real-time translation, both to understand everything spoken to us and to converse back in Italian.

Sound like some magical future? If you’ve yet to see footage of this in action, go watch, now, I’ll wait — https://www.youtube.com/watch?v=kWb1ysqtc4o

A Star Trek’ian future, available now for $159.

3. Photos

Both the Pixel 2 and Pixel 2 XL have underwhelming camera specs.

In an age of dual-camera phones ( the iPhone 8 Plus, iPhone X, LG G6/V30, Samsung Note 8, etc. ), the Pixel phones have only one lens each, both at 12 megapixels and both with an f/1.8 aperture, giving the phone, on paper, capabilities any self-respecting photographer would dismiss outright.

Yet average users and photographers alike are taking absolutely stunning photos with this hardware, yet another gift of machine learning and what Google refers to as computational photography.

Compare the results: portrait photos from both the Pixel 2 XL and the iPhone X ( pay special attention to the hair and the edges of clothing ), where you can really see how much clearer the Pixel photos are.
Image: Raymond Wong / Mashable

While the iPhone photo is still impressive, the softness around its edges stands in sharp contrast to the level of fidelity in the Pixel shot. Just to be clear, the hardware on the iPhone is significantly better; the difference in these photos is purely the algorithms behind the curtain.

Machine learning doesn’t make for perfect photos, though. In the photo below, from the Pixel 2 photography showcase, you still get crazy fidelity on the hair, but parts of the jacket ( particularly the right side and hood ) look less than stellar.

I highlight the room for improvement because these are issues that could very well be solved in the coming days or weeks; since machine learning powers so much of the camera’s ability, platform updates behind the scenes and software updates on the phone will upgrade its capabilities long before the next round of hardware arrives.

Check out more of the Pixel 2 promo photos — https://photos.google.com/share/AF1QipO2_gTkgT1QwgYaWCTowaN6d2Cb5rvyJU10cjAdSU9Ao8v9Ec-r1v1cKdWEx6PNqg?key=WEdYT3BMNFZGdUlwQ0l6aEdFT1UwVlg2LUZESDhn

If you’re interested in reading in more detail how Google makes portrait mode so good on the new phones, take a look at this post from the Google Research blog — https://research.googleblog.com/2017/10/portrait-mode-on-pixel-2-and-pixel-2-xl.html

This post, while not directly linked to the Pixel phones, provides great insight into the kind of thinking Google is applying to the problem and has some almost unbelievable photo transformations — https://research.googleblog.com/2017/04/experimental-nighttime-photography-with.html
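
For a rough feel of what computational photography means in practice, here’s a toy sketch of the basic portrait-mode idea: keep the subject sharp and synthetically blur the background, driven by a per-pixel subject mask. To be clear, this is not Google’s pipeline ( which builds the mask from ML segmentation and dual-pixel depth, per the research post above ); the OpenCV calls are standard, but the input files are hypothetical.

```python
# Toy "portrait mode": blur the background, keep the masked subject sharp.
# photo.jpg and mask.png are hypothetical inputs; the mask is white on the subject.
import cv2
import numpy as np

image = cv2.imread("photo.jpg")
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # 255 = subject, 0 = background

# Heavily blur a copy of the whole frame to use for the background
blurred = cv2.GaussianBlur(image, (31, 31), 0)

# Blend per pixel: subject comes from the original, background from the blur
alpha = (mask.astype(np.float32) / 255.0)[..., None]
portrait = (alpha * image + (1.0 - alpha) * blurred).astype(np.uint8)

cv2.imwrite("portrait.jpg", portrait)
```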

4. Google Assistant / Voice Recognition

Comparing the quality of the Google Assistant to Siri, Cortana, Alexa and Samsung’s Bixby has been a hot topic this past year. YouTube especially is littered with comparison videos and informal tests, each poking and prodding to see which system can give the best responses. While the Google Assistant has come out on top of a good number of these examinations, I have to disagree with those who say it’s close… it’s not.

First, Google voice recognition and transcription blows every other service out of the water. At home ( where I usually work ), I type almost nothing out manually anymore; with no one to disturb, I dictate my emails, my texts, my tweets, even some of my longform writing, and it works beautifully. Powered by machine learning and listening for the patterns of speech that make the most sense, Google rarely makes mistakes, and when it does, I’m prompted with a correction option that is almost always what I intended to write.

Same goes for giving the Google Assistant, on any device, instructions on a task. By saying ( or sometimes yelling across the room ) “Hey Google”, I can get music from Spotify streaming, play videos on my living room TV, find out what the weather is for tomorrow, have my messages or email read to me, make reservations through OpenTable, find out whether the Oilers or Warriors won last night, get an automated news briefing when I wake up, turn off the lights, or adjust the temperature in my apartment…the list goes on and on.

Take a read of this comprehensive guide to Google Home commands for more of what you can ask the Assistant: https://www.digitaltrends.com/mobile/list-of-ok-google-voice-commands/2/


For some of the automation the Google Assistant provides, an argument can be made that it’s only saving seconds a day; for example, an automated weather report delivered by the Assistant every day at 9am means not having to pull out the phone, launch a weather app and check the information. While I firmly believe those seconds add up, we’re not talking massive time savings.

That is until you start getting into what Google calls shortcut commands.

A shortcut command is a series of actions, undertaken by the Google Assistant and launched by a custom phrase, that lets you string multiple Assistant commands together.

For example, if I do happen to be working from somewhere other than my house, I can say, “Hey Google, it’s home time” and, all from one command, a text is sent to my girl saying I’m on the way, the heat is turned up in the apartment ( or turned down in summer ) and a commute playlist is queued up for me in Spotify.
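
Conceptually, a shortcut is nothing more than one custom phrase mapped to a list of individual commands. The sketch below is not Google’s Assistant API, just an illustration of the “one phrase, many actions” idea using the example above; the phrase and actions are placeholders.

```python
# Conceptual sketch of a shortcut: one phrase expands into several commands.
# This is an illustration only, not Google's Assistant API.
SHORTCUTS = {
    "it's home time": [
        "text my girl that I'm on the way",       # placeholder action
        "turn up the heat in the apartment",
        "queue my commute playlist on Spotify",
    ],
}

def run_shortcut(phrase: str) -> None:
    """Expand a custom phrase into the individual commands it stands for."""
    for command in SHORTCUTS.get(phrase.lower(), [phrase]):
        print(f"Assistant would now: {command}")  # stand-in for the real action

run_shortcut("It's home time")
```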

Shortcuts open up myriad possibilities and real time savings. Check out some further examples in the link below, and add your own by launching the Assistant and finding Shortcuts in its settings.

https://www.digitaltrends.com/android/google-home-shortcuts-guide/2/

One last thing about the Google Assistant, something I haven’t used that much: Actions. Actions are basically assistants within the Google Assistant, allowing a direct voice connection to content and functionality from third-party partners.

Current Actions include everything from an auto meme generator called Meme Buddy to transit schedule updates for numerous cities.

5. More of what makes life with Google better

A random sampling of little things Google / Android can do that, while not life-changing, are a unique mix of helpful and interesting.

Apps are smarter, or at least better connected. For example, the Genius app can detect what song Spotify is currently playing and offer lyrics and song notes, and LastPass ( a strong password generator and manager ) has full access to fill in login information and form data automatically, across both apps and browsers. Across all the apps running on my phone, the system share sheet populates itself with common connections, both people and apps, to better enable the sharing of data.

My phone keeps itself unlocked when I’m at home, using the connection to my home wifi to verify location. It can do the same while in my hand or pocket using something called On-Body Detection, essentially staying unlocked until it’s set down on a table or desk.

Google Maps highlighted the running of the Seattle marathon and how our next day’s travel plans would be affected by a longer commute.

Android allows for split screen operation, running two apps side-by-side, a godsend in the age of large screen mobile devices.

At QFC ( a supermarket chain ) the other day, my phone automatically detected my location and brought up my loyalty card for scanning. The same thing happened standing outside a restaurant last week: the menu appeared.