In many ways, 2016 has been a year of disappointments for the tech industry. Many of the disruptive technologies that were hyped this year fizzled out after failing to generate significant consumer interest. In retrospect, I think it’s apparent that the industry has become spoiled by the attention garnered by a once-in-a-lifetime product — the smartphone.
As the pace of smartphone improvement has slowed, tech companies eager to capitalize on their newfound position as the center of attention have raced to replicate its success and scale, resulting in products that are both overhyped and unfocused. The Apple keynote address has become a genre unto itself, the subject of both imitation and parody. While Silicon Valley in particular has always loved its “change the world” rhetoric about technological disruption, the newest iteration of the Apple TV doesn’t have to come with a grand pronouncement that “the future of TV is apps.” It’s alright for it to just be a better way to watch TV. Not every product will, can, or even should attempt to revolutionize (and democratize) technology to the degree that the pocket supercomputer did.
Steve Jobs crystallized this idea at the end of his 2010 introduction of the original iPad, when he positioned Apple as existing at the intersection of technology and the liberal arts.
The reason that Apple is able to create products like the iPad is because we’ve always tried to be at the intersection of technology and the liberal arts. To be able to get the best of both. To make extremely advanced products from a technology point of view, but also have them be intuitive, easy-to-use, fun-to-use, so that they really fit the users. The users don’t have to come to them, they come to the user. And it’s the combination of these two things that I think has let us make the kind of creative products like the iPad.
There has been too much technology for technology's sake this year. In 2017, I hope that technology companies (including Apple) take Jobs’ advice and focus on building products that improve people’s lives again.
Jobs astutely positioned Apple at the intersection of technology and culture. This site exists there as well. On the culture side, it’s been a year packed with highly anticipated releases and plenty of highlights. These are the albums, apps, and games that I enjoyed in 2016¹.
All lists are ordered alphabetically.↩︎
After just about a year of being hosted on Typed, bytesized.co is now statically generated and hosted on S3¹ — and it’s generated with Swift!
For the static generator, I’m using a fork of the open source engine that powers the Spelt blogging software by Niels de Hoog. It’s lightning fast and offers some nice features like local preview with auto-regeneration. It’s also easily customizable if you’re familiar with Swift. Under the hood, Spelt uses Kyle Fuller’s Swift template language Stencil to provide Mustache-style templating. If you’re curious about what this looks like in practice, I’ve published the source for the new version of the site on GitHub.
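For a taste of what that Mustache-style templating looks like, here is a minimal sketch of rendering a Stencil template from Swift. Treat it as illustrative: the `Template` initializer and `render` call match Stencil’s basic API, but exact signatures have varied between Stencil versions.

```swift
import Stencil

// {{ … }} renders a variable; {% … %} runs a template tag
let template = Template(templateString:
    "{% for post in posts %}* {{ post }}\n{% endfor %}")

// Render with a context dictionary, much as Spelt does for site and page metadata
let output = try template.render(["posts": ["2016 in Review", "Swift on Linux"]])
print(output)
```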
The combination of the Spelt CLI and the AWS CLI makes updating the site painless. No more copying and pasting articles into a rich-text web editor, hoping the formatting doesn’t get mangled. No more worrying about whether the site can parse the flavor of Markdown I’m writing in. When an article is ready to be published, I just save the .md file to the _posts directory and run this script:
aws s3 sync . s3://bytesized.co/ --exclude "*.DS_Store*"
Spelt builds the site and any changes are synced with the S3 bucket they’re served from.
Now that I’ve spent a few days refreshing the technical side of the site, I’m looking forward to writing more regularly. Moving into the new year, my goal is to write at least one post a month — so if you notice that I’m slacking, remind me.
If you’re interested in hosting your own static site on S3, I’d recommend starting with the guide.↩︎
As a remote iOS developer, I love Slack. It’s both my meeting room and my water cooler. So as interest in bots exploded this year, mine was piqued too — of course I wanted to write bots for one of my favorite services! My love of Slack and my love of Apple’s new programming language, Swift, came together in the form of SlackKit, a Slack client library for iOS, tvOS, and macOS. Unfortunately, it’s not very practical to run your Slack bots on a Mac or iPhone, and SlackKit wasn’t compatible with Linux — until now.
Zewo to Sixty on Linux
Even in the rapidly changing world of technology, the server-side Swift ecosystem is very new. Apple’s port of Foundation to Linux is a huge undertaking, as is the work to get libdispatch, one of the main concurrency frameworks that Foundation relies upon, up and running on the platform. Fortunately, a vibrant ecosystem of open source software has emerged to fill in the gaps left by Apple’s official libraries. In researching the possibilities for Swift on Linux, I discovered the open source organization Zewo, a large part of this budding community. If this sounds interesting to you, you should get involved! (Oh, and of course they have a Slack.)
Do You Want to Build a Slack Bot?
The following is a step-by-step guide to writing a Slack bot in Swift and deploying it to Heroku. The focus here is on macOS, but this is also doable on Linux — just skip the Xcode steps and use your editor of choice.
[Update 1/2/2017: I’ve updated these instructions to work with Xcode 8.2.1 and the official release of Swift 3.]
Building the Application
For our example, we’re going to build a bot that can render judgment on a very specific question: Robot or Not?
First, we need to create the directory for our application and initialize the basic project structure.
mkdir robot-or-not-bot && cd robot-or-not-bot
swift package init --type executable
Next, let’s edit our Package.swift to add the SlackKit package as a dependency:
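A Swift 3-era Package.swift along these lines should do it (the `majorVersion` here is an assumption — match it to SlackKit’s current release):

```swift
import PackageDescription

let package = Package(
    name: "robot-or-not-bot",
    dependencies: [
        // The major version is an assumption; check SlackKit's README for the current one
        .Package(url: "https://github.com/pvzig/SlackKit.git", majorVersion: 3)
    ]
)
```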
and generate our development environment:
swift package generate-xcodeproj
Show Me the Swift Code!
To create our bot, we need to open the robot-or-not-bot.xcodeproj file we just generated and edit the main.swift file in Sources > robot-or-not-bot to contain our bot logic. The following code uses SlackKit to listen for messages directed at our bot and then respond to them by adding a reaction to the inquiry.
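A sketch of that main.swift logic is below. The SlackKit type and method names (`SlackKit(withAPIToken:)`, `notificationForEvent`, `webAPI.addReaction`) are assumptions based on the 2016-era API and may differ in current releases, and the verdict list is purely illustrative:

```swift
import Foundation
import SlackKit

// Replace with the bot token generated by Slack
let bot = SlackKit(withAPIToken: "xoxb-SLACK-API-TOKEN")

// A tiny illustrative verdict set; the real bot tracks Robot or Not? rulings
// (Darth Vader, for instance, is not a robot)
let robots: Set<String> = ["r2-d2", "wall-e"]

// Listen for message events and respond to any that mention our bot
bot.notificationForEvent(.message) { (event, client) in
    guard
        let message = event.message,
        let channel = message.channel,
        let timestamp = message.ts,
        let text = message.text,
        let botID = client.authenticatedUser?.id,
        text.contains("<@\(botID)>")
    else { return }
    // Strip the mention and trailing "?" to get the subject of the question
    let subject = text
        .replacingOccurrences(of: "<@\(botID)>", with: "")
        .trimmingCharacters(in: CharacterSet(charactersIn: " ?"))
        .lowercased()
    let reaction = robots.contains(subject) ? "robot_face" : "no_entry_sign"
    // Add the verdict as a reaction to the asker's message
    client.webAPI.addReaction(name: reaction, channel: channel, timestamp: timestamp)
}

// Keep the process alive to continue receiving events
RunLoop.main.run()
```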
Setting Up Your Slack Bot
Next, we need to create a bot integration in Slack. You’ll need a Slack team that you have administrator access to; if you don’t already have one to play with, go sign up — Slack is free for small teams.
- Go here: https://my.slack.com/services/new/bot
- Enter a name for your bot. We’re going to go with “robot-or-not-bot” so there’s no confusion about our bot’s sole purpose in life.
- Click “Add Bot Integration”
- Copy the API token that Slack generates and replace our placeholder token in main.swift with the real deal.
Testing Locally
With our bot token in place, we’re ready to do some local testing! Back in Xcode, select the robot-or-not-bot command line application target and run your bot (⌘+R).
Then head over to Slack; robot-or-not-bot’s user presence indicator should be filled in. It’s alive!
To test if it’s working, ask it if something is a robot:
@robot-or-not-bot Darth Vader?
robot-or-not-bot should add the 🚫 reaction in response to your question, letting you know that Darth Vader is not a robot.
Deploying to the ☁️
Now that it’s working locally, it’s time to deploy. To the cloud! We’re going to be deploying on Heroku, so if you don’t have an account go and sign up for a free one.
First, we need to add a Procfile for Heroku. Back in the terminal, run:
echo slackbot: .build/release/robot-or-not-bot > Procfile
Next, let’s check in our code:
git add .
git commit -am 'robot-or-not-bot powering up'
Finally, we’ll set up Heroku:
1. Install the Heroku toolbelt
2. Log in to Heroku in your terminal:
heroku login
3. Create our application on Heroku and set our buildpack:
heroku create --buildpack https://github.com/kylef/heroku-buildpack-swift robot-or-not-bot
4. Set up our Heroku remote:
heroku git:remote -a robot-or-not-bot
5. Push to master:
git push heroku master
At this point, you’ll see Heroku go through the build process — exciting!
Once the build is complete, run:
heroku run:detached slackbot
Over in Slack, you’ll see robot-or-not-bot’s user presence indicator fill in. It’s alive! (again)
Just to be sure it’s working, we should ask it an existential question:
@robot-or-not-bot Robot Or Not Bot?
robot-or-not-bot will (sadly, I imagine) add the 🚫 reaction to your question — it knows it is just a computer program, not a robot.
🎊 You’re Done! 🎊
Congratulations, you’ve successfully built a Slack bot in Swift and deployed it to a Linux server!
The Amazon Echo is a certified hit. Voice now seems obvious as the natural interface for devices within the confines of the home—and the Echo looks like a glaring miss for Apple, Google, and Microsoft, companies that have mature digital assistants and the hardware expertise to produce an Echo-like device. Spurred on by an increasing number of open APIs, Amazon has been adding functionality at a rapid pace—you can now order an Uber, control your thermostat, stream music from Spotify, and more¹ from the Echo. It feels like the beginning of the smart home revolution that tech companies have been promising since Bill Gates laid miles of Ethernet and fiber-optic cable in his Seattle mansion in the late 1990s. Competitors are already starting to arrive, but it’s a market that’s still in its infancy—and one that Microsoft, Facebook, Google, and Apple will all want a piece of. As the Echo is showing us, these digital assistant-powered devices are primed to be the smarts behind the smart home.
Microsoft’s digital assistant, Cortana, is built into Windows 10. While Cortana is relatively new and unrefined compared to the competition, Microsoft has the necessary technologies to compete in this space, with impressive cloud infrastructure and an artificial intelligence framework. They also have existing products like the Kinect and the HoloLens that could enable interesting complementary experiences.
Over in Menlo Park, Facebook is building a text-based assistant with its “Facebook M” project. Built into Facebook Messenger and powered by Facebook’s own AI, Facebook M is currently in closed beta. An acquisition of or partnership with the well-received digital assistant application Hound would make for an interesting software-focused play for Facebook, especially given SoundHound’s Houndify platform initiative.
Google has had Google Now, their digital assistant technology, since it was released with Android 4.1 (“Jelly Bean”) in 2012. Their “OK, Google” phrase activation is better than the competition’s, and given their interest in the smart home, expertise in server infrastructure, and artificial intelligence prowess², this seems like a natural product area for the company. They already have a product that sits in the home with the Google OnHub. It has a speaker, but no microphones and none of the software functionality that makes the Echo so useful. One possible reason for this is that Nest, a Google acquisition focused on smart home products, has been tainted by its association with “creepy Google”³. An anonymous source inside Google acknowledged as much to Recode:
Senior executives at Nest had considered making a product similar to Echo, a voice-activated personal assistant, according to sources. But the plans were never hatched, largely out of concern that consumers would be too reticent of such a device tied to the search giant.
“At the end of the day, it’s Google,” said one source familiar with the situation. “There are trust issues.”
It’s clear that at least some people feel that an always-on, always-listening device from an Alphabet (née Google) company may not be welcome in their homes.
Finally, famously not always first to market in a given product category, Apple is uniquely poised to make a play in this space. Their big advantage is their existing mobile operating system, iOS, an absolutely massive platform with over 1.5 million applications available for download. Apple could leverage this platform to create a peerless experience by opening up an API that allows applications to register a vocal command interface with Siri in their upcoming release of iOS, iOS 10⁴.
Much of the technical work for this has already been done—in iOS 8 they introduced extensions, allowing third-party applications to safely interact with the system; in iOS 9 they introduced the Core Spotlight framework, allowing applications to register information as searchable by the system; and assuming the rumors that Siri is coming to the Mac this summer are true, the architecture has already been untied from the iPhone. Apple’s home speaker hardware (iPod, anyone?) would talk to your phone wirelessly⁵ and offer any Siri integrations for apps you already have on your phone, in addition to its built-in functionality—tight integration with Apple Music and Apple’s other services. No need to hassle with yet another App Store⁶. By leveraging the work developers had already done to support Siri in their iOS apps, Apple could debut their product with tens of thousands of integrations, all available on day one.
While there are hurdles—like Siri’s general usefulness and reliability⁷—Apple has long prided themselves on offering high-quality products differentiated by their “whole widget” approach of building both the hardware and the software. With their legendary industrial design team, Apple’s ability to create a beautiful object is beyond doubt. If they can nail the software as well, they have legions of fans already invested in the iOS ecosystem waiting to give them a foothold in the smart home.
There are over 300 “skills” (Amazon’s parlance for applications) currently available for the Echo.↩︎
Google’s AlphaGo AI, powered by DeepMind, just became the first AI to ever beat a world-class player at the board game Go.↩︎
I think that Google Glass, despite its ultimate failure as a product, was the tipping point for public perception in this regard.↩︎
iOS 10 will be announced this summer at WWDC.↩︎
Because of the importance of latency with a voice interface and the flakiness of current wireless technologies, the speaker would need to somehow download these Siri API modules, either from your phone or from the cloud.↩︎
Apps on the Apple Watch have largely been a flop. Not every platform needs its own App Store.↩︎
Apple has been slowly improving Siri.↩︎