After investing in a little graphics tablet, I thought it would be fun to create a semi-regular tech web comic about a technical writing dog called Docs Dog:
Docs Dog #1
I’ll be posting further comics here.
One of the first tasks I was given as a technical writer was writing a set of release notes. For the most part it involved pulling together and reading through developer notes, removing jargon and rewriting the text in concise English that customers could understand.
More often than not, the release notes felt like a bit of an afterthought, a chore that developers put off until the very last minute. While it might sound fairly simple, writing release notes is an important and yet under-appreciated task that requires more skill, care and attention than it is sometimes given credit for.
Although it is still fairly common to find release notes that simply state “bug fixes and improvements”, companies are investing more and more time and effort in making their release notes stand out. So, what is the purpose of release notes? And what is the best way to write them?
Release notes, sometimes called the change log or “app updates”, are the documentation sent out with the latest update or version of your product, informing customers what has changed and what is new in the release.
Google technical writer Sarah Maddox gave the following advice about release notes:
“The most important function of release notes is to let customers know that something has changed in the product, particularly when that something may affect the way the customer uses the product.”
“The change may be a new feature in the product, an entirely new product, a change to the way the product works, a change to the way the customer uses the product, the removal of a feature, or even the deprecation of the entire product.”
Some key questions to think about when writing release notes are: What has changed? Why has it changed? And how does it affect the customer? If you answer all of those, you won’t go far wrong.
Although there are no official guidelines to writing release notes, there are some general principles you can follow to ensure your release notes are informative and useful.
Historically, release notes have been quite dry and technical, so not much effort went into making them engaging. That has started to change.
Hometap head of product Adam Sigel said he looked forward to app updates not only to find out about new features but also in the hope of finding something good to read:
“Release notes are a really interesting engagement opportunity to me — most people don’t read them, but those that do represent a highly targeted audience of very engaged users. Every company with an app has to write them, and I love to see who treats it like an opportunity instead of a chore.”
Head of Growth at Paystack Emmanuel Quartey added: “App update release notes are a very small user touchpoint, but with just a little bit of imagination, they can be a way to connect with users on a whole other level.”
While some companies have started to use release notes as a small platform for expressions of creativity and comedy, it’s not an entirely risk-free art form.
However, speaking at the Write the Docs conference, technical writer Anne Edwards said she felt that “funny, quirky and friendly” release notes were often too wordy so either the main message was obscured or they created more work and confusion for the reader, especially for non-native English speakers.
She raises some valid points, but when Tumblr produced a release note that was basically a 471-word fanfic-style story featuring its founder David Karp, it went viral and was featured in the Guardian and Business Insider:
Some people might not have found that release note very helpful because it contained no information about what was actually in the release, but it demonstrated the power that a humble release note can have as a marketing tool.
Medium is another company that is creative and off-the-wall with its release notes, no doubt a reflection of its mission to inspire creativity in the millions of people who use the platform. Medium’s release notes have appeared in the form of haiku, a fake Slack conversation, song lyrics and even an ASCII picture of a bug:
However, even the Medium writers behind the release notes admitted they were having to rein in some of the creativity of their content because users wanted more details about what was in each release. In an interview with The Verge, Medium’s community manager Nick Fisher said:
“The most common blowback we get is from people who want to know what’s in the release. They hate these because they have no idea.”
There is sometimes a fine line between being funny and being irreverent so it’s no surprise that some companies have started to come under fire for their release notes. People don’t always appreciate jokes or zany content if it doesn’t also provide any meaningful update about the product they’ve invested their time and money in.
https://twitter.com/jaredsinclair/status/633407338347634688
In her TechCrunch article “App Release Notes Are Getting Stupid”, writer Sarah Perez said she felt some companies were being irresponsible and disrespectful to customers by not providing decent information in their release notes:
“This inattention to detail is a disservice to users, who no longer have the benefit of understanding what the updated app will now do — or not do — as the case may be […] They don’t know what functionality has changed or how the user experience is being affected. They don’t know if the changes are even bad or good.”
She continued: “At the end of the day, if a developer wants to have fun with the release notes, that’s up to them. But no matter what, they should still feel a responsibility to their customers to communicate what’s being installed on the end users’ devices.”
Slack felt the need to apologise for their overuse of humour a few years ago, but in general they’re good at striking the right balance, producing release notes that are both funny and useful to the end user:
Asana is another company recognised for funny and informative App Store release notes (see here, here and here). Interestingly, however, Asana also produce a more formal, strait-laced version of their release notes on their website. Perhaps this is a good way to appeal to the different audiences in your customer base.
It might sound obvious but it’s important to be careful and professional about the language you use in release notes. At a previous employer, one of my developer colleagues wrote the following as a placeholder for one of the tickets for some internal release notes:
TBD - a shit tonne of configuration changes
The documentation team missed it and although we found it funny at first, our smiles soon dropped when we realised the release notes had gone out to a customer. The shit hit the fan so to speak 💩.
Remember that you’re a person speaking to another person when writing release notes. It’s another layer of user experience that helps you connect to your customer on a human-to-human level. For example, “We are doing x for you”:
Most of the focus typically goes on the content of release notes but it’s also worth considering the visual design of your release notes. Some companies are going the extra mile to make their release notes pages visually interesting. GatherContent has a colour-coded, interactive updates page:
Similarly, Todoist use different emoji as visual aids to inform their customers of the different change types in each release, using ⚙️ for improvements, 🐛 for bugs and ⭐ for new features:
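To give a flavour, a hypothetical entry in that style might read:

⚙️ Improved sync speed for large projects
🐛 Fixed a crash when completing a recurring task
⭐ You can now add reminders from the quick-add bar

Scanning a list like this, a customer can immediately pick out the change types they care about.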
Product designer Rob Gill wrote a brilliant post about release notes design in which he advocates (among other things):
Release notes are a great opportunity to reward loyalty, especially as the people who read them are more likely to be your most dedicated and loyal customers. PolyMail took this approach and rewarded users who read their release notes with stickers:
PolyMail co-founder Brandon Shin, who wrote about how they make release notes more exciting in this post, said: “We looked for more ways to grow this feeling of appreciation and interaction. Sometimes we tucked in small prizes in the release messages, giving stickers to people that always took the time to read through.”
It doesn’t have to be a physical reward. Citymapper recently rewarded readers of their latest update by telling them about a new transport pass that could save them money in London.
Not necessarily. Facebook took the somewhat controversial decision to stop producing detailed release notes, delivering in-app notifications about new features and changes instead. It wasn’t particularly popular with some users:
Amidst the backlash, a Facebook engineer posted on the MacRumors website to defend the decision.
“… to describe every one of the thousands of changes that go into our mobile applications each and every release, the plain fact is that is just impossible. Many changes are under the hood for performance and bug fixes.”
He went on to describe the difficulties of providing release notes for pieces of work on features that haven’t been released yet and argued it was easier to provide in-app walkthroughs rather than putting blurbs in the App Store.
“We’re not trying to keep secrets from you. There are just simply better ways of telling you what’s interesting when those features are ready for you.”
Yes, apparently they do. I’ve been conducting a survey to find out how many people actually read release notes, how regularly they read them and why. The numbers were a lot higher than I expected:
At the time of writing I had 364 responses, with 83.2% saying they read release notes or app updates. I’ll write about my findings in my next post so watch this space!
Ultimately, release notes are totally subjective. Some readers just want the factual information, while others want to be entertained. My advice would be:
In the end, it is up to you to find the style and balance that is right for you and your company, but as long as your release notes provide users with meaningful and informative content, they’re definitely worth the time and effort.
The late Stephen Hawking famously said that artificial intelligence would be “either the best, or the worst thing, ever to happen to humanity.” As a technical writer documenting AI technology, I’d like to believe it will be the former, and it’s fair to say we have already seen positive signs of how AI might shape and assist with documentation in the future.
A number of tech companies have already dipped their toes into the water, with some developing AI-assisted, predictive content generation and others harnessing machine learning to predict the help content the end-user is looking for.
In May 2018, Google introduced Smart Compose, a natural language processing feature that helps Gmail users write emails. It combines a bag-of-words (BoW) model with a recurrent neural network (RNN) model to predict the next word or word sequence the user will type, based on the prefix word sequence they have already written.
Smart Compose was trained on a corpus of billions of words, phrases and sentences, and Google carried out rigorous testing to make sure the model only memorised the common phrases used by its many users. The Google team admits it has more work to do and is working on incorporating personal language models that will more accurately emulate each individual’s style of writing.
Arguably the biggest challenge they face is reducing the human-like biases, and the subsequent unwanted and prejudicial word associations, that AI inherits from a corpus of written text. Google cited research by Caliskan et al which found that machine-learning models absorbed stereotyped biases. At the most basic level, the models associated flower words with something pleasant and insect words with something unpleasant. More worryingly, the research found the models also adopted racial and gender biases.
The research found that a group of European American names were more readily associated with pleasant than unpleasant terms when compared to a batch of African American names. Researchers also found inherited biases included associating female names and words with family and the arts while male names were associated with career and science words. Yonghui Wu, the principal engineer from the Google Brain team, said: “…these associations are deeply entangled in natural language data, which presents a considerable challenge to building any language model. We are actively researching ways to continue to reduce potential biases in our training procedures.”
With 6.9 million daily users, Grammarly is one of the most common tools people use to check the accuracy of their spelling and grammar. The company are experimenting with AI techniques including machine learning and natural language processing so the software can essentially understand human language and suggest writing enhancements.
Grammarly has been training different algorithms to measure the coherence of naturally-written text using a corpus of text compiled from public sources including Yahoo Answers, Yelp Reviews and government emails. The models they have experimented with include:
Although this is still a work in progress, their long-term goal is for Grammarly not only to tell you how coherent your writing is but also to highlight which passages are difficult to follow.
Some companies have started to look at ways that AI can help with predicting and directing readers to the exact content they are looking for. London-based smart bank Monzo launched a machine-learning powered help system for their mobile app in August 2017.
Their data science team trained a model of recurrent neural networks (RNNs) on commonly asked customer support questions to make predictions based on a sequence of actions or “event time series”. For example:
User logs in → goes to Payments → goes to Scheduled payments → goes to Help.
At this point, the help system provides suggestions relating to payments and as the user starts typing, returns common questions and answers relating to scheduled payments. Their initial tests showed they were able to reach 53% accuracy when determining the top three potential categories that users were looking for out of 50 possible support categories. You can read more about their help search algorithm here.
I think we will see more content-composition tools like Smart Compose emerge, but it will take a lot of time and work before they can be trained to effectively assist with the complex and often unpredictable user-oriented content that technical writers are tasked with producing on a daily basis.
I’m sure some technical writers are already using Grammarly to assist with their spelling and grammar. It can be a really powerful tool for ensuring your text is accurate, and in the future it may be able to measure the coherence of your writing too. I’ve dabbled with Grammarly but found it either wasn’t compatible with certain tools or prevented some of my applications from working, so it became a bit of a hindrance rather than an assistant for me personally. No doubt these are kinks they will iron out at some point down the line.
I do see the benefits of AI-assisted help so it would be awesome to see some more development in this area. It really could be something that saves customer support and documentation teams a lot of time in terms of predicting and directing end-users to answers before they’ve even asked a question.
So are we there yet? Not quite… but I think some very promising foundations have been laid. While some technical writers might be concerned, I think it will be a very long time before AI is advanced enough to supplant our role in development teams. So don’t be afraid of AI; for the time being, these tools are only going to make our lives easier!
If you want to build a simple but attractive API documentation site, you can’t really go wrong with an open-source tool like Slate. Despite being created by a teenage developer during a summer internship, it has become an incredibly popular tool: the project has been forked more than 15,000 times, and well-known organisations including NASA, Best Buy, Monzo and Skyscanner all use it.
Slate is a Ruby-based tool that generates a great-looking, three-panelled API documentation static site from a set of Markdown files. It was built by developer Robert Lord in 2013 when he was an 18-year-old intern at travel software company TripIt. He convinced his boss at the time to let him open-source the project and the rest is history.
He told me he found it pretty surreal that so many people were using and maintaining his “buggy project” nearly six years later. However, the results speak for themselves — you can see some examples in the Slate in the Wild repository.
Before you begin, make sure you have met the following requirements:
In this example, I’m going to use the generic Swagger Petstore example, which I have saved to my Desktop as petstore.yaml. To convert this to Markdown using swagger-to-slate, open a terminal and run:
swagger-to-slate -i ~/Desktop/petstore.yaml
This saves a file called petstore.md in the same location as the .yaml file. Once you have this, you can get started with Slate.
To build your API documentation site using Vagrant, follow these steps:
1. Fork the Slate repository on Github and clone your fork using git clone. For example: git clone https://github.com/<your_github_username>/slate.git
2. Go to ~/slate/source and remove the index.html.md file.
3. Rename your generated Markdown file (petstore.md in this example) to index.html.md and copy it into the source folder. For example: cp ~/<local_path>/index.html.md ~/<local_path>/slate/source/
4. From the slate folder, run: vagrant up
You can also use Docker to create your site but you must edit some additional files. Although this method is not officially supported, I had no issues when I tried using it.
If you are using Ruby version 2.5.1 or newer, you will need to create three files in the root of your slate folder.

First, a .dockerignore file containing:

.git
source

Second, a Dockerfile containing:

FROM ruby:2.5.1
MAINTAINER Adrian Perez <adrian@adrianperez.org>

VOLUME /usr/src/app/source
EXPOSE 4567

# Middleman needs a JavaScript runtime
RUN apt-get update && apt-get install -y nodejs \
&& apt-get clean && rm -rf /var/lib/apt/lists/*

# Copy the app into the image and install its gems (steps assumed here;
# without them the container has nothing to serve)
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN bundle install

CMD ["bundle", "exec", "middleman", "server", "--watcher-force-polling"]

Third, a docker-compose.yml file containing:

app:
  build: .
  ports:
    - 4567:4567
  volumes:
    - ./source:/usr/src/app/source
After you create these files, open a terminal in the slate folder and run:

docker-compose up

Once the container is running, you should be able to view your documentation site at http://localhost:4567.
For alternative Docker methods refer to this documentation.
Alternatively, if you want to run your Slate site locally, you can use Bundler. To use this method you must have Ruby version 2.3.1 or newer installed. To check which version you have installed, run: ruby -v
If you need to install Ruby, see the installation documentation for the different methods available.
Once Ruby is installed, you can install Bundler: gem install bundler
To build your API docs site using Bundler, run the following commands:
cd slate
bundle install
bundle exec middleman server
That’s pretty much it! Your site should now be available at http://localhost:4567. Good luck.
I recently gave a talk at the API the Docs conference in London where I was finally able to share some valuable advice about GraphQL documentation. My talk followed my journey from first being told that GraphQL was self-documenting and didn’t need documentation, to speaking to GraphQL co-creator Lee Byron in my quest for answers and receiving the words of wisdom that I was able to share at the conference.
After initially being told by a developer that GraphQL wouldn’t need documentation, I was pretty sceptical, but as I started researching I found numerous examples of developers advocating GraphQL’s self-documenting nature, with someone even declaring that it didn’t need documentation.
Although the majority of people were fairly positive about the self-descriptive features, one tweet from a developer who was unhappy with the GraphQL documentation he had encountered made me realise I might be onto something.
I explored what is meant by self-documenting – something written or structured in such a way that it can be understood without prior knowledge or documentation – and highlighted how the PC Magazine definition came with this caveat about subjectivity:
“It’s very subjective. However, what one programmer thinks is self-documenting may truly be indecipherable to another.”
I investigated the risks of that subjectivity: how homographs such as “second”, “number” and “subject” have multiple meanings and might be interpreted differently. I also shared different opinions on self-documenting code, including the view that it is a myth and an excuse for developers to avoid writing documentation:
I also referred to a blog post by Write the Docs co-founder Eric Holscher who said self-documenting code was “one of the biggest documentation myths in the software industry”, adding that the self-documenting argument boils down to:
Holscher argued that people who believe in a world of self-documenting code are actually making it more difficult for normal people to use their software.
To test some of these self-documenting claims, I stripped out the introductory documentation from the Github GraphiQL explorer and asked six of my colleagues (members of QA, development and documentation) to try and retrieve my name, location and avatar URL with a GraphQL query from just my Github login name.
The results were pretty interesting: nearly all of them struggled with the syntax and ran into similar parsing errors. The amount of time it took them to formulate a query through trial and error proved to me that GraphQL isn’t actually that intuitive without an example query or some hand-holding documentation to get you started.
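For reference, the exercise can be solved with a query along these lines (user, login, name, location and avatarUrl are from Github’s public GraphQL schema, though treat this as a sketch rather than the exact query my colleagues arrived at):

query {
  user(login: "<github_login>") {
    name
    location
    avatarUrl
  }
}

Next to a worked example the syntax looks obvious; the point is that nobody found it obvious without one.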
Another common issue I encountered with some GraphQL APIs was developers either failing to add descriptions or using ‘self-descriptive’ as the description for queries and fields that weren’t particularly descriptive. Some of these relied on assumed knowledge, expecting the end user to have prior knowledge of the schema and the data it relates to.
After looking at the GraphQL spec, I found this line, which might explain why some developers are not including descriptions: “All GraphQL types, fields, arguments and other definitions which can be described should provide a description unless they are considered self descriptive.”
Whether they realise it or not, the issue is these people are unintentionally making it difficult for people to use their APIs. GraphQL co-creator Lee Byron spoke about the importance of naming at the GraphQL Summit in 2016:
“It’s really important not just to have names that are good but to have names that are self-documenting […] Naming things that adhere closely to what those things actually do is really important to make this experience actually work.”
I thought this was pretty interesting, but I still wanted a definitive answer about GraphQL documentation, so I emailed Lee Byron, who also happens to be the editor of the GraphQL spec, and asked if he would answer some of my questions. To my surprise, he agreed to an interview back in September. We spoke for about half an hour; he told me all about the history of GraphQL and his hopes for its development, and we touched upon documentation. When I asked him about the importance of descriptions in GraphQL, he gave the following advice:
“APIs are a design problem, way more than they’re a technical problem and you know this better than anybody else if you’re working on documentation.
If there’s no documentation, it doesn’t matter how good the API is because so much about what makes an API work well is mapping well to your internal mental model for how something works and then helping explain those linkages and the details.
If you do that wrong, it doesn’t matter how good your API is, people aren’t going to be able to use it.”
“GraphQL doesn’t do that for you, it provides some clear space for you to do that.
There’s the types and the fields and introspection, you can add descriptions in places so it wants to help you but if you don’t put in the thought and you end up with a poorly designed API, that’s not necessarily GraphQL’s fault right?”
I asked Lee’s permission to use the video clip of him giving this advice during my talk as I knew it would resonate with other API documentarians and having one of the GraphQL co-creators validate what I’d set out to prove all along was a pretty awesome mic drop moment for me!
Lee Byron spoke about how GraphQL provides you with “clear space” for the documentation: the types, the fields, the descriptions and introspection. So by using self-descriptive names for the types and fields and by using good descriptions in your GraphQL schema, you make it a much more user-friendly experience for your end user. I have highlighted where descriptions appear in GraphiQL for the Star Wars API (SWAPI):
However, these descriptions will only get you so far because documentation generated dynamically from your schema is never going to be very human.
Former technical writer and developer Carolyn Stransky spoke about this issue and a number of other blockers she encountered while trying to learn GraphQL at the GraphQL Finland conference. These included “an unhealthy reliance on the self-documenting aspects of GraphQL”, unexplained jargon and assumed knowledge. She felt most of these issues could have been easily prevented if more care and consideration had gone into the documentation.
I wanted to see what other technical writers were saying about GraphQL documentation but given the technology is so new, my questions on the Write the Docs Slack channel and other forums went unanswered. However, I did find a couple of good resources.
Andrew Johnston, who works on the GraphQL documentation at Shopify, spoke about the importance of providing on-boarding or “hand-holding” documentation for people who are new to GraphQL and not just assuming your end users will know how to formulate queries and mutations.
Technical writer Chris Ward wrote a blog post about whether GraphQL reduces the need for documentation and concluded that while it “offers a lot of positives on the documentation front”, documentarians should treat it just like any other API. He wrote:
“Documenting API endpoints explains how individual tools work, explaining how to use those tools together is a whole other area of documentation effort. This means there is still a need for documentation efforts in on-boarding, getting started guides, tutorials, and conceptual documentation.”
So my conclusion was that GraphQL can be self-documenting, but only if you put in the effort to give your fields and types self-descriptive names, add good descriptions, and provide adequate supporting documentation, especially for people who are new to GraphQL. Ultimately, I think technical writers have a pretty important role to play in documenting GraphQL and ensuring the experience works. To repeat Lee Byron’s advice: if your API doesn’t have any documentation, people aren’t going to be able to use it.
Further reading: Here is a link to my talk, my slides and my resources.
I recently interviewed the GraphQL co-creator Lee Byron for Nordic APIs, an international community of API enthusiasts. It was a great opportunity to find out how GraphQL came about, why it was open-sourced and where he sees it developing in the future. We also touched upon documentation and the importance of descriptions in GraphQL, something I’ll share in a future post.
Ten years ago Lee Byron was a graphics engineer designing interactive news graphics at the New York Times when a friend approached him to join a small social media startup based in San Francisco, California. The company was Facebook, which had only just surpassed MySpace as the world’s most visited social media website at the time, and four years later Byron would find himself managing the team working on the Facebook native iOS app when the first seeds were planted for what would later evolve into GraphQL.
“Right around then our mobile apps were built with HTML, they had native wrappers around them and they had suffered from real performance problems,” he said. “We made a bet on that technology thinking that Apple and Google would maintain really high quality web browsers and they didn’t so that didn’t really work out very well and we decided we needed to build a native app.”
“We started this little skunkworks project where two engineers from my team and two engineers who were relatively new to the company started building out what would become the native iOS app for News Feed.”
The team produced a high quality, working prototype but Byron spotted that News Feed stories were missing because they had used a three-year-old, unsupported platform partner API and he realised they would need to build a new one.
“That kind of sent things to crisis,” he said. “They thought they were almost done and it turned out they had a ton of stuff left to do so I started focussing in on those problems and I was like “Okay, I need to build a News Feed API somehow. Who are the people I need to talk to? How does that need to get done?” A big problem is that the News Feed is incredibly complicated and typical API technology probably wouldn’t do quite the right job so I started sketching out what a good API might look like. It definitely wasn’t what GraphQL is now but it was sort of like really beginning inklings in that direction.”
“Meanwhile another one of the GraphQL co-creators Nick Schrock had just spent the last couple of years working on a bunch of data infrastructure on our server side and had spent a little bit of time exposing some of that over APIs, not GraphQL but a different kind of API, and had an idea about how this could be made much, much more simple so I credit Nick Schrock with the first prototype that really resembled GraphQL. He called it SuperGraph.”
A screenshot of an early GraphQL prototype that Nick Schrock called SuperGraph.
A member of Byron’s team introduced him to Schrock and Dan Schafer, hailed as the best News Feed engineer at Facebook at the time, and the trio started work on an initial version of GraphQL. “The three of us got to work trying to figure out how to build a better News Feed API and we just got super far down the rabbit hole,” Byron said. “I think just a month or two of iterative improvements on what started as a prototype enfolding all of our ideas ended up being the first version of GraphQL.”
The launch of the native iOS app, helped by the introduction of GraphQL, was a success, and the excitement around GraphQL and its capabilities made other Facebook teams interested in using it. As a result, Byron and the early GraphQL team would go on to develop a whole ecosystem around GraphQL: how it integrated with the iOS and Android apps, how it integrated into the server, and GraphiQL, the in-browser IDE.
The final phase of the project was the decision to open-source GraphQL in 2015, something that was driven in part by the successful release of React, Facebook’s open-source JavaScript library, and also by the desire to open-source Relay, Facebook’s open-source JavaScript framework, which was inherently linked to GraphQL.
“We were excited about it,” Byron said. “I mean sharing things with the community is always good but it would be a lot of work and we weren’t totally sure people outside of Facebook would even care or find value in it. We thought maybe this was something that only solves a Facebook problem and wasn’t a generic solution but the Relay team had us excited so we followed that path and I’m super happy that we did. GraphQL now has a really big community outside of Facebook.”
The adoption of GraphQL took far less time than the team initially predicted. Speaking at the first ever GraphQL Summit in October 2016, Byron said he hoped GraphQL would be picked up by big companies within four years and reach ubiquity within five years. Byron laughed when he reflected on the accuracy of those predictions.
“I think I overestimated how long it would take for large companies to adopt it and underestimated ubiquity,” he said. “It’s probably because ubiquity is kind of vague but certainly I still talk to tons of people who work in the API space and at best they say “Oh GraphQL, I think I’ve heard of that before but I don’t really know what it is”. It’s certainly better this year than it was last year and better than the year before that.”
He added: “I remember going to an APIDays conference shortly after the first GraphQL Summit and literally there were zero talks on GraphQL. After the next one, there was a whole track talking about GraphQL. The one after that, GraphQL was featured in one of the keynotes and there wasn’t a specific track but GraphQL was scattered around. So it’s definitely picking up steam. I think there’s visible progress towards ubiquity, if we want to talk about ubiquity as knowledge. People are aware of the technology and what it does and why they should use it or not.”
One of the biggest surprises for Byron was seeing Github become one of the early adopters of the technology, particularly as he considers them an API leader.
“I was really surprised to see that within a year of GraphQL being open-sourced, Github decided that their public API would be GraphQL,” he said. “That was particularly significant because they kind of helped to popularise REST. You know REST has been around for a while but it wasn’t really the dominant, popular way to build APIs until Github decided to build their API and they used REST and they made a big deal about it and wrote a bunch of blog posts and everybody paid attention.”
He added: “I thought “Wow, this API is really well built, it must be because of REST” and it was to a large degree but it’s also because the people at Github are really smart and they built a really great API. It’s really exciting to me that I consider Github to be sort of an API leader and they jumped on that first and they’re not the only ones any more.”
Although GraphQL has been lauded as the natural successor to REST technology, Byron is modest about its capabilities and believes the two can co-exist.
“There are plenty of things that REST does well or does better than GraphQL, and vice versa,” he said. “I’m a big believer in the more tools that we have, the more choices that we have to solve problems. I’m certainly not one of those people who think I’ve invented the silver bullet here and everything should be GraphQL and there’s no room for anything else. I think that would be a little unwise. I think REST is an amazing technology so I would be really sad to see it disappear.”
“I do think that as GraphQL continues to expand in scope we’ll see a much healthier balance between the two. My expectation was that public APIs would remain REST because that was simpler and more familiar where internal APIs, so to build your company’s own product, would use GraphQL because while it brought more complexity, it also brought some more expressiveness and capability.”
As GraphQL continues to grow, one of the things Byron is excited to see is more public APIs adopting the technology, like companies have done with REST.
“I think the space of public APIs or partner APIs is particularly interesting because I think the vast majority of GraphQL adoption so far has been for a company’s own internal projects. For example, Walmart use GraphQL but they use it for the Walmart app and I think it would be really interesting if GraphQL starts to be used for these public and partner APIs so that we have companies that are working with each other and then it’s not just about the API design and the mental model for within that company but between companies.”
“I think that could be really interesting because it could help start to build one conceptual graph of all information. I don’t think GraphQL is going to be the technology that gets us there but that’s one of the big dreams of the internet is that we could have the one data internet but we need to start having some serious conversations along that path if we ever want to get there. I think GraphQL could be a really useful stepping stone on that path.”
Despite being happy with its growing popularity and some of the open-source development going on around it, Byron hopes to see more growth in GraphQL tools and integrations.
“It’s kind of sad that there’s the Apollo Client for iOS and Android and then that’s kind of it,” he said. “There needs to be many competing pieces there, and that’s true for any sort of technology that’s reached ubiquity: it has at least two if not closer to a dozen different options for how you would go about implementing it. If you wanted to build a web server, there’s like hundreds of ways to build a web server in dozens if not hundreds of languages, and that’s kind of where I want to get to with GraphQL as well.”
Byron left Facebook after a decade of service to become head of web engineering at fintech startup Robinhood earlier this year, citing the desire to work at a smaller company and its refreshing vision as some of his reasons for leaving.
“Robinhood’s roughly the same size today that Facebook was when I joined it and I really missed that and I realised that some of the best work that I did at Facebook was when they were a little smaller. Not that Facebook’s not a great place to work now, it’s just I really appreciated having the smaller work environment and was happy to have that back.”
“I’m also just kind of interested in finance in general so it’s a new space for me to learn which has been pretty fun and then they’ve got a bunch of really interesting technical challenges and people challenges. That’s my bread and butter. I really love technical problems and people problems, then the product problems I’m interested in but it’s new to me so there’s room to learn.”
On top of that, he is still the editor of the GraphQL spec and runs the working group meetings to ensure that GraphQL continues to improve while also maintaining stability.
“One of my goals for GraphQL is that it is stable because Amazon and Twitter and Pinterest and Airbnb and Facebook and Walmart and so many other companies have bet their future on GraphQL,” he said. “If GraphQL changes so rapidly that every year there’s like maintenance work to have to go in and improve all of those pieces of infrastructure, if I was an engineering director at those companies I’d feel shaken and I’d question the choice to use that technology. At the same time I want to make sure that there’s room for it to grow and improve and those improvements don’t have to come from me. I don’t think that I’m the smartest person in the room. I want to make sure that experiences of people from lots of different companies and environments can help influence that direction.”
He added: “GraphQL is still new. I’m really impressed with how much has been built by the open-source community and how much adoption has happened within the open-source community, especially the large companies. I mean, there’s a ton of large companies that are using GraphQL and that’s only three years out from open-sourcing, I think that’s pretty incredible but there’s always room to grow.”
If you Google ‘API trends’ or ‘the future of APIs’, one technology that crops up a lot is GraphQL. Developers rave about it being a more powerful and flexible alternative to REST. Not only that, but if you’re a technical writer like me, claims that it is self-documenting are particularly interesting. So what is GraphQL, and is it really as self-documenting as people say?
GraphQL is an open source data query and manipulation language that was developed internally by Facebook for their mobile applications before being released publicly in 2015. Since then it has grown in popularity with some people claiming it might replace REST APIs in the future.
Like a REST API, GraphQL operates over HTTP, with requests sent to retrieve or manipulate data. The key difference is that with REST you might need to send requests to multiple endpoints to retrieve a particular set of data, whereas with GraphQL there is only one endpoint, so a single request can retrieve an object and all of its related objects.
For example, with this GraphQL schema and server wrapping the SWAPI (Star Wars API), you can retrieve multiple pieces of data using just one endpoint. In this case, you can find out the species and home planet of Luke Skywalker simply by adding more fields to the query:
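A query along these lines does the job (the field names here are taken from the public swapi-graphql wrapper, so treat this as a sketch that may differ from other SWAPI schemas):

{
  person(personID: 1) {
    name
    species {
      name
    }
    homeworld {
      name
    }
  }
}

With the REST version of SWAPI, assembling the same answer would take separate requests to the people, species and planets endpoints.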
There seems to be plenty of love for GraphQL on Twitter with developers praising its speed, flexibility and introspective nature. The other key attribute that crops up a lot is “self-documenting” or “self-descriptive”:
One developer even went as far as to say that GraphQL doesn’t require documentation at all. However, after playing around with GraphQL and experimenting with some public GraphQL examples out there, I’m not so sure I agree.
The key thing about GraphQL from a documentation perspective is the importance of naming. Lee Byron, one of the developers behind GraphQL, spoke about this in his talk “Lessons from Four Years of GraphQL” at the GraphQL Summit in November 2016: “Naming things is super important in GraphQL APIs,” he said. “An important question to ask when designing APIs is ‘Would a new engineer understand this?’ […] And that means no code names or server-side lingo.”
He continued: “Imagine that most of the engineers who are going to be using your API might not find it so easy to go and find out how that field maps to some underlying system. It’s really important not just to have names that are good but to have names that are self-documenting. Naming things that adhere closely to what those things actually do is really important to make this experience actually work.”
Despite Byron’s warnings, fields with poor or no descriptions were a common issue in the different GraphQL APIs I looked at. In the example below, taken from the GraphiQL documentation explorer, I had no idea what the ‘section’ query field did or what data it sent back because it had no description:
Apart from the documentation explorer, another way to see what query and mutation fields are available is the auto-populating feature in GraphiQL. Hovering over the field or type reveals a description but this can be as useless as the description in the documentation explorer if all it says is ‘Self descriptive’, as this Twitter user found out:
I agree that GraphQL is self-descriptive, and if you’re familiar with the query language and the schema, its introspective nature means it is easy to refer to the description of a field or type to find out what it does. Another advantage of GraphQL is that the API documentation is easy to keep accurate and up to date because the descriptions are pulled directly from the code. In version 0.7 or above of GraphQL, this is as simple as adding a comment directly above the type or field in the code:
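For example, a schema sketch along these lines (the type and comment wording here are hypothetical) would surface each comment as a description in GraphiQL and the documentation explorer:

# A character in the Star Wars universe
type Person {
  # The name of this person
  name: String
  # The year this person was born, e.g. 19BBY
  birthYear: String
}

Because the descriptions live alongside the fields they describe, regenerating the schema automatically regenerates the documentation.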
However, GraphQL is only “self-documenting” if the developer or a technical writer has given the fields adequately intuitive, self-descriptive names and added decent descriptions for them in the schema code. If the names are obscure or the descriptions aren’t great, then your GraphQL API is about as useful as a chocolate teapot, and from what I’ve seen there are already a few chocolate teapots out there. So I guess the good news for technical writers is that we still have a role to play in helping to document GraphQL; it isn’t a magical solution that renders us unnecessary just yet!
Back in 2013, developer Robert Lord, then an 18-year-old intern at travel software company TripIt, was challenged by his boss to create an API documentation tool. It took him several weeks, but the result was a beautiful, responsive API documentation generator called Slate. Five years later, it has grown into a popular open-source tool used by a number of global organisations and companies including NASA, IBM and Coinbase.
Lord said the Slate project grew out of a set of requirements the TripIt engineering team had at the time. He said: “I was interning at TripIt and my boss pointed me towards some two-column documentation pages and said ‘We’d like a page like this for our new API.’ They also had the requirement that their technical writer could make changes, and I think they didn’t want to write raw HTML. I made a generator that ended up being pretty generic to any documentation, and convinced them to let me open source it.”
Slate is simple to use: you fork the Slate Github repository and create a clone. Next, you customise the code to meet your requirements, adding a custom logo, fonts and any additional CSS styling in the source folders, before adding your API endpoints and their descriptions in Markdown.
When you’re done, you start Slate and launch your API documentation site using Vagrant or create an image using Docker. The result is an attractive, responsive three-panelled API documentation site with code samples in multiple languages down one side and a smooth scrolling table of contents down the other. For more information on how to use Slate, follow the instructions in the Slate README.
Today more than 90 people have contributed to Slate on Github, it has been forked more than 13,000 times and has been given more than 23,000 stars. Some of the organisations and companies listed as users include NASA, IBM, Sony, Monzo, Skyscanner and Coinbase. There is a list of more than 90 companies that have used it on the Slate in the Wild sub-page of the repository.
Lord admits he still finds it “pretty surreal” that such large companies have adopted what he labels the “buggy project” he created as a teenager. “I really did not expect anybody else to see it or care about it,” he said. “Slate never really had a big rush of new users all at once, the growth in stars has been more or less linear over the years. No hockey sticks here. So there was never a single moment where suddenly a bunch of people were using it, it was a very slow process of discovering one company at a time.”
Interestingly, a year after working at TripIt, Lord interned at Stripe, one of the leading API-first companies, whose own API documentation inspired him when creating Slate. Stripe realised the value of their product hinged on people being able to read and digest their APIs. They invested a lot of time and effort in developing their own in-house API documentation tool and set the bar for the rest of the industry with the two-panelled design that has inspired so many other API tools.
Lord had plans to develop further API tools but decided to focus on other things. “Initially had some plans for similar tools,” he said. “But I think I realized I’m still early in my career, and would rather branch out and work on a variety of projects instead of focusing in on just one area.” Despite moving on to other projects and being fairly modest about the success of Slate, it’s an impressive piece of work for the young developer to put on his resumé. Indeed, one of the main reasons he asked TripIt to let him open-source the project was so he could show future employers his work. “I mostly convinced them to open source it just so I could point future employers to this chunk of code I wrote,” he said. One company clearly took notice: Lord starts work on Fuchsia at Google in a few weeks’ time.
Earlier this year I stumbled upon Write the Docs, a global community of people who care about documentation, and through its Slack channel, I have learned so much from the advice and knowledge shared by its thousands of members. The discovery has been a real godsend for someone like me who has worked independently or in small teams for most of my technical writing career.
This month I was lucky enough to go halfway across the world to the annual Write the Docs conference in Portland, Oregon to meet some of the community in person and listen to some brilliantly insightful and entertaining talks from fellow technical writers. In this post, I’ll share my highlights of the conference, my favourite bits of Portland and offer some advice on how to get there.
DISCLAIMER: I didn’t attend every single presentation but all of the talks I listened to were great. I’ve highlighted a few memorable ones below:
Kat King from Twilio, who had the unenviable task of giving the first talk of the conference, delivered an entertaining and engaging talk about how she and her team were able to quantify and improve their documentation with user feedback.
Beth Aitman from Improbable spoke about how to encourage other members of your development team to contribute to the documentation. This is something I think we all struggle with and can relate to. It’s well worth a watch:
Bob Watson gave a great talk about strategic API documentation planning, with some interesting tips about your target audience and the different types of API doc consumer you might come across. These included the ‘Copy and Pasters’ and the ‘Bigfoot’, the rare developer who actually studies the documentation and applies the code!
As well as the main talks, there were some excellent Lightning Talks, five minute presentations given during the lunch breaks, that contained some real gems such as Mo Nishiyama’s resilience tips when dealing with Imposter Syndrome and Kayce Basque’s talk on improving response rates from feedback widgets:
If the talks aren’t your thing, there was also an Unconference where you could discuss topics such as API documentation, documentation testing, individual tools; whatever you want really. I just sat and talked with two technical writers about a documentation tool for half an hour!
Apart from the people, one of the best things about Write the Docs Portland was the venue, a striking 100-year-old ballroom with a “floating” dance floor that has played host to the likes of Jimi Hendrix, the Grateful Dead, Buffalo Springfield and James Brown. Also, if stickers are your thing then you could collect a load of stickers provided by the conference sponsors, hiring companies and Write the Docs themselves (see below):
Apart from its scenic surroundings and the views of the Tualatin Mountains, Portland has a lot to offer in the city itself. Some of my personal highlights included:
Doughnuts – Portland has a reputation for great doughnuts. We skipped the enormous queues outside Voodoo Doughnuts and went to Blue Star Donuts instead. The PB & J with habanero pepper was pretty unusual!
Coffee – Portland has developed a thriving yet relaxed coffee culture with more than 30 coffee roasters across the city. It goes without saying that the coffee here is good! Check out Heart or Barista.
Restaurants – The food in Portland was amazing. One of my favourite meals was at the Life Aquatic-themed oyster bar Jacqueline in SE Portland. For sushi, check out Masu on SW 13th Ave, and for a relatively cheap but delicious lunch, go to Nong’s Khao Man Gai Thai food cart.
Washington Park – If you want to escape the sights and sounds, head to the 412-acre Washington Park which boasts a Japanese garden, a zoo, a rose garden, an amphitheatre and lots of trees!
Powell’s Books – No trip to Portland is complete without visiting the world’s largest independent bookstore. My only advice would be to pick up a map and have some idea of what you’re looking for, otherwise you’ll find yourself wandering the many colour-coded sections and aisles for hours.
If you live in the US or Canada, it might be slightly easier to convince your boss to fund your trip to Write the Docs. If, like me, you’re based in the UK, it’s slightly more difficult, but there are a number of options:
1. Use your training budget – Ask if you can use your training budget for the trip. It cost me my annual budget but it was well worth it and I was able to combine it with a trip to my company’s head office in San Francisco.
2. Become a speaker – I met a few writers whose company paid for them to be there because they were speakers. It’s great exposure for you, your documentation team and your company.
3. Recruitment – If your company needs to grow its documentation team, you might be able to justify the cost of attending: there is a job fair, and you have the opportunity to network and meet writers with a wide range of experience.
4. Exposure – Even if you don’t become a speaker, it’s a great way to raise your personal profile and that of your company. You never know when that visibility might come in handy in future.
5. Specific talks – Highlight a few specific talks from the schedule of the upcoming conference or a previous conference that may benefit you or your team. Write the Docs is a fantastic opportunity to learn from some of the best technical writers in the business!
If all else fails, see the sample email and other tips under the ‘Convince Your Manager‘ section of the Write the Docs website.