2019-07-13 20:25

Using Cloudinary Without the Cloudinary Ruby Gem

I always try to keep the number of gems I use in projects as small as possible. If you're not careful, you end up adding tens of thousands of lines of code that you don't know, that could harbor strange side effects or, worse, introduce security flaws.

This article by Thoughtbot puts it well:

  "Adding another gem is adding liability for code I did not write, and which I do not maintain."

The article makes other good cases to think twice before adding yet another gem to your project.

Recently I was facing the decision whether or not to add the Cloudinary gem to a project. Because I only needed to create signed URLs and compute upload signatures, I decided to write the necessary code myself.

Signed URLs

Signed URLs prevent tampering with URL parameters. For instance, suppose you display small photos on a thumbnail gallery, with a thumbnail URL looking like this:


You don't want to enable downloading full-size images simply by manipulating parameters:


Signed URLs prevent this kind of tampering. The code to create a signed URL looks like this:

require 'base64'
require 'digest'

def signed_url(public_id:, transformations:)
  to_sign = [transformations, "v1", public_id].join("/")

  secret = ENV.fetch('CLOUDINARY_API_SECRET')

  signature = 's--' + Base64.urlsafe_encode64(Digest::SHA1.digest(to_sign + secret))[0, 8] + '--'

  "https://res.cloudinary.com/#{ENV.fetch('CLOUDINARY_CLOUD_NAME')}/image/upload/" + [signature, to_sign].join("/")
end

I obtained the signature magic from this article.
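To sanity-check the method, here's a self-contained sketch with throwaway credentials (the secret and cloud name below are made up, not real Cloudinary values):

```ruby
require 'base64'
require 'digest'

# Made-up credentials for illustration only.
ENV['CLOUDINARY_API_SECRET'] = 'abcd1234'
ENV['CLOUDINARY_CLOUD_NAME'] = 'demo'

def signed_url(public_id:, transformations:)
  to_sign = [transformations, "v1", public_id].join("/")
  secret = ENV.fetch('CLOUDINARY_API_SECRET')
  # 's--' + first 8 chars of the URL-safe Base64 SHA-1 digest + '--'
  signature = 's--' + Base64.urlsafe_encode64(Digest::SHA1.digest(to_sign + secret))[0, 8] + '--'
  "https://res.cloudinary.com/#{ENV.fetch('CLOUDINARY_CLOUD_NAME')}/image/upload/" +
    [signature, to_sign].join("/")
end

url = signed_url(public_id: "sample", transformations: "w_150,h_100,c_fill")
puts url
```

The signature segment changes whenever the transformation string, public ID, or secret changes, which is what makes parameter tampering detectable.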

Compute Upload Signatures

The upload signature must be passed along when uploading a file to the Cloudinary API. It's a hash of the file name (called public ID), timestamp, folder name, and API secret.

The following comes straight from one of my Rails controllers:

def signature
  folder = ENV.fetch('CLOUDINARY_FOLDER')
  public_id = SecureRandom.urlsafe_base64(32)
  timestamp = Time.now.utc.to_i # Cloudinary expects UTC epoch
  payload_to_sign = "folder=#{folder}"
  payload_to_sign << "&public_id=#{public_id}"
  payload_to_sign << "&timestamp=#{timestamp}"
  signature = Digest::SHA1.hexdigest(payload_to_sign + ENV.fetch('CLOUDINARY_API_SECRET'))
  render json: {
    api_key: ENV.fetch('CLOUDINARY_API_KEY'),
    signature: signature,
    folder: folder,
    public_id: public_id,
    timestamp: timestamp
  }
end
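The same signing logic as a standalone sketch, with hypothetical values (useful as a starting point for a unit test):

```ruby
require 'digest'

# Hypothetical values for illustration only.
secret    = 'abcd1234'
folder    = 'uploads'
public_id = 'xyz'
timestamp = 1_560_000_000

# Concatenate the parameters, then append the API secret and digest the result.
payload   = "folder=#{folder}&public_id=#{public_id}&timestamp=#{timestamp}"
signature = Digest::SHA1.hexdigest(payload + secret)
puts signature
```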

By writing two small pieces of code (plus two tests) I've eliminated the need for an extra gem which would have added even more gems as dependencies (aws_cf_signer, domain_name, http-cookie, mime-types, mime-types-data, netrc, rest-client, unf, and unf_ext)!

It just feels cleaner to carry less baggage around.

§ Permalink

ξ Comments? Kudos? Find me on Twitter

2019-06-22 21:35

The DEMO Method

While working on a new kind of project management tool I realized that the best work I've done in the past was with teams who reveled in giving demos to each other.

This made me realize that you could actually codify this process.

I call it The DEMO Method.

It is not meant to replace Scrum, Lean, or Kanban. Nor should it replace OKRs, KPIs, or PPIs.

Instead it can be used in addition to your favorite methodology.

The sole purpose of the DEMO Method is to make every material (i.e. important or impactful) objective demonstrable. Hence the acronym DEMO which stands for Demonstrate Every Material Objective.

The DEMO Method encourages teams to communicate project status through demonstrations. Demos are an effective way to inform team members, leadership, and customers about progress. Oftentimes demos result in new ideas and improvements.

The DEMO Method's five rules are easy to apply:

  1. Teams hold weekly demo meetings. Preferably team members take turns giving demos. Group managers give monthly demos. Upper management gives demos several times per year.

  2. A demo can be in any shape or form (deck, screencast, live or pre-recorded). Weekly demos should not last longer than a TED Talk (18 minutes).

  3. At minimum two artifacts must be published per demo: 1) a screenshot, screencast, or short written synopsis, and 2) a list of contributors. An archive of demo content should be kept.

  4. Every demo must cover a material objective, such as a new feature, new hire, new tool, new procedure, et cetera.

  5. Demo attendees are encouraged to send a message to the demo presenter afterwards using a sentence that starts with "Have you considered …"

Let me know if this resonates with you.

§ Permalink

ξ Comments? Kudos? Find me on Twitter

2019-06-15 17:35

Using GraphQL With the WIP API

WIP (https://wip.chat/) is a community of makers started by Marc Köhlbrugge. Recently I dug into its GraphQL API to check out the capabilities. Below you'll find my notes.

After signing up you can go to this page, which displays your private API key.

Let's start by requesting your own account profile.

Two headers must be set:

  • Authorization: bearer YOUR-PRIVATE-API-KEY
  • Content-Type: application/json

The API endpoint is https://wip.chat/graphql.

Send the following query to obtain your account profile data:

  {
    viewer {
      id url username first_name last_name avatar_url
      best_streak completed_todos_count streaking
      products { name todos { id } }
      todos { id }
    }
  }

Notice the use of viewer. You'll see this often in GraphQL APIs. It refers to the authenticated user. It's not part of the GraphQL specification but rather a convention that originated at Facebook (just like GraphQL itself).

The WIP API uses viewer to return data associated to the authenticated and authorized API user.

The result looks like this:

  {
    "data": {
      "viewer": {
        "id": "1377",
        "url": "https://wip.chat/@realhackteck",
        "username": "realhackteck",
        "first_name": "Erik",
        "last_name": "van Eykelen",
        "avatar_url": "https://wip.imgix.net/store/user/1377/avatar/5c9fae4c0cd2343c8a86edbf822825b4.jpg?ixlib=rb-1.2.2&w=64&h=64&fit=crop&s=6c4e9eb0dd3b1679c34b056ad532d05b",
        "best_streak": 0,
        "completed_todos_count": 0,
        "products": [
          {
            "name": "Wip.Chat API test",
            "todos": []
          }
        ],
        "todos": [],
        "streaking": false
      }
    }
  }
Note: replacing viewer by user(username: "realhackteck") will return the same data.
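Queries like the one above can also be sent without a GUI client. Here's a plain Net::HTTP sketch; the API key is read from a hypothetical WIP_API_KEY environment variable, and the request is only sent when that key is set:

```ruby
require 'net/http'
require 'uri'
require 'json'

api_key = ENV['WIP_API_KEY'] # your private API key

uri = URI('https://wip.chat/graphql')
query = '{ viewer { id username } }'

request = Net::HTTP::Post.new(uri)
request['Authorization'] = "bearer #{api_key}"
request['Content-Type']  = 'application/json'
request.body = JSON.generate(query: query)

# Only hit the network when a key is configured.
if api_key
  response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
  puts response.body
end
```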

One of the cool things about GraphQL is its ability to provide introspection:


The screenshot shows the Insomnia client which I use a lot to run tests against APIs. The popup with the orange-colored Gs shows attributes which belong to the Todo object. Insomnia is able to display these attributes by querying the GraphQL schema. This article explains how introspection works.

The next step is to create something using the API. The following example shows how you can create a WIP todo including an attachment. Three steps are needed to accomplish this.

Step 1

Generate a pre-signed URL which you can use to upload an attachment:

mutation createPresignedUrl {
  createPresignedUrl(input: {filename: "foobar.jpg"}) {
    fields headers method url
  }
}

The result looks like this:

  {
    "data": {
      "createPresignedUrl": {
        "fields": "{\"key\":\"cache/c79b...7fe0.jpg\",\"Content-Type\":\"image/jpeg\",\"policy\":\"eyJl...fQ==\",\"x-amz-credential\":\"AKIA...HQKQ/20190615/us-east-1/s3/aws4_request\",\"x-amz-algorithm\":\"AWS4-HMAC-SHA256\",\"x-amz-date\":\"20190615T150217Z\",\"x-amz-signature\":\"497b...13df\"}",
        "headers": "{}",
        "method": "post",
        "url": "https://s3.amazonaws.com/assets.wip.chat"
      }
    }
  }

Step 2

Use the previous result to upload the attachment to AWS S3:

curl -F "key=cache/c79b...7fe0.jpg" \
     -F "Content-Type=image/jpeg" \
     -F "Policy=eyJl...fQ==" \
     -F "x-amz-credential=AKIA...HQKQ/20190615/us-east-1/s3/aws4_request" \
     -F "x-amz-algorithm=AWS4-HMAC-SHA256" \
     -F "x-amz-date=20190615T150217Z" \
     -F "x-amz-signature=497b...13df" \
     -F "file=@foobar.jpg" \
     https://s3.amazonaws.com/assets.wip.chat
Step 3

Send a mutation request to the GraphQL API:

mutation createTodo {
  createTodo(input: {body: "This todo has an attachment", attachments: [{key: "cache/cf6c...304c.jpg", filename: "foobar.jpg", size: 25681}]}) {
    id body
    attachments { aspect_ratio filename id mime_type size url }
  }
}
The result looks like this:

  {
    "data": {
      "createTodo": {
        "id": "117565",
        "body": "This todo has an attachment",
        "attachments": [
          {
            "aspect_ratio": 1.0104166666666667,
            "filename": "foobar.jpg",
            "id": "11373",
            "mime_type": "image/jpeg",
            "size": 25681,
            "url": "https://wip.imgix.net/cache/cf6c1e6f1770e8c1b6970c2b05a7304c.jpg?ixlib=rb-1.2.2&s=4e81b4f6003ea43b984a366ff0de520a"
          }
        ]
      }
    }
  }
Check out https://wip.chat/graphiql if you want to know what you can do with the WIP API. This page runs GraphiQL, a graphical in-browser IDE. See https://github.com/graphql/graphiql for more information.

§ Permalink

ξ Comments? Kudos? Find me on Twitter

2019-06-05 21:05

Using OAuth With the Makerlog API

Recently I was investigating the Makerlog API. The API offers three authentication methods: Basic Authentication, Bearer Authentication, and OAuth. I chose the latter because I wanted to write a basic OAuth authentication flow from scratch. I've documented my steps; perhaps they're useful to the reader.

Create An Application

With OAuth you first need to register your app with the service that holds the data you want to access. The OAuth 2 Simplified article explains this in more detail.

Open https://api.getmakerlog.com/oauth/applications/ to create your Makerlog app.

Once you've done this you are provided with a client ID and client secret.

Create An Authorization URL

I'm using the oauth2 Ruby gem to generate the authorization URL (but as you can see it's fairly easy to craft this URL yourself):

client = OAuth2::Client.new("YOUR-MAKERLOG-CLIENT-ID",
                            "YOUR-MAKERLOG-CLIENT-SECRET",
                            site: "https://api.getmakerlog.com")

client.auth_code.authorize_url(redirect_uri: "http://localhost:5000/oauth/getmakerlog",
                               scope: "tasks:read tasks:write",
                               state: "abc123")

=> "https://api.getmakerlog.com/oauth/authorize?client_id=YOUR-MAKERLOG-CLIENT-ID&redirect_uri=http%3A%2F%2Flocalhost%3A5000%2Foauth%2Fgetmakerlog&response_type=code&scope=tasks%3Aread+tasks%3Awrite&state=abc123"

Notice the space between tasks:read and tasks:write; a space is required when requesting more than one scope.

You must generate a value for state yourself, store it in a session, and compare it with the state value returned by the Makerlog authorization dialog. This prevents cross-site request forgery during the authorization flow.
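A minimal sketch of that state round-trip (the session hash stands in for your web framework's session store, and the callback params are simulated):

```ruby
require 'securerandom'

# Generate a random state value before building the authorization URL...
state = SecureRandom.hex(16)
session = {} # stand-in for your web framework's session store
session[:oauth_state] = state

# ...and compare it with the state handed back to the redirect URI.
callback_params = { 'state' => session[:oauth_state], 'code' => 'SIMULATED-CODE' }
raise 'State mismatch, possible CSRF attempt' unless callback_params['state'] == session[:oauth_state]
```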

Now paste the authorization URL in your browser. You'll be presented with Makerlog's authorization dialog, asking the user whether it's OK to grant task read and write rights.


After clicking "Authorize" the dialog redirects to your app. In my case I had set up a web server on localhost (port 5000) to capture the redirect.

The GET request looks like this (taken from the Rails log):

Started GET "/oauth/getmakerlog?code=ASFT...kAbM&state=abc123"
Completed 200 OK in 1ms (Views: 0.5ms | ActiveRecord: 0.0ms)

You now have an authorization code. Again, don't forget to compare the state value with the value you've remembered by storing it as a session variable.

Token Exchange

Now it's time to exchange the authorization code for an access token.

This is done by performing a (server-side) POST to the Makerlog API:

curl -X POST -d "grant_type=authorization_code&code=TOKEN-FROM-GET-REQUEST" -u "YOUR-MAKERLOG-CLIENT-ID:YOUR-MAKERLOG-CLIENT-SECRET" https://api.getmakerlog.com/oauth/token/

The result should look like this:

  {
    "access_token": "yBB9...vppN",
    "expires_in": 36000,
    "token_type": "Bearer",
    "scope": "tasks:read tasks:write",
    "refresh_token": "teu4...OnXf"
  }

Note: you can only exchange a code for an access token once!
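The same exchange can be scripted with Net::HTTP. In this sketch the client credentials and authorization code come from hypothetical environment variables, and the POST is only performed when all three are set:

```ruby
require 'net/http'
require 'uri'

client_id     = ENV['MAKERLOG_CLIENT_ID']     # hypothetical variable names
client_secret = ENV['MAKERLOG_CLIENT_SECRET']
auth_code     = ENV['MAKERLOG_AUTH_CODE']

uri = URI('https://api.getmakerlog.com/oauth/token/') # note the trailing slash
request = Net::HTTP::Post.new(uri)
request.basic_auth(client_id.to_s, client_secret.to_s)
request.set_form_data('grant_type' => 'authorization_code', 'code' => auth_code.to_s)

# Only perform the exchange when credentials are configured.
if client_id && client_secret && auth_code
  response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
  puts response.body
end
```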

You can now use the access token to perform all API requests allowed by the requested scope.

The access token will expire after 36000 seconds (10 hours). To receive a new set of tokens, you need to make a POST request using grant_type=refresh_token:

curl -X POST -d "grant_type=refresh_token&refresh_token=lcIg...3mtc" -u "YOUR-MAKERLOG-CLIENT-ID:YOUR-MAKERLOG-CLIENT-SECRET" https://api.getmakerlog.com/oauth/token/

Notice the trailing slash after .../oauth/token/. Without the / the POST fails silently.

The result should look like this:

  {
    "access_token": "zjeA...ezN0",
    "expires_in": 36000,
    "token_type": "Bearer",
    "scope": "tasks:read tasks:write",
    "refresh_token": "5Nm0...Zqw9"
  }

Note: you can only use a refresh token once!

Rinse and repeat every time your access_token expires.

Test Your Access Token

Since we've asked permission to read and write tasks, let's use the newly minted access token to obtain the latest 20 public tasks (note: the API uses pagination; the default page size is 20):

curl -X GET -L -H 'authorization: Bearer J44D...fRqP' https://api.getmakerlog.com/tasks

Note: the -L flag ("follow redirects") is necessary with the Makerlog API because it performs a redirect. You can see this by adding --verbose to your curl command.

The result should look like this:

  {
    "count": 93448,
    "next": "https://api.getmakerlog.com/tasks/?limit=20&offset=20",
    "previous": null,
    "results": [
      {
        "id": 150371,
        "event": null,
        "done": true,
        "in_progress": false,
        ...
      }
    ]
  }

Have fun exploring the Makerlog API!

§ Permalink

ξ Comments? Kudos? Find me on Twitter

2019-06-04 20:34

Capturing AWS API Gateway Requests as SQS Messages

By the end of this article you'll know how to create and store a payload like the one shown below in Amazon Simple Queue Service, based on an inbound AWS API Gateway request:

  {
    "bodyJson": {
      "foo": "bar",
      "woo": 3
    },
    "bodyRaw": "{foo:\"bar\",woo:3}",
    "requestId": "334029f8-***-ad7a6e7bbd88",
    "resourcePath": "/v1/enqueue",
    "apiId": "7cy***abd",
    "stage": "Staging",
    "resourceId": "4l***dd",
    "path": "/Staging/v1/enqueue",
    "protocol": "HTTP/1.1",
    "requestTimeEpoch": "1559294802021",
    "params": {
      "path": {},
      "querystring": {
        "foo": "bar"
      },
      "header": {
        "Accept": "*/*",
        "Content-Type": "application/json",
        "Host": "***.execute-api.eu-west-1.amazonaws.com",
        "User-Agent": "insomnia/6.5.3",
        "X-Amzn-Trace-Id": "Root=1-5cf0***719e",
        "X-Forwarded-For": "188.***.***.***",
        "X-Forwarded-Port": "443",
        "X-Forwarded-Proto": "https"
      }
    }
  }

You can use this data for logging, debugging, or perhaps for a basic analytics application.

While investigating this topic I found the following resources helpful:

Create SQS Queue

In this step we're creating an SQS queue.

  • Open https://console.aws.amazon.com/sqs/home

  • Click "Create New Queue".

  • Enter a name for your queue e.g. test-sqs-queue.

  • Select "Standard Queue" (the FIFO-type queue requires passing along additional IDs which is beyond the scope of this article).

  • Click "Quick-Create Queue".

  • Select the queue and copy/paste the URL and ARN values displayed in the bottom pane to a notepad because you'll need these values later on. The values look like this:

URL: https://sqs.eu-west-1.amazonaws.com/744520962556/test-sqs-queue

ARN: arn:aws:sqs:eu-west-1:744520962556:test-sqs-queue

Create IAM Policy

In this step we're creating an IAM policy.

  • Open https://console.aws.amazon.com/iam/home#/policies.

  • Click "Create policy", select the JSON tab, and paste the following policy. Replace the Resource value with the SQS ARN from the previous step:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sqs:SendMessage",
      "Resource": "REPLACE-WITH-YOUR-SQS-ARN"
    }
  ]
}

  • Click "Review policy".

  • Enter a name for your policy in the next screen e.g. test-sqs-policy.

  • Click "Create policy".

Create IAM Role

In this step we're creating an IAM role.

  • Open https://console.aws.amazon.com/iam/home#/roles.

  • Click "Create Role".

  • Select "AWS service" and choose "API Gateway".

  • Click "Next: Permissions".

  • Don't make any changes in the next screen, just click "Next: Tags".

  • Again don't make any changes in the tags screen, just click "Next: Review".

  • Enter a name for the new role e.g. test-sqs-role.

  • Click "Create role". You should see a notification saying something like "The role test-sqs-role has been created.".

  • Select the new role and click "Attach policies".

  • Use the search filter to find test-sqs-policy. Tick the checkbox next to the policy and click "Attach policy". You should see a notification saying something like "Policy test-sqs-policy has been attached for the test-sqs-role.".

  • Copy/paste the Role ARN to a notepad because you'll need it later on. The ARN looks like this: arn:aws:iam::744520962556:role/test-sqs-role.

Create API Gateway

In this step we're creating an API Gateway.

  • Open https://console.aws.amazon.com/apigateway/home#/apis.

  • Click "Create API"

  • Select "REST", then "New API", and choose a name e.g. "Test API Gateway".

  • Click "Create API".

  • Click "Actions" and choose "Create Resource".

  • Enter a resource name e.g. v1. Click "Create Resource".

  • Click /v1, select "Actions", choose "Create Resource", and create a resource called traces. The resources path now looks like /v1/traces.

  • Click traces, select "Actions", choose "Create Method", select POST, and click the round checkbox icon.

  • Now select "AWS Service" at the right side of the screen. Select the appropriate region and choose "Simple Queue Service (SQS)" from the "AWS Service" select.

  • Select "Action Type" / "Use path override".

  • Look up the URL created in step "Create SQS Queue" and enter only the path part of this URL in the "Path override (optional)" field. In our example the path override should be set to 744520962556/test-sqs-queue.

  • Look up the Role ARN you created in step "Create IAM Role" and paste the value in the "Execution role" field.

  • Click "Save".

  • In the next screen, click "Integration Request".

  • Add an HTTP Header. Set "Name" to Content-Type and "Mapped from" to 'application/x-www-form-urlencoded' (notice the single quotes, they are required for static values!).

  • Add a Mapping Template. Select "Never" for "Request body passthrough" and add a mapping template for Content-Type application/json.

  • Add the following template:

Action=SendMessage&MessageBody={
  "bodyJson": $input.json('$'),
  "bodyRaw": "$util.escapeJavaScript($input.body)",
  "requestId": "$context.requestId",
  "resourcePath": "$context.resourcePath",
  "apiId": "$context.apiId",
  "stage": "$context.stage",
  "resourceId": "$context.resourceId",
  "path": "$context.path",
  "protocol": "$context.protocol",
  "requestTimeEpoch": "$context.requestTimeEpoch",
  #set($allParams = $input.params())
  "params": {
    #foreach($type in $allParams.keySet())
    #set($params = $allParams.get($type))
    "$type": {
      #foreach($paramName in $params.keySet())
      "$paramName": "$util.escapeJavaScript($params.get($paramName))"
      #if($foreach.hasNext),#end
      #end
    }
    #if($foreach.hasNext),#end
    #end
  }
}

Deploy API

  • Select "Actions" at the top of the screen and choose "Deploy API". You may have to create a stage first e.g. staging.

  • In the next screen you'll see the new endpoint e.g. https://scn6oc2fnj.execute-api.eu-west-1.amazonaws.com/staging.

  • Use curl to send a test request to your new API end point. Important: make sure you use the complete path e.g. https://scn6oc2fnj.execute-api.eu-west-1.amazonaws.com/staging/v1/traces in our case.

  • Open https://console.aws.amazon.com/sqs/home and tick the checkbox next to the queue name.

  • Click "Queue Actions" and select "View/Delete Messages". Click "Start Polling for Messages". You should see a new message containing the request payload you just sent!
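For reference, the curl test from the steps above can also be scripted in Ruby. The endpoint below is the hypothetical example URL from the deploy step; the actual network call is commented out:

```ruby
require 'net/http'
require 'uri'
require 'json'

# Hypothetical invoke URL from the deploy step; replace with your own.
uri = URI('https://scn6oc2fnj.execute-api.eu-west-1.amazonaws.com/staging/v1/traces')

request = Net::HTTP::Post.new(uri)
request['Content-Type'] = 'application/json'
request.body = JSON.generate(foo: 'bar', woo: 3)

# Uncomment to actually send the request:
# response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
# puts response.code
```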

§ Permalink

ξ Comments? Kudos? Find me on Twitter

2019-05-29 14:19

Tired of Scrum? A Simple Project Management Methodology for Small Teams

People often ask me about my favorite project management methodology. My answer is always "it depends" because many factors are at play: the type of service you're delivering, the product you're building, the geographical distribution of your team, and the maturity of the organization, to name a few.

I've applied Scrum, Agile, XP, and Kanban. I have defined KPIs and OKRs for teams. But I like nothing better than a small team who thrives by using a dead simple methodology with just four rules:

  1. Priority 1, 2, or 3 is assigned to every issue in the company, from development to sales. By (development) issue I mean a feature request, bug, or enhancement. Sales, marketing, and management (i.e. the founders) have their own issue types.
  2. Every month all outstanding issues are reviewed. P2s are promoted to P1 or demoted to P3. Items which have been a P3 for a long time are deleted.
  3. Everyone works exclusively on P1s unless the remaining time on a working day only leaves room for a small P2 or P3.
  4. Every Friday finished P1s are demonstrated to the whole team by the (main) author of every P1.

By reviewing priorities every month, with input from the whole team, you ensure that only the most critical items are bumped to P1.

Priorities should be based primarily on input from sales, support, and development.

Obviously the backlog should contain a manageable amount of work: not too few items, but also not too many.

Once you've done four or five monthly priority meetings you'll get a sense of what your team is able to push out of the door every month.

I am not claiming this methodology is better than anything else that's out there. But it has worked for me, teams genuinely like it, and they get a lot of work done. That's good enough for me.

§ Permalink

ξ Comments? Kudos? Find me on Twitter

2019-05-25 15:30

Title Casing Is Harder Than I Thought

You've probably noticed that many article titles use stylistic formatting called "title casing". Recently I wanted to add a titlecase method to the Msgtrail static blog engine. Quickly I realized that title casing is harder than I thought!

Let's begin with a few examples to demonstrate some edge cases:

  • In: Small word at end is nothing to be afraid of
  • Out: Small Word at End Is Nothing to Be Afraid Of

Notice that:

  • Small words like at and to are not title-cased;
  • Is is title-cased;
  • Of is title-cased, but only because it's the last word.

Or take this example:

  • In: Never touch paths like /var/run before/after /boot
  • Out: Never Touch Paths Like /var/run Before/After /boot

Notice that:

  • /var/run and /boot remain untouched;
  • before/after becomes Before/After.

See here for additional test cases.
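These rules can be approximated with a deliberately naive sketch. It handles the first example above, but it still fails edge cases like Before/After, which is exactly why the real rule set is longer:

```ruby
# Small words that stay lowercase unless first or last; this list is an approximation.
SMALL_WORDS = %w[a an and at but by en for if in of on or the to v via vs].freeze

def naive_titlecase(str)
  words = str.split(' ')
  words.each_with_index.map { |word, i|
    if word.start_with?('/')
      word # leave paths like /var/run untouched
    elsif i.positive? && i < words.size - 1 && SMALL_WORDS.include?(word.downcase)
      word.downcase
    else
      word[0].upcase + word[1..-1].to_s
    end
  }.join(' ')
end

puts naive_titlecase("Small word at end is nothing to be afraid of")
```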

While researching the topic I ran into an article by John Gruber about title casing. His article points to a Perl script which he uses to title-case the articles of his (magnificent) blog. The article also points to implementations in other languages, including Ruby.

I looked at several implementations in order to understand the rule set:

  • Gruber's script is clever, but hard to read for a non-Perl coder like me. It's basically a set of regular expressions.
  • Aristotle Pagaltzis refactored it to make it more readable.
  • Sam Souder created a Ruby version in the form of a gem called 'titlecase'. Sam's version is succinct (about 30 lines of code) but fails 13 of Gruber's test cases (which is fine, it is not the gem's intention to support all edge cases).
  • Grant Hollingworth created another version in Ruby in the form of a gem called "titleize". It has about 50 lines of code and fails 3 of Gruber's test cases (again, this is fine).

I ended up writing my own implementation which has about 40 lines of code and passes all tests.

My implementation is a fraction faster than "titlecase" and 3x faster than "titleize". Benchmarking a run on 10,000 English sentences yields:

  • My implementation: 0.8611196667 seconds (average over 3 runs).
  • Titlecase gem: 0.9050176667 seconds (average over 3 runs).
  • Titleize gem: 2.7170636667 seconds (average over 3 runs).

It was fun to write this code because it was a challenge to make it fast, readable, and able to pass all test cases. I have released a Ruby gem based on this implementation: https://github.com/evaneykelen/nicetitle.

§ Permalink

ξ Comments? Kudos? Find me on Twitter