Even Stensberg

Thoughts of mine

Reasoning about Computer Science with other genres

Computer Science is so vast, yet limited. We’ve discovered so much, but how far away are we from other disciplines? Not far. By looking at biology, chemistry, math (of course) and languages, we can abstract more or less anything into these concepts and interchange these topics with one another.

An ecosystem

In biology, we use ecosystems to denote a set of organisms that live together and use each other’s resources in a cycle of life and death. Birds eat fish, fish eat algae, and algae consume sunlight and carbon dioxide. How can we relate this to Computer Science? It is no hard task. Think of the whole machinery of programming as an ecosystem, with CPUs, ALUs and registers. We can then relate fish to instruction code, and birds to programming languages. If we can build large programs based on how we fly, and learn to fly really well, we can travel large distances, meaning we can construct a lot of programming languages.

Programming languages are the best way to reason about the computer, because we describe what the computer should do rather than spelling out every detail of how. A fish and a bird are the same thing, namely animals, and animals behave similarly. If we can write machine code, we can write a programming language. Assembly and JavaScript could generate the same code, but a fish uses less time to fetch food than a bird that needs to evaluate everything in its eyesight first.

Programming languages and machine code tell us how code is structured and how it should be executed, just as animal behavior does. Let’s go in depth on a use case of how the two relate.

Every time a human needs to move, the brain sends signals to the body. Similarly, every time we need to retrieve something from a server, we perform a SYN/ACK handshake in the TCP protocol. The SYN in this example is the brain signal requesting access to the hand, and the ACK is the movement of the hand. How can we use this? Imagine you need to construct a data structure. We can say that we are reading a book; fittingly, the first pages are an index of how the book is constructed. For this exercise, let’s make a doubly linked list. First, note that the book is connected both page by page and as a whole.

[previous <- data -> next]  =  [previous page <- page -> next page]

Doubly Linked List

The previous page is the linked list’s previous pointer, the next page is its next pointer, and the page itself is the data the node holds. Easy, right?
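To make this concrete, here is a minimal sketch of a doubly linked list node in JavaScript (the class shape and page contents are just illustrative):

```js
// Each node is a "page": it holds data plus pointers to the
// previous and next pages in the book.
class PageNode {
  constructor(data) {
    this.data = data; // the page itself
    this.prev = null; // the previous page
    this.next = null; // the next page
  }
}

// Bind two pages together, the way a book's spine does.
const page1 = new PageNode("Page 1: Introduction");
const page2 = new PageNode("Page 2: The Plot Thickens");
page1.next = page2;
page2.prev = page1;

console.log(page1.next.data); // "Page 2: The Plot Thickens"
console.log(page2.prev.data); // "Page 1: Introduction"
```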

So far, we’ve considered ecosystems to be interdependent, but they are also encapsulated. One could say that a tree and water are dependent on each other, but the tree also depends on itself. It needs pollen to flourish, and it needs to be planted in safe ground to become a healthy tree. We are encapsulating data in the tree. The leaves we get are not only the information the tree holds, but also a reference to the tree, meaning they inherit the tree’s DNA. Trees also have this kind of inheritance at a higher level: a class of trees is a set of trees that act on behalf of each other. Trees communicate with each other. They signal whenever danger is nearby, and they can behave as a flock, disrupting the air.

If I can say so myself, this is pretty cool!

But what happens if we need to make our mind palace relate to another field, or to all fields? Let me tell you a story. When we want to transfer data in Computer Science, we gather chunks of data and send their information to a destination through a buffer. In chemistry or biology, a buffer resists changes in its pH value when another solution is added.

When we want to transfer information in the body, we use neurons, or even blood. Blood transfers data too: nutrients! Food is essential for us to function, and we carry medicine and vitamins in the blood.

A bird eating plastic

In the ecosystem, we can think of a buffer as the parts of the food chain transferring toxic substances through the energy pyramid. A bird can pass its digested plastic to a fish, we will catch that fish, and then we will consume that plastic. All the way, a buffer made that possible: us!

We use all of our vocabulary to denote Computer Science expressions, and perhaps my favorite one is debugging. Whenever there is a bug in our code, we need to debug it. The same goes for humans: whenever we need to figure out what is wrong with us, we go see a doctor. The doctor is the programmer, and the bug is us. We search for answers, then we apply those answers to a solution or a fix. These fixes often need to be maintained; just as you need yearly checkups at a dentist, you need to check your program every now and then. The only difference is the time span.

To conclude, you can clearly draw parallels between Software Engineering and other disciplines. From medicine to biology, we can map the same meaning onto different problem sets.

webpack UI

Navigating the webpack landscape is often hard. You don’t know what to put in your configuration or how it should be done. With webpack UI, I hope to make it easier for people to create or manage their webpack configurations.

Brief Introduction

Webpack UI is a scaffolding website built to make it easier for people to create or manage their configurations. With the UI, you are able to modify existing configurations, create new ones, or choose templates that will generate best-practice configurations for you. This project needs sponsoring though. If you want to support me, consider donating here


A Guide to Good Pull Request Reviews

When working with source code, whether it is Open Source, Closed Source or a collaborative effort, one thing is inevitable: PRs.

Pull Requests are the final stage of getting your code ready for a production build, staging, or any sort of project where other people will rely on the code you write. It is important that your code lives up to the expectations of the codebase. In this post, I will try to summarize some things I have learnt through Open Source and from working in teams at companies.

A review in progress

Submitting Unfinished Code, Drafting and PR Discussions

I really like how the Lighthouse team at Google handles their Open Source projects, and it is no lie that I have been inspired by how they govern them. They are good at what they do and they have experience. For instance, instead of dragging out a discussion in the issue tracker, a collaborator sometimes submits a draft PR with a rough sketch of how things would work, and then the discussion happens at the code-review level instead.

Some might argue that this is bad, although in my opinion it is easier to understand the practical case if you have some code to refer to. Submitting unfinished code is not a bad thing; not following up on your work is. What that means is that if you submit a draft Pull Request, it is easier for people to follow your thought process and your intention as to why a potential feature or bugfix should be implemented.

Code is a common language for all developers, and people can draw their own conclusions from it. By the way, GitHub now has a nice drafting feature that I really like, which makes it easier for maintainers and developers in general to have these kinds of discussions. The only downside I can come up with to submitting a draft Pull Request (as a way to discuss a potential fix or implementation) is that time is lost if it is not accepted. Now, why would you care, and why would you need to do this?

What Your Team Wants x What Your Team Needs

Working in teams involves some sort of splitting of roles. Traditionally you have some sort of leader, whether it is a tech lead, product manager or senior developer; it is important to have someone to guide and provide balance to a team of engineers (and/or designers). I will use these terms interchangeably. This individual should be able to act as a bridge between the product needs and the technical needs of a project. That is sometimes hard to balance.

To do this, a Product Manager, for instance, should be able to understand what the product needs by talking to the right people, and be a good messenger to the developers (and/or engineers, designers) doing the implementing. This is like Yin and Yang: if the Product Manager fails on one side, the other will suffer. You can be good at figuring out what your product needs but bad at conveying that to the developers, which has its own downsides. It is important to balance the equation in such a way that the product sees daylight and is delivered by the engineers and designers. This is usually why large-scale companies split a Product Manager role between a more product-sided role and a technical one (and maybe you should too).

Let me go through some points of view from different roles under a PM, so we can better understand why communicating in Pull Request Reviews is important.


As a designer, I need to know a lot about a product in order to design it. It is important for me to understand what the product needs, what a user is struggling with and what features we need. This way, I can more precisely put together a design and a solution to a problem space. I need user feedback in order to know where the design I have sketched out falls short, and how to fix it.


As an engineer, I need to know the requirements of the product, a use case or an elaborate description of the problem, so that I know how to implement it. I do not need designs for this initially, but I will need them to convey an understanding of the technical requirements if there are no technical documents. I need to understand why we need this feature or fix in order to understand how to implement it.

What it means

As briefly shown, these two roles depend on a Product Manager delivering information to them, one needing user feedback, the other a technical requirement or document. An experienced leader is therefore needed for teams to stay on a positive trajectory. Both of the tasks mentioned need some sort of recipe, much like composing a dinner from different components. In order to cook the entire dish (your product), you need to do well both in making the main component (i.e. the meat) and the sides (the salad).

This is where Pull Requests come into play. As designers, product managers and engineers (designers should have GitHub too, we can fight about it online) all need to communicate, the last step of the train ride is usually the written code. It is easier to iterate on code than it is to plan everything first and then write. You will remove a chain of dependencies, because most teams are agile in some way.

If you have a good CI setup, you have a test server your designer can give feedback on, which makes things easier.

Externalizing tools

As designers nowadays use Figma, developers use GitHub and Product Managers use Jira, you will lose some speed versus quality and deliverables. I would say that you need to be able to access the content quickly, either by having a dedicated README pointing to critical information at your main documentation source, or by having it pasted in red, capitalized letters at your doorstep for quick accessibility. Clarity is the important part. If you live in three distinct worlds, you are more likely to lose effectiveness because communication is spread across tools than because people speak two different languages. Write good issues, elaborate precisely and listen to feedback.

For Agile teams in particular, I cannot say this enough: "write good issue tickets".toUpperCase(); it will drastically reduce your time to implement, because you will not need to browse through a lot of information or step into the Product Manager role to implement something.

In code reviews, technical talk wins: compare “The button does not show when hovering” with “I cannot press this button, is my internet not working?”. The latter is vague and does not convey action or constructiveness towards a fix.

Design talk (although it is good to discuss in Pull Requests) is what issues are for. Higher-level (non-technical) discussion belongs in issues; a mix of abstractions, the correct way to implement something and technical flavor is intended for Pull Requests.

If you do not have an agile team, you will usually have a QA tester who will provide reproduction steps, screenshots and version information in Jira or on the Pull Request itself (a good behaviour to mimic). Note here that the feedback comes in the form of what works versus what does not, not how it looks.

A final point is that discussing and fixing constraints in PRs makes the steps to production less cumbersome and the likelihood of finishing a feature higher, as the code is continuously validated and pushed towards either staging or production. Speed and value.

Techniques For Improving Reviews

Although review quality might differ with experience, there are a few tricks that make the process easier.

For starters, if you notice that code conventions differ, it might be a good idea to point that out. If you notice a user-facing bug, point that out. Anything that might affect the issue will help triaging. Be constructive; there is no need to be rude. HOWEVER, be professional, so that the review is taken seriously and the author will follow up on actionable items. Constructiveness is important because it is easy to come off as arrogant as a reviewer, and when the tables turn, you are the one who will suffer from that (karma or something). Indicating progress is just as important.

Some Pull Requests are big; take those reviews in stages. Do not review all files at once; pick a few and, if there are issues with the code reviewed, comment on those. This will make the process easier. Look at PRs like a flowing set of waves that work together to hit the shoreline. You are not alone: split the review across the organization and the team, because reviews are not a one-man show.

These are important, but how do you know how to actually review? I have some methods to determine which type of review to do.

Simple Reviews and Small Source

If the code is small and you know that the user experience is not affected, looking at the code is enough. The quality of the review usually depends on the experience of the reviewer, but keep in mind to comment on inconsistencies and ask for an explanation where needed.

Intermediate Reviews and Architectural Changes

Once a Pull Request grows in size, the importance of a good review increases proportionally. Have a look at the structure it is written in. Is it following good practice? Is it consistent with the rest of the codebase? If not, indicate that you want those aligned. For features, testing is important, but as a developer this is above your pay grade: QA, PM, Product Owners or similar will test the usability of the software. If not, split this responsibility with someone else. The author should not review this themselves, because whoever makes a change is usually blind to the bugs a merge might introduce. However, the author might know the pitfalls, so ask for those edge cases. If there are edge cases that might not be covered, optionally split those into issues so that you are aware of the potential problem (or fix it in review).

These intermediate reviews differ. For instance, if the source code is good but the architectural structure is bad, you might need to put up a figure or do a pair review to figure out a good solution to the complexity it introduces. If the architecture is good but the code is bad, you will need to test the request locally and verify that the code compiles and behaves as expected.

Intermediate Reviews and Large Source Changes

Big source changes are hard to review, because they introduce more concepts for a developer to grasp. Even though knowing the entire source is not required, it is important to understand what the Pull Request would do. For those kinds of reviews, a one-on-one where you ask the author to walk through the source might help, or multiple reviews with multiple developers; that is up to you to decide.

Advanced Reviews with Architectural Changes and Large Source Changes

I have encountered these kinds of Pull Requests, and they normally come from a refactor or a big feature being introduced. If it is an MVP and your team is agile, merging an unfinished PR to staging might be fine, but if it is meant for production, make sure to be accurate in your reviews. This means that you need to figure out whether the abstractions are good, and whether the source code is consistent and complies with your standards. Pick one of the two things to focus on first: architecture or source. I usually begin with the abstractions and architecture, because it is easier to fix code standards than abstractions once the Proof of Concept or Intent to Implement (permission to implement the feature) is approved, thereby spending time better.

Verification and Roles within Teams

As mentioned, it is important to split tasks, because a Pull Request is a flowing process. Multiple people are involved in shaping a product in the long term. Sometimes the source needs a massage, sometimes content needs verification from marketing, and sometimes testers need to figure out whether the code fixes the issue. All these tasks are why people have fancy titles. These tasks should be split across a multitude of people, and if you do not have those roles, figure out something with your co-engineers.

If you have a Product Manager, this is your point of contact. The Product Manager is a Steve Jobs for software, and a Product Manager who does not make sure the product stays on track is not doing their job. Developers should code, managers should manage; that is how it is. Developers have blind spots that managers pick up, so it is important to convey those facts during testing, such that the feedback loop is tight.


In this post I have explained what I consider a good Pull Request process to look like for teams and people working together. Make sure that your code is consistent, split responsibility, pick your type of review and iterate towards a fix.

You can hit me up on Twitter for questions or feedback on this post.

Thanks for reading, hope it was worth your time!

I attached a gif because life

A panda in progress

Are manual Performance Optimizations sufficient enough?

For the last year, I’ve focused on tooling that lets front-end developers keep their focus on building rich user experiences, without being bounded by a lack of toolchain knowledge. I began developing mink, a string-template-based scaffolding tool for universal React applications with webpack. The lack of tooling to give beginners an advantage over a sea of information and anti-patterns inspired me to bring up the discussion of an extensive, yet simple CLI tool for both scaffolding and running a rather complicated bundler, webpack.

The problem of initial user confusion and hardship isn’t unique to webpack. In my experience, most of the choices a developer makes during an average work day fall into two categories: integration and re-iteration.

These two categories are pretty simple to understand, yet they are very hard to keep separate. As a “Junior Developer”, I’ve often found myself re-writing code based on feedback, which is basically the definition of re-iteration. In fact, I spent 2–3 months on webpack-cli figuring out how to write a scaffolding tool that is efficient and extensible.

Performance is just that.

This is a bold statement, but nobody cares about performance in the integration phase, especially not beginners. You’ll have to iterate a couple of times, maybe even finish your application, before you start being selective about how the user interacts with your app with respect to performance. This has implications, because nobody finishes their app unless there are a lot of people working on it.

Google has done an incredible job promoting good practices for the web, which makes it easier to approach performance in a constructive way. Lighthouse is, in my eyes, one of the most valuable repositories today, as it gives developers insight into how well their application is doing, along with suggestions for fixing performance issues and related problems. As a toolsmith, I wish we could go one step further: not only suggesting those fixes, but implementing them, so that a developer doesn’t have to spend too much time on re-iteration.

It makes more sense to spend time implementing core functionality and let toolchains fix human errors.

Data-based tooling

In tooling, I’m excited about automation that could ease tasks not only for beginners, but also for established developers. We’ve got CIs, code formatters, CLI tools for scaffolding and performance hints, but no tool yet for predictive data loading or automatic fixes to source code based on data and performance hints.

Data-based tools are important, because the data already gives us an indication of how well your application is doing: how big your source code is, when you decide to load the code, your first meaningful paint, or when your application becomes interactive.

There are several tools that already give us these metrics, such as Lighthouse audits, monitoring network requests and service worker activity in Chrome, and the output of module bundlers on compilation.

How it is realistically possible

After getting data from the Chrome DevTools Protocol, module bundlers, react-tracking or Lighthouse, one has a lot of information to use with machine learning. The optimization itself could potentially be done at runtime with worker threads or clusters, along with editing the source code with ASTs.

For automatically improving applications based on Lighthouse audits, one doesn’t necessarily need machine learning; some changes could be done just with the use of ASTs.
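As a rough illustration of the idea, here is a hedged sketch using Babel’s AST tooling; the source snippet, the size map and the threshold are all made up for the example:

```js
// Sketch: flag static imports of heavy modules as candidates for
// lazy loading, based on size data a bundler or audit could provide.
const parser = require("@babel/parser");
const traverse = require("@babel/traverse").default;

const source = `
import moment from "moment";
import { tiny } from "./util";
`;

// Assumed input: module sizes in bytes, e.g. from a bundler's stats.
const moduleSizes = { moment: 290000, "./util": 1200 };

const ast = parser.parse(source, { sourceType: "module" });

traverse(ast, {
  ImportDeclaration(path) {
    const name = path.node.source.value;
    if ((moduleSizes[name] || 0) > 100000) {
      // A real tool could rewrite this node into a dynamic import()
      // instead of just reporting it.
      console.log(`Consider lazy-loading "${name}" via dynamic import()`);
    }
  },
});
```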

0CJS is not what you might think it is

Our goal is to make libraries easier to use, but 0CJS is not about running one magic command to fix every issue a potential user of your library has; it is much more than that.

There are many aspects to succeeding in creating a well-written CLI tool. I’m not going to go through all of them, just three: baselines, abstractions and zero configuration.


Baselines

When describing a baseline, what I mean is that your tool should have a good foundation to succeed on. This is the goal of your tool, how you approach the problem and how you choose to implement it. For instance, what makes sense to hide from the user, and what is useful to show? This could be something as primitive as hiding status text, or outputting information about what your tool is doing while the user waits for compilation to finish.

The more time you spend thinking about how the tool behaves, the less time you will spend fixing bugs and refactoring your code, because you are thinking about the user and what they might struggle with.

A technique to help you with this, not bound to Agile Development, Test Driven Development or any other fancy term, is design documents. With webpack-cli, we decided to use Test Driven Development, but we also decided to write design documents to save us work in the future. Because we knew beforehand what the architecture would look like, implementation and abstractions became easier.

If you think now about what your tool might look like in a couple of months, you will save yourself months of answering issues with your project later. Have a clear goal in mind.

As an example of how architecture can play a huge role in the long term, I’d like to point to Polymer CLI. Polymer is entirely class-based and runs commands with a runner. The source code is slim and intuitive, and it is friendly for new users with its commands: init, analyze and serve.



Abstractions

Abstractions are leveraged in a variety of ways; you’re probably using one right now. They are a key design pattern in CLI tools, and they are superb because they hide complexity from the user. You can abstract modules, the tool itself or any other type of system, just as Google Chrome doesn’t require you to know how V8 works.

As a library author, one essential thing all users want to do is customize. The solution you’re providing isn’t always optimal for their use case, and they want to adapt your tool to fit into their integrations. This is where ecosystems come in handy. Ecosystems could be described as another level of abstraction, except they are built on top of the tool itself. One example of this is yeoman.

As a library author, you can create scaffolding instances of your tool where yeoman abstracts the details for you. Yeoman takes care of prompting users with questions and preparing the folder structure. The only thing your generator should be concerned with is using those inputs with your framework or library.

Ecosystems serve as a way to abstract complexity, either by asking human-friendly questions, or by abstracting your core logic into a more convenient API. That’s a good thing. Babel has succeeded in this by allowing people to write plugins for their tool. NPM has succeeded in this by allowing everyone to use a library with a single install command.

There are nuances in how ecosystems function, but they have one thing in common: they hide complexity.


Zero Configuration

When you as a library author have the responsibility of promoting good practice, you should be aware of the defaults you are setting. Imagine millions of people using your tool. Take React or Vue as two examples: if React doesn’t set good defaults for users, no user will. As the library author, follow advice from advocates, Microsoft or Google.

It’s important to underline the importance of promoting good practices and not anti-patterns, because if you as an author don’t know how to set good defaults, why should the user?

One tip I learnt from being involved with webpack-cli is that you don’t necessarily have to be in line with what everyone says, as long as you’re setting a default. For instance, if you have a configuration-based project that needs an entry point for a user’s application (webpack), why not set the entry point to src/index.js by default? Users will follow that pattern eventually. By setting conventions and listening to advocates, you’re already well off.
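A minimal sketch of what such a default could look like in a CLI (the function name and fallback paths are hypothetical, not webpack’s actual implementation):

```js
// Sketch: fall back to a conventional entry point when the user
// hasn't configured one. Convention over configuration.
const fs = require("fs");
const path = require("path");

function resolveEntry(userConfig = {}) {
  if (userConfig.entry) return userConfig.entry;

  // Hypothetical convention: default to src/index.js when present.
  const conventional = path.resolve("src", "index.js");
  return fs.existsSync(conventional)
    ? conventional
    : path.resolve("index.js");
}

module.exports = { resolveEntry };
```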

Google Developers is a good resource when educating new users

Another point worth mentioning is scaling complexity. The initial user might need a lot of help getting started, so asking users to fill out long lines of code to get the application booted up and ready might not be ideal. Instead, scale up complexity: the longer users use your tool, the more creative you can be with options and with letting users set those. Performance isn’t always one-sided; users have different ways of rendering their application. Rather than fine-tuning that from the start, let users gradually do so.

As you might have gathered, 0CJS isn’t only about letting the user run your app straight away; it’s about educating them. 0CJS shouldn’t be an excuse not to know the tool, but rather a way to use the tool without knowing how it works early in the project.

Educating the developer is important, because if we do so, they will know how to fix potential memory leaks, bugs or other issues they might have. For instance, imagine a user using the webpack offline plugin without knowing what it is. Without knowing service workers, users can still use them. This is all about hiding complexity from the user, so that he or she doesn’t have to implement a whole service worker.

However, we do want to educate people about service workers, why they are important and how they work. If a user then has an issue, they don’t have to turn to Stack Overflow every time; they might have the background knowledge of how a service worker functions to figure out what the issue is.

Finally, I want to wrap this up by mentioning Create React App and what makes it such a great tool. Create React App sets defaults, has user guides longer than an academic paper and allows people to dive into React without knowing any build step. That is where we want to be: people shouldn’t have to know the tools before actually using the library. In the end the user might need to know about the tool, but it isn’t important initially.

This is what scaling complexity looks like in practice. By allowing people to eject, CRA lets people work with complexity, but by default the user doesn’t need to know anything about the underlying logic. CRA also educates the user, which makes it easier to onboard new people into the front-end realm.

S19 Internship at Cognite

This summer I was an intern at Cognite, a firm that is trying to digitalize the industrial world. It was great and I had a great time! I will try to summarize my two months in this post.

I was employed as a Software Engineer Intern and worked on a lot of exciting stuff, mostly front-end but also a lot with cloud functions, backend and some front-end infrastructure. It was great, and I really liked the ability to switch between tasks, as long as the context switching did not get too big. As interns, we had an onboarding period where we learnt more about what the company was like, how to navigate around and other practical stuff.

They also hosted social events, all sorts of them! Everything from soccer on Fridays to kayaking together. This kind of culture is something I had not experienced before, and at the start I was really shocked that this was how working in tech could be. They were entirely right to provide the most supportive and kind environment for me to explore and be in. If someone from Cognite reads this: that was really important to me personally.

As a SWE intern, I was handed the same tasks given to regular engineers, and I was super happy to have the challenge of a regular employee. That is the only way you grow. When I reached out to my manager asking to work on a collaboration with NASA Goddard, so that Cognite could use their timeseries data to improve our solutions, my manager was really positive. For me, having that freedom means a lot, because you learn for yourself and you provide value for others as well as for the firm itself: a “win-win” situation.

In general, Cognite is a flexible, understanding and inclusive employer (and I am not getting paid to write these words). By that I mean they understand that an employee is effective during some parts of the day and not others. They understood that you sometimes need to take a step back to be more productive, and that social life as well as personal health mattered in making me succeed, which was an eye-opener for me as well.

This is what tech companies should be like, and I was lucky to have the chance to work there alongside some mega smart (and driven!!) people. If you are considering working in Norway, are already working here, or want to work for an employer that actually cares, I would consider Cognite.

All in all, a successful internship: making new friends, producing code and having fun. Thanks to everyone who was a part of that, much appreciated. If you have questions about how it was there, feel free to reach out and I will be happy to share some thoughts!

Using less time on tools

I have been at a few companies since starting in the industry as a developer, and I want to share some insight into how these companies in general, and maybe you as an individual, could do better, and why you should care.


I come from a frontend background with a sweet little mix of low-level programming from school, and I have touched and heavily worked with all aspects of a typical stack, in everything from startups developing their product to large-scale companies serving millions of users.

A common trait I have seen is that tooling or infrastructure is not necessarily a priority, although it should have been from the start. Let me walk you through a scenario where this becomes a problem.


For a startup, the timeline is probably the easiest to see, because the path to being late to the party is a very clear one. For frontend projects, some flavor of Create React App is mostly used, and some firms tend to maintain a fork of it to support their own variety.

Let me say this right away: if you are a startup, please do not maintain a fork of a tool unless there is a special reason to. You are going to spend too much time maintaining the fork, end up drowning in syncing against an upstream, and potentially lock yourself into a specific Node version because one of your dependencies only supports a legacy version of Node. Do not do that.

Anyway, here we are, developing an app on Create React App and forgetting about the burden to come later. You start using this boilerplate more and more, maybe even create a few applications based on it.

One year later, you are pushing to production. What could go wrong? For large scale companies, you more or less do the same thing, except you might have a bash script to set everything up for you.


Yeah, why though? Why spend time maintaining a fork, avoiding creating a tool to help you bootstrap a boilerplate with fresh deps, knowing that you will be blocked later? The answer is that I do not know, but it has a lot to do with how much time people want to spend on a tool: they just want to develop a front-end app, cash out and be happy. I respect that, but I also do not. Why? Tools should be respected, not only because they are the building blocks you create your applications from, but because they will more than likely be the foundation you build from the next time you create an application.

In the example with Create React App, not creating a custom generator with, say, yeoman means that you will have to sync somewhere anyway. Instead, invest some time in creating a generator that fetches the latest code, adds some READMEs with your own specialized writing and maybe even adds your own linting rules, as sketched below. It will save you a lot of time, and it will reduce the amount of work you do with the tool in general.
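Here is a rough sketch of what such a generator could look like with yeoman (the prompt, template layout and names are illustrative, not a drop-in solution):

```js
// generators/app/index.js
const Generator = require("yeoman-generator");

module.exports = class extends Generator {
  async prompting() {
    this.answers = await this.prompt([
      { type: "input", name: "appName", message: "Project name?" },
    ]);
  }

  writing() {
    // Copy your curated boilerplate (fresh deps, your lint rules,
    // your own READMEs) into the target folder.
    this.fs.copyTpl(
      this.templatePath("**/*"),
      this.destinationPath(this.answers.appName),
      { appName: this.answers.appName }
    );
  }
};
```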

Doing so will really help the health of your infrastructure in general; as mentioned, it reduces maintenance time, and it spares you from the cascade of not upgrading dependencies (security, deprecation, bugfixes).

On Reusability

Reusing existing tools out there is, for some reason, not yet widely adopted. Reusing code is good, and it reduces the amount of waste you produce, much like recycling. For companies, reusing a tool that does exactly what you want to build is nice, because it allows you to not maintain it at all, or to maintain a smaller slice of the whole. For instance, big corps have spent time creating tools that visualize data for you, generate component pieces or even provide linting rules that make sense; no need to reinvent the wheel.

On Specialization

For tools, I highly recommend that you spend a minute or two learning the tool, because it really pays off for performance and for shipping less code. It will help you create applications that can scale, for instance across different stacks. This could mean understanding webpack well enough to know what it does, so that you can eject your Create React App and support web components, or multiple stacks. That knowledge will be put to good use long term.


Try to build upon existing tools to create code that is scalable, thereby reducing the maintenance cost of keeping those tools alive. Use other tools, and try to reduce the amount of boilerplate code you create that ends up outdated or broken, so that you can specialize it for your application in production later.

Writing a solid Google Summer of Code Proposal

I’m a part of webpack, an organization that participated in Google Summer of Code in 2018 and this year (2019). In this post, I’ll share some tips on how to write a proposal that will increase your chances of getting accepted. As a mentor I’ve been reviewing a lot of proposals, and this is a summary of best practices.


What are you applying to work on? What is the organization doing, and what does the project you want to contribute to do? This is where you explain, at a higher level, what the project is.

After a short introduction of the organization and the project you want to contribute to, continue with an introduction of your idea. What about your proposal makes us want to accept you? What makes it unique?



This is probably the longest and most important section of your proposal. In the previous section you highlighted your idea; this is the section where you provide an overview of how to solve the problem.

Start by explaining the problem space and the problem you are trying to solve. After doing so, elaborate on how you intend to solve it. A non-technical anecdote is not enough: you will need to come up with a layout that explains the solution at a technical level, without going too deep.

A tip for writing these kinds of texts is to discuss the problem as if a selection committee were listening to what you are saying and asking you questions about problems that might arise. Remember to split your solution into multiple sub-sections.



What’s your schedule? How will you follow through on your proposal? Explain how you will dedicate your time during the summer to submit deliverables. This section is important, because it will be the reference for your deliverables over the summer. GSoC has several phases, so make sure you write a table or an overview of what you will be doing during each of them.

For instance:

Phase 1: 28–29th of February:

These phases are evaluated by mentors, and they will use your proposal as a reference to determine whether you are passing or not. Do you have anything to mention that we need to know about? Are you going on holiday? How much time are you planning to spend each day or week? Planning, and letting the mentors know, will be in your favor.

It does not do to leave a live dragon out of your calculations, if you live near him.
~ J.R.R. Tolkien, The Hobbit

Important phases to mention:

May 6–27: Community bonding

May 27–June 24: Phase 1

June 28–July 24: Phase 2

July 24–August 19: Phase 3



If you have contributed to the project or organization before, that is a positive thing. If so, mention what you contributed, as it shows that you are motivated and know how to contribute to the project.

In this section you should write about yourself, a little like a resume, but keep it relevant. Include personal details, experience, OSS experience (none is OK; that’s a reason for applying), work experience and other relevant information. If you do not have much experience, a tip is to give us some examples of projects you’ve done at school and how you solved them.


Few Remarks

Example Structure


I hope this guide has helped you get a feel for how a proposal is structured. It is not the one true recipe, though; you can shape your proposal as you’d like, as long as it is thorough and contains relevant information. Think of it like a university report or a science report. Feel free to reach out to me if you have any questions, I am happy to help.

First Meaningful Paint: When page content finally shows up to the party

Painting is a nice way to bring ideas to life

First Meaningful Paint, more often referred to as FMP, is a key metric in web performance. It’s important because it tells us more about how the general user perceives your website. In 2014 the average website sent 75 requests, so let’s focus on one thing and see what we can do there.

Asynchronous Loading & non-blocking rendering

Asynchronous loading and non-blocking rendering are all about loading files on demand. This means that your first priority is always to make the website look interactive, followed by functionality efforts. Most modern websites use some sort of build tool, whether it is gulp, Parcel or webpack. The good thing about these tools is that they often provide these mechanisms out of the box.

Async loading is a way to defer loading a module or a JavaScript file until it is really needed. Have a look at the webpack documentation for an example of how this is achievable, as sketched below. Google Developers also has a nice tutorial on just this theme.
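As a minimal sketch of the idea (the file name and function are made up for the example), a dynamic import() keeps a module out of the initial bundle, and a bundler like webpack will split it into its own chunk that is only fetched when the code runs:

```js
// The chart code stays out of the critical rendering path and is
// only downloaded when the user actually asks for it.
const button = document.querySelector("#show-chart");

button.addEventListener("click", async () => {
  const { renderChart } = await import("./chart.js");
  renderChart(document.querySelector("#chart"));
});
```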

I’ll give you two other resources to check out:


Lighthouse is a tool designed to analyze your webpage and its metrics. It will give you some indications of what you did right and what you did wrong, so you have a chance to improve your website’s performance and loading time. You should check it out.
Lighthouse | Tools for Web Developers | Google Developers


Performance is important but hard to master. Use Lighthouse to monitor your page and try to find out where your application’s bottleneck is. WebPageTest is another tool you can use to audit your application.

Building Front End Applications with Deno

This is a Node.js logo

I recently stumbled into Deno, which is a TypeScript runtime for something something blockchain AI.

Thing is, Deno is great, because it doesn’t leave a node_modules folder in your application, so you’ll have a better time maintaining stuff. It’s still flaky at some points, and there are some aspects of it that aren’t quite done.

The current trend in web development is also directed towards having fewer dependencies visible in your root folder and storing complexity or dependencies somewhere else. This is great, and developers will have a better time maintaining stuff.

I started by experimenting with a simple TCP server that would receive a request, to which I’d answer with a static web page. Simple enough.
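Roughly, the experiment looked like this, assuming the std http server API from Deno’s early releases (the exact import path and API have changed between versions):

```js
import { serve } from "https://deno.land/std/http/server.ts";

const server = serve({ port: 8000 });
console.log("Listening on http://localhost:8000/");

for await (const req of server) {
  // Answer every request with a static page.
  req.respond({
    headers: new Headers({ "content-type": "text/html" }),
    body: "<h1>Hello from Deno</h1>",
  });
}
```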

Deno is great as an isolated server instance, but porting existing dependencies might take a while, and people aren’t quite sold on the good parts Deno provides. Anyway, I was hacking away and there were some complications; nothing big, but enough to make you bang your head a bit.

Deno doesn’t support the CJS format in imports. As it is an isolated Rust instance around Google’s V8 sandbox, sprinkled with TypeScript on top, it’s hard to import existing front-end solutions such as React and Preact into your Deno app.

Now why would I use React and these deps in Deno? My primary goal was to emit zero bundles to disk and serve everything on the go, like you want your coffee. Secondly, Deno provides TypeScript out of the box, so creating a new React application would simply be done by running:

$ deno ./index.js

and you would have access to both JavaScript & TypeScript. Best of both worlds.

Lastly, I’m procrastinating.

To start this project, I went ahead and imported Preact from unpkg (a CDN provider). It turns out, as I’ve briefly explained here, that Deno doesn’t like CJS files, so I tried to find an MJS CDN for Preact (MJS and Deno work great together, by the way; let’s use MJS in NodeJS, don’t @me). I found the CDN, and it was all honey and ice until I wanted to render actual React/Preact components with Deno.

I thought: hmm, I’m hungry, so I ate. After that I went ahead and started working on a solution to build front-end applications with Deno using JS, without having to rewrite any modules. Deno is a runtime, so it’s hard. However, we can do dynamic import injection (read a file, transform it and import the actual module). We’re going to do this because regular JS modules don’t suffice; we need to transform the CJS modules used in front-end code into ES6 modules.
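To show what I mean by dynamic import injection, here is a minimal sketch: read a module’s source, apply a naive CJS-to-ESM rewrite, and import the result from a data: URL so nothing touches disk. The file name and the single-export assumption are illustrative:

```js
// Read the legacy CJS module's source from disk.
const source = await Deno.readTextFile("./legacy.js");

// Naive rewrite: assumes one `module.exports = ...` assignment.
const esm = source.replace(/module\.exports\s*=/, "export default");

// Deno can import modules straight from data: URLs, in memory.
const mod = await import(
  "data:application/javascript," + encodeURIComponent(esm)
);
console.log(mod.default);
```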

Thing is, we already have a tool that does the trick for us: webpack. However, webpack is written in CJS, so we need to convert that into ES6 first. Once that is done, one could run webpack in memory, compiling files without outputting anything locally. Then you will be able to serve those in-memory files from your local TCP server without actually writing anything to disk. Neat, right?
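In Node terms (before any Deno porting), compiling in memory could look roughly like this, using webpack’s pluggable output filesystem together with memfs; the entry path and output path are assumptions:

```js
// Sketch: run webpack against an in-memory output filesystem so the
// bundle is never written to disk.
const path = require("path");
const webpack = require("webpack");
const { createFsFromVolume, Volume } = require("memfs");

const compiler = webpack({ entry: "./src/index.js", mode: "development" });
compiler.outputFileSystem = createFsFromVolume(new Volume());

compiler.run((err, stats) => {
  if (err) throw err;
  // webpack's default output is dist/main.js relative to the cwd.
  const bundle = compiler.outputFileSystem.readFileSync(
    path.resolve("dist", "main.js"),
    "utf8"
  );
  console.log(`Bundle is ${bundle.length} bytes, held only in memory`);
});
```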

You will also be able to develop without setting up transpilation, thanks to the nature of Deno.

I’ve started some work on this, which you can find here:
