Even Stensberg

Thoughts of mine


Spending less time on tools

I have worked at a few companies since starting in the industry as a developer, and I want to share some insight into how these companies in general, and maybe you as an individual, could do better, and why you should care.

Introduction

I come from a frontend background with a sweet little mix of low-level work from school, though I have touched, and heavily worked with, all aspects of a typical stack: everything from startups developing their product to large-scale companies serving millions of users.

A common trait I have seen is that tooling and infrastructure are not necessarily a priority, although they should have been from the start. Let me walk you through a scenario where this became a problem.

Problem

For a startup, the timeline is probably the easiest to see, because the path that leads to being late to the party is so clear. For frontend projects, some flavor of Create React App is mostly used, and some firms tend to maintain a fork of it to support their own variety.

Let me say this up front: if you are a startup, please do not maintain a fork of a tool unless there is a special reason to. You are going to spend too much time maintaining the fork, end up drowning in syncing against upstream, and potentially lock yourself into a specific Node version because one of your dependencies only supports a legacy version of Node. Do not do that.

Anyway, here we are, developing an app with Create React App and forgetting about the burden to come. You start using this boilerplate more and more, maybe even create a few applications based on it.

One year later, you are pushing to production. What could go wrong? For large-scale companies, you more or less do the same thing, except you might have a bash script to set everything up for you.

Why

Yeah, why though? Why spend time maintaining a fork, and avoid creating a tool that bootstraps a boilerplate with fresh dependencies, knowing that you will be stopped later? The answer is that I do not know, but it has a lot to do with how much time people want to spend on a tool: they just want to develop a front-end app, cash out and be happy. I respect that, but I also do not. Why? Tools should be respected, not only because they are the building blocks you create your applications from, but because they will more than likely be the foundation you build from the next time you create an application.

From the example with Create React App, not creating a custom generator with, say, Yeoman means that you will have to sync somewhere anyway. Instead, invest some time in creating a generator that fetches the latest code, adds some READMEs with your own specialized writing, and maybe even adds your own linting rules. It will save you a lot of time and reduce the amount of work you do with the tool in general.
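
To make that concrete, here is a minimal sketch of what such a generator could look like with the classic yeoman-generator package; the file names and dependency list are just placeholders:

const Generator = require('yeoman-generator');

module.exports = class extends Generator {
  writing() {
    // Copy your own templated files (README, lint config) into the new project
    this.fs.copyTpl(
      this.templatePath('README.md'),
      this.destinationPath('README.md'),
      { appName: this.appname }
    );
  }

  install() {
    // Always pull the latest dependencies instead of pinning a fork
    this.npmInstall(['react', 'react-dom']);
  }
};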

Doing so will really help the health of your infrastructure in general, reduce maintenance time, and spare you the cascading consequences of not upgrading dependencies (security, deprecation, bug fixes).

On Reusability

Reusing existing tools is, for some reason, not yet widely adopted. Reusing code is good: it reduces the amount of waste you produce, much like recycling. For companies, reusing a tool that does exactly what you want to build is nice, because you will not have to maintain it, or will only have to maintain a smaller slice of the whole code. For instance, big corporations have already spent time creating tools that visualize data for you, generate component pieces, or provide linting rules that make sense; there is no need to reinvent the wheel.

On Specialization

For tools, I highly recommend that you spend some time learning how a tool works, because it really matters for performance and for shipping less code. It will help you create applications that can scale, for instance across different stacks. This could mean understanding webpack well enough to know what it does, so that you can eject your Create React App and support web components, or multiple stacks. That knowledge will be put to good use long term.
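
As a rough illustration, ejecting boils down to owning a configuration like this yourself (a minimal sketch, not CRA's actual output):

// webpack.config.js
module.exports = {
  entry: './src/index.js',
  output: {
    path: __dirname + '/dist',
    filename: 'bundle.js'
  },
  module: {
    rules: [
      // Swap or extend loaders here to support other stacks, e.g. web components
      { test: /\.js$/, exclude: /node_modules/, use: 'babel-loader' }
    ]
  }
};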

Summary

Try to build upon existing tools to create code that is scalable, thereby reducing the maintenance cost of keeping those tools alive. Reuse other tools, and try to reduce the amount of outdated or broken boilerplate code you create, so that you can specialize what remains for your application in production later.

Writing a solid Google Summer of Code Proposal

I'm a part of webpack, an organization that participated in Google Summer of Code in 2018 and this year (2019). In this post, I'll share some tips on how to write a proposal that will increase your chances of getting accepted. As a mentor I've been reviewing a lot of proposals, and this is a summary of best practices.

Introduction

What are you applying to work on? What is the organization doing, and what does the project you want to contribute to do? This is where you explain, at a high level, what the project is.

After a short introduction of the organization and the project you want to contribute to, move on to an introduction of your idea. What makes your proposal one we would want to accept? What makes it unique?


Description

This is probably the longest and most important section of your proposal. In the previous section you highlighted your idea; this is the section where you provide an overview of how to solve the problem.

Start by explaining the problem space and the problem you are trying to solve. After doing so, elaborate on how you intend to solve it. A non-technical anecdote is not enough. You will need to come up with a layout that explains the solution at a technical level, without going too deep.

A tip for writing these kinds of texts is to discuss the problem as if a selection committee were listening to what you are saying and asking you questions about problems that might arise. Remember to split your solution into multiple sub-sections.


Timeline

What's your schedule? How will you follow through on your proposal? Explain how you will dedicate your time during the summer to submit deliverables. This section is important, because it will be the reference for your deliverables over the summer. GSoC has several phases, so make sure you write a table or an overview of what you will be doing during each of them.

For instance:

Phase 1: 28th–29th of February:

These phases are evaluated by mentors, and they will use your proposal as a reference to determine whether you are passing or not. Do you have anything to mention that we need to know about? Are you going on holiday? How much time are you planning to spend each day or week? Planning ahead and letting the mentors know will work in your favor.

It does not do to leave a live dragon out of your calculations, if you live near him.
~ J.R.R. Tolkien, The Hobbit

Important phases to mention:

May 6th–27th — Community bonding

May 27th–June 24th — Phase 1

June 28th–July 24th — Phase 2

July 24th–August 19th — Phase 3


Experience

If you have contributed to the project or organization before, that is a positive thing. If so, mention how you contributed, as it shows that you are motivated and know how to contribute to the project.

In this section you should write about yourself, a little like a resume, but keep it relevant. Include personal details, OSS experience (none is OK; that is a reason for applying), work experience and other relevant information. If you do not have much experience, a tip is to give us some examples of projects you have done at school and how you solved them.


Few Remarks

Example Structure

Summary

I hope this guide has helped you get a feel for how a proposal is structured. This is not the only recipe, though; you can shape your proposal however you'd like, as long as it is thorough and contains relevant information. Think of it like a university report or a science report. Feel free to reach out to me if you have any questions; I'm happy to help.

First Meaningful Paint: When page content finally shows up to the party

Painting is a nice way to bring ideas to life

First Meaningful Paint, more often referred to as FMP, is a key metric in web performance. It's important because it tells us a lot about how the general user perceives your website. In 2014 the average website sent 75 requests, so let's focus on one thing and see what we can do there.

Asynchronous Loading & non-blocking rendering

Asynchronous loading and non-blocking rendering are all about loading files on demand. This means that your first priority is always to make the website look interactable, followed by functionality efforts. Most modern websites use some sort of build tool, whether it is gulp, Parcel or webpack. The good thing about these tools is that they often provide these mechanisms out of the box.

Async loading is a way to defer loading a module or a JavaScript file until it is really needed. Have a look at the webpack documentation for an example of how this is achievable. Google Developers also has a nice tutorial on just this theme.
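
As a quick sketch of the idea (the module and element names here are made up): webpack turns a dynamic import() into a separate chunk that is only fetched when the code actually runs:

document.getElementById('show-chart').addEventListener('click', () => {
  // './chart' is split into its own chunk and loaded on demand
  import('./chart')
    .then(({ renderChart }) => renderChart())
    .catch((err) => console.error('Failed to load chunk', err));
});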

I'll give you two other resources to check out:

Lighthouse

Lighthouse is a tool designed to analyze your webpage and its metrics. It will give you some indications of what you did right and what you did wrong, so you have a chance to improve your website's performance and loading time. You should check it out.
Lighthouse | Tools for Web Developers | Google Developers
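
If you prefer the terminal over Chrome DevTools, the Lighthouse CLI runs the same audit (assuming you have Node installed; swap in your own URL):

$ npx lighthouse https://example.com --view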

Conclusion

Performance is important but hard to master. Use Lighthouse to monitor your page and try to find out where your application's bottleneck is. WebPageTest is also a tool you could use to audit your application.

Building Front End Applications with Deno

This is a nodeJS logo

I recently stumbled into Deno, which is a TypeScript runtime for something something blockchain AI.

Thing is, Deno is great, because it doesn't leave a node_modules folder in your application, so you'll have a better time maintaining stuff. It's still flaky at some points, and there are some aspects of it that aren't quite done.

The current trend in web development is also toward having fewer dependencies visible in your root folder and storing complexity or dependencies elsewhere. This is great, and developers will have a better time maintaining stuff.

I started by experimenting with a simple TCP server that would take a request and answer with a static web page. Simple enough.
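
For reference, a minimal sketch of such a server using today's Deno API (the std/http API I used at the time has since changed; index.html is a placeholder):

// Serve the same static page for every request, straight from disk
Deno.serve({ port: 8000 }, async () => {
  const body = await Deno.readTextFile('./index.html');
  return new Response(body, { headers: { 'content-type': 'text/html' } });
});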

Deno is great as an isolated server instance, but porting existing dependencies might take a while, and people aren't quite sold on the good parts Deno provides. Anyway, I was hacking away and there were some complications; nothing big, but enough to make you bang your head a bit.

Deno doesn't support the CJS format in imports. Since this is an isolated Rust instance with Google's V8 sandbox, sprinkled with TypeScript on top, it's hard to import existing front-end solutions such as React and Preact into your Deno app.

Now why would I use React and these deps in Deno? My primary goal was to emit zero bundles to disk and serve everything on the go, like you want your coffee. Secondly, Deno provides TypeScript out of the box, so creating a new React application would simply be a matter of running:

$ deno run ./index.js

and you would have access to both JavaScript and TypeScript. The best of both worlds.

Lastly, I'm procrastinating.

To start this project, I went ahead and imported Preact from unpkg (a CDN provider). It turns out, as I've briefly explained here, that Deno doesn't like CJS files, so I tried to find an MJS CDN for Preact (MJS and Deno work great together by the way; let's use MJS in Node.js, don't @me). I found the CDN, and it was all honey and ice until I wanted to render actual React/Preact components with Deno.

I thought: Hmm, I'm hungry, so I ate. After that I went ahead and started working on a solution to build front-end applications with Deno using JS, without having to rewrite any modules. Deno is a runtime, so it's hard. However, we can do dynamic import injection (read a file, parse it and import the actual module). We're going to do this because regular JS modules don't suffice; we need to transform the CJS modules used in front-end code into ES6 modules.
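
A hypothetical sketch of that dance; the regex is far too naive for real-world CJS, but it shows the read, transform and import steps without touching disk:

// Read a CJS file and naively rewrite `const x = require('y')` to an import
const source = await Deno.readTextFile('./legacy-module.js');
const esm = source.replace(
  /const\s+(\w+)\s*=\s*require\(['"]([^'"]+)['"]\);?/g,
  'import $1 from "$2";'
);

// Import the transformed source from a data URL, so nothing is written to disk
// (btoa only handles Latin-1; real code would base64-encode UTF-8 properly)
const mod = await import('data:application/javascript;base64,' + btoa(esm));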

Thing is, we already have a tool that does the trick for us: webpack. However, webpack is written in CJS, so we need to convert that to ES6 first. After that is done, one could run webpack in memory, which in turn compiles files in memory and emits them there instead of writing anything locally. Then you can serve those in-memory files from your local TCP server without actually emitting anything to disk. Neat, right?

You will also be able to develop without setting up transpilation because of the nature of Deno.

I've started some work on this, which you can find here:

Architectural Beams in Front End

Every now and then, a new post is published about how to work with a stack, regardless of library. Having done a lot of work in React, I have seen few posts that cover a well-defined architecture.

Front-end projects are unique in a way: unlike many back-end systems and legacy front-end libraries and frameworks, we can reuse more modules of code. At an abstract level, this is essentially what happens in module bundlers as well. We recognize identical modules and glue them together to spit out chunks of code that are optimized for our application, which in turn improves the user experience through faster parse and load times on web pages.

Architecture-wise, we can look at this from a different perspective. When we build a front-end project with reusability and user experience in mind, we can come up with a clean structure that is easy to navigate. This is reflected in the developer experience.

There are all kinds of approaches to a well-defined project structure, and how you choose to structure a front-end application built with React depends on what condition your project is in. Before expanding on that thought, let us define a good baseline for a traditional React application.

We start by creating a minimal project with React and webpack. For a clean top-level structure, I like to have all webpack configurations under one folder.

For different environments, a fairly standard convention is to suffix your files with the environment and use those configurations for the given npm command, such as npm run prod. This gives developers headspace, so that they do not need to navigate the entire webpack configuration to know which part is configured for a given environment.
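
As an illustration, the layout and scripts could look like this (the names are just a convention I like, not a requirement):

config/
  webpack.common.js
  webpack.dev.js
  webpack.prod.js

// package.json (excerpt)
"scripts": {
  "start": "webpack-dev-server --config config/webpack.dev.js",
  "prod": "webpack --config config/webpack.prod.js"
}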

At this point, the most important part is how you are going to structure the React application. I prefer a top-level folder named lib, although src works just as well.

Inside lib is where the important sections live. To make a project readable and maintainable, it is important to make a clear separation of concerns. For instance, if you have a REST API, it is smart to put the HTTP calls inside their own folder.

This setup keeps your view (React) cleaner to work with, instead of embedding REST API code in your view folders.
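
For instance, a hypothetical lib/api/users.js could own all the HTTP details:

// lib/api/users.js: the React views never call fetch() directly
export async function getUsers() {
  const response = await fetch('/api/users');
  if (!response.ok) {
    throw new Error(`GET /api/users failed with status ${response.status}`);
  }
  return response.json();
}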

Next up is React. Depending on which phase your project is in, a technique that has worked well in many of the projects I've been involved in is to split logic based on the different types of services your application offers, or based on the types of users your application has. This is up to you; if the users share the same layout, it might be better to split logic based on the services you provide.

In this example, your project could have a codename, and you might even have different stacks for different parts of your application. We split the architecture based on the services a given application might have.

In each of these service folders, we have a two-folder, one-file layout. Containers are responsible for fetching and handling data and sending it down to components. This way, there is a divide between handling data and visualizing it.
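
A small sketch of that split, with hypothetical names and hooks used for brevity:

// containers/UserListContainer.js: fetches and owns the data
import React, { useEffect, useState } from 'react';
import { getUsers } from '../../api/users';
import UserList from '../components/UserList';

export default function UserListContainer() {
  const [users, setUsers] = useState([]);
  useEffect(() => {
    getUsers().then(setUsers);
  }, []);

  // components/UserList only renders what it is handed
  return <UserList users={users} />;
}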

For styling, I like to keep styles local to each component, so that each component is isolated and it is easy to navigate and edit the design without having to spend time on global stylesheets. Another variant is a flat style structure, but that will not scale well once there are a lot of components.

In the utils folder, one normally keeps the pure JavaScript functions/helpers needed across the application. This might seem smart, but in reality, when developing a product there are a lot of situations where your project gets a bit noisy. A good middle way to strike is to put your common React components in this folder and abstract them into a private npm package or style guide later. You can also move them closer to the folders that use them the most (into common/utils folders) later.

Why is a strict folder structure really needed?

Once you are up and running and your application has gone through some development, it is more common to produce code on top of your existing stack than to refactor code to incrementally improve your application.

When the Egyptians built the pyramids, they didn't remove two bricks to move the structure; they stacked blocks on top of it. Likewise, you do not usually move big files around and re-arrange them every time you open a Pull Request; your team members would think you were insane for doing so.

Instead, developers tend to:

A) Live with a well-defined architecture and build layers on top of a solid foundation

B) Develop an unmaintainable codebase until they get depressed and quit, followed by another developer who hopefully starts refactoring the code

The ideal case is A. Since the market is heavily based on shipping code and making things work, it is obvious that you will need to start with a clean slate and then build your way up to complexity. In the end, you will save more money as a business by doing the work properly and then focusing on being fast, than by rushing to market with an unbearable codebase.

And there are more aspects to this argument, which are often proved empirically. As a newly hired developer you might get thrown straight into the frying pan without any knowledge of a battlefield of pitfalls. These pitfalls are usually centered around "getting started", "getting up and running" and so on.

I disagree that it has to be this way. Why? It's a developer's job to make the source code easy to navigate and easy to start, and a developer should never be unable to do their job because the infrastructure is too much of a headache to deal with.

The cost of a well-defined structure is clear, but so are the winning arguments: retaining talent and building a strong engineering team. From a manager's perspective, it might be hard to see the profit in investing time in making your code neat as opposed to making money, but making money is an indirect effect of how developers can spend their time focusing on development. In this case, maintainability.

Too many bugs in your bed? Replace it, don't fix it

I've been working in the industry for a couple of years now, and with the tooling that drives it, and I've seen a tendency to treat a bug as a singular case where one thing goes wrong, because the issue is obviously isolated and only occurs this one time and nowhere else. From an analyst's perspective, you know that these incidents aren't really unique; they share a common trait. Once you see more bugs of the same kind start to group, the definitive solution is to fix the way the system works, not to monkey-patch an issue because the stack is being replaced somewhere soon anyway.

You, as a developer, will save yourself tons of hours debugging, working around quirks and delaying timelines. The problem isn't the isolated instance, but the un-encapsulated structure around it.

This case is also related to performance, believe it or not. It is unfortunate that we do not have metrics comparing applications with well-defined structures to those without; my hypothesis is that we would see the well-defined ones being more performant than those that are smaller in features. Why? It's a prioritization problem. Larger apps have more work, sure, but they also have the advantage of a well-designed architecture.

Conclusion

If you are building an application, do it properly. Define good defaults and come up with a good structure before starting. It will yield great results in the long run, and even if your traction isn't big to start with, it will have a longer lifetime than a short-term solution.

Apache and PHP with OCI8 + Oracle Instant Client in Mac OSX

I'm running a 13-inch MacBook Pro on macOS 10.13.6 (High Sierra).

After spending some time in Apache trying to configure OCI8, I quickly concluded that it is a pain to configure. This post will go through how to get OCI8 and Instant Client up and running on your Mac.

Prerequisites

Getting started

PHP

Before actually doing anything, I'm making sure that I have PHP installed at the correct path and that the php.ini file is located somewhere I know.

$ php --version

I'm running PHP version 7.2.6

$ which php

php is under /usr/local/bin

$ php --ini

Our configuration file for PHP is under /usr/local/etc/php/7.2

The reason we are looking for this file is that this is where our OCI8 module will be included, in order for PHP and Apache to detect the dependency and use it. If you can't locate the file, you can create one with touch /etc/php/is/located/here/php.ini (substitute your own path). If you are wondering what your PHP path is, you can run which php. The easiest way to install PHP is via Homebrew, if you don't already have it installed.
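
For context, enabling the module once it is built later in this guide comes down to one line in that php.ini (the extension file name can vary by install):

extension=oci8.so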

Instant Client

Instant Client is what OCI8 depends on in order to run successfully. We will have to install Instant Client from the official Oracle site, and you might need to create a user.

Install the following: the Instant Client Basic package and the Instant Client SDK package.

After these have been downloaded, navigate to where you downloaded them. My downloads are located at /Users/ev1stensberg/Downloads/instantclient-x-macos.x64.

Now we are ready to link the dependencies into the system, so we can use them like regular command line applications. Before doing that, we need to unpack the zip files at a given location.

From experience, to avoid permission errors from the system and forgetting where the folder lives, I tend to extract these files under /opt/somewhere.

We begin by creating a directory for the zip extraction:

$ mkdir -p /opt/oracle

Second, we move the downloaded archives into the oracle folder:

$ mv /Users/ev1stensberg/Downloads/instantclient-basic-macos.x64-12.2.0.1.0-2.zip /opt/oracle

$ mv /Users/ev1stensberg/Downloads/instantclient-sdk-macos.x64-12.2.0.1.0-2.zip /opt/oracle

Third, we unzip the files we moved into the oracle folder:

$ unzip /opt/oracle/instantclient-basic-macos.x64-12.2.0.1.0-2.zip -d /opt/oracle

$ unzip /opt/oracle/instantclient-sdk-macos.x64-12.2.0.1.0-2.zip -d /opt/oracle

Optional: now that we have unzipped the files into the oracle folder, you can safely remove the zip files by running $ rm -rf /opt/oracle/*.zip.

If you failed at the third step, don't worry. You can move and unzip the files via Finder.

Integrating Instant Client To OSX

Now we are ready to actually use the programs we have installed. To do so, we will create symlinks into /usr/local, so that the system picks the files up globally, the same way it finds commands like cd or ls.

You can do so by doing:

$ sudo ln -s /opt/oracle/sdk/include/*.h /usr/local/include/
$ sudo ln -s /opt/oracle/*.dylib /usr/local/lib/
$ sudo ln -s /opt/oracle/*.dylib.11.1 /usr/local/lib/
$ sudo ln -s /opt/oracle/libclntsh.dylib.11.1 /usr/local/lib/libclntsh.dylib

You can't really verify these commands unless you symlink sqlplus as well. You can also look at this guide to try another approach. In this example we are creating soft symlinks into the Mac's /usr/local folder, which makes the system pick up the library as a global executable.

Next up is installing OCI8, which is a hard nut to crack. I'm going to show you the manual way to link OCI8, because the PECL way of doing it is quite buggy.

Installing and linking OCI8

When trying to install OCI8, it is useful to check whether the extension is loaded. To check if oci8 is enabled, you can add the line dd(extension_loaded('oci8')); to your entry PHP file (mine is index.php). Note that dd() is a Laravel helper; in plain PHP, var_dump() does the same job.

You will need to install OCI8 from source, as the package manager is sometimes hard to reason about. First, find your OCI8 download link and navigate to the downloaded folder.

This part is tricky, because PECL, the PHP package manager, usually extracts the package for us and runs the installation with symlinks. We start by extracting the tar (zipped) file into a destination:

$ tar -zxf oci8-x.x.x.tgz
$ cd oci8-x.x.x

After doing so, we need to prepare the package metadata using phpize. If you are having trouble, you will need to spend some time debugging why the command might fail, but here are some links:

If your command works, run phpize inside the extracted directory.

Now is the time to install OCI8, and you can run the following command:

$ ./configure --with-oci8=shared,instantclient,/path/to/instant/client/lib

$ make install

If you have trouble installing this at any point, I advise you to install OCI8 through PECL and then follow along with whatever script PECL runs. Reproduce those steps manually and fix the issues in the build steps.

Congrats! You're done!
