I recently stumbled onto Deno, which is a TypeScript runtime built on V8 and Rust (not, despite how new tech gets pitched these days, something something blockchain AI).
Thing is, Deno is great because it doesn’t leave a node_modules folder in your application, so you’ll have an easier time maintaining stuff. It’s still flaky at some points, and there are some aspects of it that aren’t quite done.
The current trend in web development is also towards having fewer dependencies visible in your root folder and pushing that complexity somewhere out of sight. This is great, and developers will have a better time maintaining stuff.
I started by experimenting with a simple TCP server that would receive a request, and I’d answer with a static web page. Simple enough.
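To make that concrete, here is a minimal sketch of the kind of payload such a server hands back. The buildResponse helper name is made up for illustration; in an actual Deno server you would write this string to a connection obtained from Deno.listen.

```typescript
// Build a raw HTTP/1.1 response for a static page — the kind of bytes a
// bare TCP server writes straight back to the socket. `buildResponse` is
// a hypothetical helper, not part of any Deno API.
function buildResponse(body: string): string {
  return [
    "HTTP/1.1 200 OK",
    "Content-Type: text/html; charset=utf-8",
    `Content-Length: ${new TextEncoder().encode(body).length}`,
    "", // blank line separates headers from body
    body,
  ].join("\r\n");
}

const page = "<html><body><h1>Hello from Deno</h1></body></html>";
console.log(buildResponse(page));
```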
Deno is great as an isolated server instance, but porting existing dependencies might take a while, and people aren’t quite sold on the good parts Deno provides. Anyways, I was hacking away and there were some complications; nothing big, but enough to leave you banging your head a bit.
Deno doesn’t support the CJS module format in imports. Since it’s a Rust-based runtime around Google’s V8 sandbox, sprinkled with TypeScript on top, it’s hard to import already present front-end solutions such as React and Preact into your Deno app.
Now why would I use React and these deps in Deno? My primary goal was to emit zero bundles to disk and serve everything on the go, like you want your coffee. Secondly, Deno provides TypeScript out of the box, so creating a new React application could be done with little more than a single TypeScript entry file.
Lastly, I’m procrastinating.
To start this project, I went ahead and imported Preact from unpkg (a CDN provider). It turns out, as I’ve briefly explained here, that Deno doesn’t like CJS files, so I tried to find an MJS CDN for Preact (MJS and Deno work great together by the way; let’s use MJS in NodeJS, don’t @me). I found the CDN, and it was all milk and honey until I wanted to render actual React/Preact components with Deno.
I thought: hmm, I’m hungry, so I ate. After that I went ahead and started working on a solution to build front-end applications with Deno using JS, without having to rewrite any modules. Deno is a runtime, so this is hard. However, we can do dynamic import injection (read a file, transform it, and import the resulting module). We need this because regular JS modules don’t suffice on their own: the CJS modules used in front-end libraries have to be transformed into ES modules first.
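As a toy sketch of the transform idea: rewrite the CJS source as text before importing it. The cjsToEsm name is invented here, and real CommonJS has far more shapes (require calls, exports.foo assignments, and so on) than this single regex covers.

```typescript
// Naive sketch of rewriting a CommonJS module into an ES module as text.
// `cjsToEsm` is a hypothetical helper; a real transform must handle
// require(), exports.foo, conditional exports, etc.
function cjsToEsm(source: string): string {
  return source.replace(/module\.exports\s*=\s*/, "export default ");
}

const cjs = "module.exports = function add(a, b) { return a + b; };";
console.log(cjsToEsm(cjs));
// A runtime could then write the result to a temporary module (or a
// data: URL) and load it with a dynamic `await import(...)`.
```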
Thing is, we already have a tool that does the trick for us: webpack. However, webpack is written in CJS, so we need to convert it to ES modules first. Once that is done, one could run webpack in memory, which compiles files in memory and emits them there too, instead of writing anything locally. Then you can serve those in-memory files from your local TCP server without ever touching disk. Neat, right?
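The in-memory flow can be sketched with a plain Map standing in for the bundler’s output store; the names here (memoryFs, serveAsset) are made up for illustration, and in a real setup webpack would emit into an in-memory filesystem such as memfs.

```typescript
// Sketch of serving bundler output from memory instead of disk.
// `memoryFs` stands in for whatever in-memory filesystem the bundler
// emits into; all names here are hypothetical.
const memoryFs = new Map<string, string>();

// Pretend the bundler just emitted a chunk.
memoryFs.set("/bundle.js", "export const hello = () => 'hi';");

function serveAsset(path: string): string {
  const asset = memoryFs.get(path);
  if (asset === undefined) {
    return "HTTP/1.1 404 Not Found\r\n\r\n";
  }
  return `HTTP/1.1 200 OK\r\nContent-Type: text/javascript\r\n\r\n${asset}`;
}

console.log(serveAsset("/bundle.js"));
```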
You will also be able to develop without setting up transpilation because of the nature of Deno.
I’ve started some work on this, which you can find here:
Every now and then a new post is published about how to work with a given stack, regardless of library. Having done a lot of work in React, I’ve noticed there are few posts that cover a well-defined architecture.
Front-end projects are unique in a way, because unlike many back-end systems and legacy front-end libraries and frameworks, we can reuse more modules of code. At an abstract level, this is essentially what happens in module bundlers as well: they recognize those modules, glue them together, and spit out chunks of code optimized for our application, which in turn improves the user experience through faster parse and load times on web pages.
Architecture-wise, we can look at this from a different perspective. When we build a front-end project with reusability and user experience in mind, we can come up with a clean structure that is easy to navigate. This is reflected in the developer experience.
There are all kinds of variations on a well-defined project structure, and how you choose to structure a front-end application built with React depends on the state your project is in. Before expanding on that thought, let us define a good baseline for a traditional React application.
We start by creating a minimum project with React and webpack. For a clean top-tier structure, I like to have all webpack configurations under one folder.
For different environments, a fairly standardized convention is to suffix your files with the environment and use those configurations for the given npm command, such as npm run prod. This gives developers headspace, so they do not need to navigate the entire webpack configuration to know which part applies to a given environment.
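As a sketch, the npm scripts might map to suffixed config files like this; the file names below follow the convention described above but are assumed examples, not taken from a real project.

```json
{
  "scripts": {
    "dev": "webpack serve --config webpack/webpack.config.dev.js",
    "prod": "webpack --config webpack/webpack.config.prod.js"
  }
}
```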
At this point, the most important part is how you structure the React application itself. I prefer a top-level folder named lib, although src works just as well.
Inside lib is where the important sections live. To make a project readable and maintainable, it is important to make a clear separation of concerns. For instance, if you have a REST API, it is smart to put the HTTP calls inside their own folder.
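As a sketch, such a folder might expose small functions that keep HTTP concerns out of the view layer. The path, base URL, and endpoint below are hypothetical examples, not from any specific project.

```typescript
// lib/api/users.ts (hypothetical path) — API calls live here, not in views.
const BASE_URL = "https://api.example.com"; // assumed example endpoint

// Pure URL builder: trivial to unit test, no network involved.
export function userUrl(id: number): string {
  return `${BASE_URL}/users/${id}`;
}

// Minimal response shape so the fetch function can be injected,
// which also makes the module testable without a network.
type Fetcher = (
  url: string,
) => Promise<{ ok: boolean; status: number; json(): Promise<unknown> }>;

export async function getUser(id: number, fetcher: Fetcher): Promise<unknown> {
  const res = await fetcher(userUrl(id));
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  return res.json();
}
```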
This setup makes your view (React) cleaner to work with instead of embedding REST-API code in your view folders.
Next up is React. Depending on which phase your project is in, a technique that has worked well in many of the projects I’ve been involved in is to split logic based on the different types of services your application offers, or based on the types of users it has. This is up to you; if all users share the same layout, it might be better to split by the services you provide.
In this example, your project could have a codename, and you might have different stacks on different parts of your application. As seen in the picture above, we split the architecture based on the services a given application might have.
In each of these service folders, we have a two-folder, one-file layout. Containers are responsible for fetching and handling data and sending it down to components. This gives a clean divide between handling data and visualizing it.
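Stripped of JSX, the container/component split can be sketched as plain functions; the names are invented for illustration, and in real React the container would be a stateful component while the component stays purely presentational.

```typescript
// A "component" only turns data into markup — no fetching, no state.
function userList(users: string[]): string {
  return `<ul>${users.map((u) => `<li>${u}</li>`).join("")}</ul>`;
}

// A "container" owns the data: it fetches and hands it to the component.
// The loader is injected so this sketch needs no real network.
async function userListContainer(
  loadUsers: () => Promise<string[]>,
): Promise<string> {
  const users = await loadUsers();
  return userList(users);
}

// Usage with a stubbed data source:
userListContainer(async () => ["Ada", "Linus"]).then(console.log);
// prints <ul><li>Ada</li><li>Linus</li></ul>
```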
For styling, I like to keep styles local to each component, so that every component is isolated and it is easy to navigate and edit the design without spending time on global stylesheets. Another variant is a flat style structure, but that does not scale well once there are a lot of components.
Why is a strict folder structure really needed?
Once your application is underway and has gone through some development, it is more common to produce code on top of your existing stack than to refactor code to incrementally improve your application.
When the Egyptians built pyramids, they didn’t remove two bricks to move the structure; they stacked blocks on top of it. Likewise, you do not usually move big files around and re-arrange them in every Pull Request. Your team members would think you’re insane for doing so.
Instead, developers tend to:
A) Live with a well defined architecture and build layers on top of a solid foundation
B) Develop an unmaintainable codebase until they get depressed and quit, followed by another developer hopefully starting to refactor the code
The ideal case is A. Since the market is heavily geared towards shipping code and making things work, it is obvious that you will need to start with a fresh slate and then build your way up to complexity. In the end, you will save more money as a business by doing the work properly and then focusing on being fast, rather than rushing to market with an unbearable codebase.
And there are more aspects to this argument, which are often proved empirically. As a newly hired developer you might get thrown straight into the frying pan without any knowledge of a battlefield of pitfalls. These pitfalls usually center around “getting started”, “getting up and running”, and so on.
Some would say that is just part of the job. I disagree. Why? It’s a developer’s job to make the source code easy to navigate and easy to start with, and a developer should never be blocked from doing their job because the infrastructure is too much of a headache to get by with.
The cost of a well-defined structure is clear, but so are the winning arguments: not losing talent, and building a strong engineering team. From a manager’s perspective it might be hard to see the profit in investing time making your code neat as opposed to making money, but making money is an indirect effect of how developers can spend their time focusing on development. In this case, maintainability.
Too many bugs in your bed? Replace it, don’t fix it
I’ve been working in the industry for a couple of years now, along with the tooling that drives it, and I’ve seen a tendency to treat a bug as a singular case where one thing goes wrong, because the issue is obviously isolated and only occurs this one time and nowhere else. From an analyst’s perspective, you know these incidents aren’t really unique; they share a common trait. Once you see more bugs of the same kind start to group, the definitive solution is to fix the way the system works, not to monkey-patch the issue because the stack is being replaced soon anyway.
You, as a developer, will save yourself tons of hours debugging, working around quirks, and delaying timelines because of this. The problem isn’t the isolated instance, but the un-encapsulated structure around it.
This is also related to performance, believe it or not. It is unfortunate that we have no metrics comparing applications with well-defined structures to those without; my hypothesis is that the well-defined ones end up more performant. Why? It’s a prioritization problem. Larger apps have more work to do, sure, but they also have the advantage of a well-designed architecture.
If you are building an application, do it properly. Define good defaults and set up a sound structure before starting. It will yield great results in the long run, and even if your traction isn’t big to start with, the application will have a longer lifetime than a short-term solution.
I’m running a 13-inch MacBook Pro on macOS 10.13.6 (High Sierra).
After spending some time in Apache trying to configure OCI8, I quickly concluded that it is a pain. This post will go through how to get OCI8 and Instant Client up and running on macOS.
You have PHP installed on your Mac (v7.x is the latest, but any version is fine)
You have knowledge of what a symlink is
You can read
Before actually doing anything, I’m making sure that I have PHP installed at the correct path and that the php.ini file is located somewhere I know.
$ php --version
$ which php
$ php --ini
Our configuration file for PHP is under /usr/local/etc/php/7.2
The reason we are looking for this file is that it is where our OCI8 module gets included, so that PHP and Apache can detect and use the dependency. If you don’t have the file, you can create one by doing touch /etc/php/is/located/here/php.ini . If you are wondering what your PHP path is, you can do which php. If you don’t have PHP yet, the easiest way to install it is via Homebrew.
Instant Client is what OCI8 depends on in order to run successfully. We will have to install Instant Client from the official Oracle site, and you might need to create a user.
Install The Following:
After these have been downloaded, navigate to where you downloaded them. My downloads are located at /Users/ev1stensberg/Downloads/instantclient-x-macos.x64.
Now we are ready to link the dependencies into the system so they work like regular command line applications. Before doing that, we need to unpack the zip files at a given location.
From experience, to avoid permission errors and forgetting where the folder is located, I tend to extract these files under /opt/somewhere .
$ mkdir -p /opt/oracle
Secondly, we move the downloaded files to the oracle folder
$ mv /Users/ev1stensberg/Downloads/instantclient-basic-macos.x64-x.x.x.x.x-2.zip /opt/oracle
$ mv /Users/ev1stensberg/Downloads/instantclient-sdk-macos.x64-x.x.x.x.x-2.zip /opt/oracle
Third, unzip the files we have moved to the oracle folder
$ unzip /opt/oracle/instantclient-basic-macos.x64-x.x.x.x.x-2.zip
$ unzip /opt/oracle/instantclient-sdk-macos.x64-x.x.x.x.x-2.zip
Optional: now that we have unzipped the files into the oracle folder, you can safely remove the zip files by doing $ rm -rf /opt/oracle/*.zip .
If you failed at the third step, don’t worry. You can move and unzip the files via Finder.
Integrating Instant Client To OSX
Now we are ready to actually use the programs we have installed. To do so, we will create symlinks into /usr/local so the system picks up the Instant Client headers and libraries globally, much like executables such as cd or ls are found on your PATH.
You can do so by doing:
$ sudo ln -s /opt/oracle/sdk/include/*.h /usr/local/include/
$ sudo ln -s /opt/oracle/*.dylib /usr/local/lib/
$ sudo ln -s /opt/oracle/*.dylib.11.1 /usr/local/lib/
$ sudo ln -s /opt/oracle/libclntsh.dylib.11.1 /usr/local/lib/libclntsh.dylib
You can’t really verify these commands unless you symlink sqlplus as well. You can also look at this guide to try another approach. In this example we are creating a soft symlink into our Mac’s /usr/local folder, which makes the system pick up the library as a global executable.
Next up is to install OCI8, which is a hard nut to crack. I’m going to show you the hard way to link OCI8, because the PECL way of doing it is quite buggy.
Installing and linking OCI8
When trying to install OCI8, it might be productive to check if the extension is loaded. To check if oci8 is enabled, you can add the line dd(extension_loaded('oci8')); to your entry PHP file (mine is index.php). Note that dd() comes from Laravel; plain var_dump() works too.
You will need to install OCI8 from source, as the package manager is sometimes hard to reason about. First, find your OCI8 download link and navigate to the downloaded folder.
This part is tricky, because PECL, the PHP extension package manager, usually extracts the package for us and runs the installation. First, we extract the tar (gzipped) file into a destination:
$ tar -zxf oci8-x.x.x.tgz
$ cd oci8-x.x.x
After doing so, we need to prepare the package metadata using phpize. If you are having trouble, you will need to spend some time debugging why the command might fail, but here are some links:
If your command works, run phpize inside the extracted directory.
Now is the time to install OCI8, and you can run the following command:
$ ./configure --with-oci8=shared,instantclient,/path/to/instant/client/lib
$ make install
If you have trouble installing this at any point, I advise you to install OCI8 through PECL and follow along with whichever script PECL runs. Reproduce those steps manually and fix the issues in the build steps.
Congrats! You’re done!