Week 2 Report

Multi-parallelism. Debugging. Merchant sign-ups.

After stabilising the website and adding a merchant sign-up form, the primary focus this week was debugging the codebase.

There’s still plenty of optimisation to do in all areas of the website, but as my priority is the codebase I need to split my time accordingly. The website’s doing pretty well overall, with a steady drip of sign-ups per day now totalling approximately 160.

Engagement is pretty stable for the users who scroll past the fold (the bottom of the landing screen you arrive on); roughly 15% of visitors make it past that point.

Engineering tasks shifted into an interesting gear with a general implementation of parallelism for some primary services, including venue and menu sorting. This involved refactoring their serial execution into a multi-threaded and multi-process implementation.

For those unfamiliar with parallelism, the idea in principle is to divide the work amongst available resources: splitting the work between more workers means a smaller workload per worker. Two principal ways of parallelising code are multi-threading and multi-processing.

Here’s how the processing divides up:

A thread is a set of tasks. For a single thread, the execution time is the sum of the execution times of each task, whereas with thread-level parallelism the determining factor is the maximum time within the set of tasks.
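
Here’s a minimal Python sketch of that difference; the task durations are made up and time.sleep stands in for real work:

```python
import threading
import time

def task(seconds):
    # Stand-in for one unit of work; sleep simulates an I/O-bound task
    time.sleep(seconds)

durations = [1, 2, 3]  # hypothetical task lengths in seconds

# Serial: total time is roughly the sum of the durations (~6 s)
start = time.time()
for d in durations:
    task(d)
print(f"serial: {time.time() - start:.1f}s")

# Threaded: total time is roughly the longest single duration (~3 s)
start = time.time()
threads = [threading.Thread(target=task, args=(d,)) for d in durations]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"threaded: {time.time() - start:.1f}s")
```

(One Python-specific caveat: because of the interpreter’s global lock, threads overlap like this mainly for I/O-bound work such as network calls; CPU-heavy work is normally pushed to processes instead, as below.)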

Any running script exists as its own entity in a computer’s RAM, with its own memory space, and this is known as a process. Multi-processing involves copying this entire process into another region of RAM and continuing the work there. You can see how this is useful for applications that receive a signal and then send back a response, without ever shutting down the receiver.
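
A rough sketch of that receive-and-respond pattern, using Python’s standard multiprocessing module (the messages are made up; this isn’t the actual Streamplate service code):

```python
import multiprocessing as mp

def responder(conn):
    # Runs in its own process (its own copy of the program in memory).
    # It keeps receiving messages and replying until told to stop.
    while True:
        msg = conn.recv()
        if msg == "stop":
            break
        conn.send(f"got: {msg}")

if __name__ == "__main__":
    parent, child = mp.Pipe()
    worker = mp.Process(target=responder, args=(child,))
    worker.start()

    # The main process stays free to do other work while the
    # responder process handles requests on the side.
    parent.send("hello")
    print(parent.recv())  # -> got: hello

    parent.send("stop")
    worker.join()
```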

In a similar way, multi-threading involves a division of work, but there’s no copying involved. Instead it’s a more granular separation of tasks, each of which can be assigned to a core on the computer. A computer’s CPU has cores that execute instructions, and more cores means greater computing power. Unless explicitly told otherwise, a machine will mostly execute a script in a single thread (that is, on a single core). But by assigning individual tasks within a script to individual cores, the work becomes parallelised and hence, usually, faster.
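
And a sketch of that task-per-worker split with Python’s standard executor pool. The sort_menu function and the data are hypothetical stand-ins, not the real venue/menu sorting code; in Python, CPU-bound tasks like sorting are typically handed to a pool of worker processes so they genuinely run on separate cores:

```python
from concurrent.futures import ProcessPoolExecutor

def sort_menu(menu):
    # Hypothetical stand-in for one unit of work (e.g. ranking one venue's menu)
    return sorted(menu)

if __name__ == "__main__":
    # A made-up batch of menus; each inner list is one independent task
    menus = [[3, 1, 2], [9, 7, 8], [5, 6, 4]]

    # The executor hands each task to a separate worker, so independent
    # sorts run side by side instead of one after another.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(sort_menu, menus))

    print(results)  # [[1, 2, 3], [7, 8, 9], [4, 5, 6]]
```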

You can see how this shaves off massive amounts of time for processing. It’s this particular type of architecture that enables Google, Facebook, YouTube and so on to serve content so promptly.

Goals for next week are to complete most of the iOS debugging along with full back-end integration, so there’s a solid connection between the client and the servers.

Thanks for reading and be sure to check out Streamplate!

Bryan :)

