August 31

Make Google Calendar your Best Friend

We all need a best friend. At a professional level, a key attribute of a good friend is someone who elevates you and holds you accountable to your promises.

It doesn’t always take a human being to do that, though. Google Calendar is absolutely wonderful if you’re someone who is forgetful, disorganized, lacks accountability, or just needs some structure in your routine.

I use Google Calendar religiously in my day-to-day life, and heavily advocate for it. Here are a few ways I use it.

Prerequisite

To use Google Calendar in the way I am suggesting, there are some requirements. First, you must have a Gmail account to be able to use Calendar. Second, you must turn on Calendar notifications for this account on both your browser/desktop and your mobile device – so that you are alerted promptly when action is required on your part. Finally – and this is the most important element – you need to promise yourself that if you put something on the Calendar, you will adhere to it. It is crucial that you check your Calendar each day and treat it as a to-do list – at the exact times you allocate.

Accountability

Have to pay bills? Or follow up on mail? Throw it on your Calendar weekly or monthly, and take care of it when it’s planned. I even use it to make sure I go to the gym each weekend. No excuses!

Oh, a friend wants to go to dinner with you? Check your calendar. Either reschedule your prior commitment, or propose a different time for the dinner.

Joint Appointments

Sometimes my wife and I schedule back-to-back appointments – or even dates. It’s just better organized if I throw it on the Calendar and add her as an invitee, so that we’re both on the same page…or Calendar. I go as far as adding the address too!

Reminders

One of my favorite features is Reminders. Who invented birthdays anyway? It’s hard to keep track – especially if you don’t use social media as much as you should. The cool thing about Calendar is that not only can you create new Calendars to toggle on/off, but you can set recurring reminders for things like Birthdays, Vehicle and Home scheduled routine maintenance, etc!

To-Do Lists

Instead of putting together a separate to-do list (in an app like Google Keep), I like to kill two birds with one stone. Why not throw your task directly on the Calendar for when you’d like to tackle it? That way, it’s on a list…and it’s on a schedule – YOUR schedule.

With that said, I hope you made a new friend after reading this article. Give it a shot. Once you start using Calendar, you won’t go back!

July 31

THIS Makes the BEST Business Intelligence Solution


Having spent most of my professional career in the business intelligence and analytics space, I am qualified to share my honest opinion of what constitutes a good business intelligence solution.

We live in a world where competition is fierce, with the likes of Microsoft, Qlik, Tableau (Salesforce) and many more. I’ve spent a good share of time exploring each product.

Here’s my take on what makes a good business intelligence solution:

Seamless Data Ingestion

The ability to load data without any hiccups – during the initial connection process or during the automated, recurring data refresh cycles – would be a godsend. BI platforms sometimes use their own proprietary logic to transform queries and execution plans, which occasionally results in time-outs or other conflict errors.

Development Scalability

I can’t stress enough the number of times I’ve built something nice and fancy that worked well as a visualization (such as a tile-view KPI), and then had to replicate it – only to find out that there is absolutely no way to copy and paste the component/widget. Some BI solutions require you to rebuild it from scratch. Thankfully, others do allow you to copy and paste components for scalability, and simply replace the values with the additional measures you’d like to add.

Report Distribution

Report Distribution is definitely a requirement for a business intelligence solution to be effective. Business users and leadership want to be able to monitor the business without having to jump through hoops. What better way to accomplish this than having periodic reports sent to their email inbox?

This alone does not satisfy the requirement, however. In a perfect world, the emailed report should render exactly as the dashboard was developed and would look, should one choose to visit it themselves. The report should be a well-rendered body image that is not distorted, and additionally be attached as a file – such as a PDF – so that it is downloadable and printable.

An additional nice-to-have, since I’ve come across just about every business requirement you can think of, is being able to send report distributions dynamically. That is, sending the report only if certain criteria are met – such as whether the data refreshed successfully as designed, or whether the data presented is of significance to the respective recipient(s).

Self-service analytics

We all preach how self-serve BI tools are a given with all platforms. However, this is simply not true. There is a lot that should go into what constitutes self-service analytics.

First, the data and fields need to be simple and cohesive for the audience, and the audience needs to be competent enough to navigate the BI platform with minimal training. A well-designed product is one that doesn’t require much training: the BI platform for self-service use should be straightforward and simple to use. Additionally, it is incumbent on developers to ensure all reporting and dashboards in the organization follow the same design language and principles, setting an organizational standard.

While keeping simplicity in mind, the platform also needs to be feature-rich. Different audiences and users have varying preferences as to how they wish to analyze information – whether through tables, Excel sheets, or visualizations. A good BI tool must offer the ability to consume information through any of these views, per the user’s preference. Additionally, a very effective nice-to-have is a dynamic hierarchy drill-down for the end user. Want to see sales numbers by product, by region? Sure. Want to mix it up and see sales numbers by region, THEN by product? You got it – just drag and drop the dimensions as you wish.

Ultimately, there is no single perfect business intelligence tool out there. I can confirm this, as I’ve used most of them: for each one, when one feature works well, another falls short. However, my qualified opinion from many years of working in this space is that the above-mentioned features should be the foundation of how you select the best business intelligence solution for your organization.

June 30

SQL RANK, DENSE_RANK, ROW_NUMBER

SQL (Structured Query Language) is a very powerful tool to interact with databases as we all know.

Some more advanced functions, such as window functions in particular, are even more powerful! A window function, as the name may suggest, enables you to compute a row-level assignment based on a defined range or “window”. You can Google it if you’re still lost…

I want to primarily focus on 3 window functions in this write-up that are often [mistakenly] used interchangeably. It is very important to understand what differentiates RANK vs DENSE_RANK vs ROW_NUMBER, and when to use which one.

RANK

RANK assigns a sequential number to a record based on the partition and order by operators. For example, if we use RANK based on Price, each row is ranked by Price. In doing so, if multiple records share the same Price, they share the same rank. The immediate next row with a different Price gets a different rank – however, it is incremented by the number of rows the prior rank had, leaving gaps.

DENSE_RANK

DENSE_RANK works similarly to the RANK function, in that it assigns a sequential number to a record based on the partition and order by operators. Where it differs is on the immediate row following a set of records that share the same rank: the DENSE_RANK window function assigns the very next sequential number after the preceding rank, unlike RANK. As a result, there are no gaps when using DENSE_RANK.

ROW_NUMBER

ROW_NUMBER brings something different and potentially more useful to the table. This assigns a sequential number (based on price for example) to each record, however no 2 records share the same row number. For example, if 2 or more records share the same price, they will still have a sequential rank (such as 1, 2, 3) instead of (1, 1, 1) if we had used RANK or DENSE_RANK instead.

Let’s suppose we have a dataset consisting of customer_id, item_id and price, and we’ve applied all 3 window functions as:

row_number() over (order by price)

rank() over (order by price)

dense_rank() over (order by price)

Here is the output we’d expect for each.

Customer_ID | Item_ID | Price | ROW_NUMBER | RANK | DENSE_RANK
1           | 228     | 200   | 1          | 1    | 1
1           | 255     | 200   | 2          | 1    | 1
1           | 212     | 200   | 3          | 1    | 1
2           | 900     | 500   | 4          | 4    | 2

Each of these 3 window functions are powerful and useful based on different scenarios. It’s important to understand when and how to use which one for proper results.
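To make the comparison concrete, here is a small, self-contained sketch using Python’s built-in sqlite3 module (window functions require SQLite 3.25+); the table name and sample data are made up to mirror the example above:

```python
import sqlite3

# Hypothetical sample data mirroring the table above.
rows = [(1, 228, 200), (1, 255, 200), (1, 212, 200), (2, 900, 500)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (customer_id INT, item_id INT, price INT)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)

# Apply all three window functions, ordered by price.
result = conn.execute("""
    SELECT customer_id, item_id, price,
           ROW_NUMBER() OVER (ORDER BY price) AS row_num,
           RANK()       OVER (ORDER BY price) AS rnk,
           DENSE_RANK() OVER (ORDER BY price) AS dense_rnk
    FROM sales
    ORDER BY row_num
""").fetchall()

for row in result:
    print(row)
```

Notice how RANK jumps from 1 straight to 4 while DENSE_RANK moves from 1 to 2 – the gap behavior described above.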

May 31

Full Outer Join in MySQL

If you’re familiar with databases and come from the Microsoft SQL Server world, you must love the ability to perform full outer joins when putting a dataset together.

Conversely, if you’re from the MySQL world, you must have no idea what I’m talking about since you cannot use full joins in MySQL.

Some Context

Let’s first understand the benefits of a full outer join to see why this would be valuable. Feel free to look at the video below to grasp an understanding of the intended output: Full Outer Join: SQL Tutorial with Example

How to Emulate a Full Outer Join

Now, you may be interested and want to replicate the full outer join in MySQL, but since it doesn’t offer support for this type of join, you will need to develop a workaround yourself.

Luckily, here is a straightforward approach to accomplishing this, using a combination of a left join, a right join and a union.

SELECT A.VALUE_A, B.VALUE_B
FROM A
LEFT JOIN B
  ON A.VALUE_A = B.VALUE_B
UNION
SELECT A.VALUE_A, B.VALUE_B
FROM A
RIGHT JOIN B
  ON A.VALUE_A = B.VALUE_B;

The way this approach works is simple: it first performs a left join as one dataset, then a right join as another dataset using the same query. The first gives us “everything in the first table, even if the second table doesn’t have matching records”; the second gives us “everything in the second table, even if the first table doesn’t have matching records”. Then, by performing a UNION, we stack these 2 datasets together – de-duplicating the rows that matched on both sides – and effectively combine them into one comprehensive dataset: “everything in the first and second table, regardless of whether or not they have matching values in their counterpart table”.

From an efficiency standpoint, yes, you are in fact querying twice as opposed to once which may not be ideal, however for the right use-case, this may be the right solution.
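If you want to sanity-check the pattern quickly, here is a sketch using Python’s built-in sqlite3 module with two made-up tables A and B. Since older SQLite also lacks RIGHT JOIN, the second leg is written as B LEFT JOIN A, which is equivalent (and works in MySQL too):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A (VALUE_A INT);
    CREATE TABLE B (VALUE_B INT);
    INSERT INTO A VALUES (1), (2), (3);
    INSERT INTO B VALUES (2), (3), (4);
""")

# Emulated FULL OUTER JOIN: a left join in each direction,
# with UNION de-duplicating the rows that matched on both sides.
rows = conn.execute("""
    SELECT A.VALUE_A, B.VALUE_B
    FROM A LEFT JOIN B ON A.VALUE_A = B.VALUE_B
    UNION
    SELECT A.VALUE_A, B.VALUE_B
    FROM B LEFT JOIN A ON A.VALUE_A = B.VALUE_B
""").fetchall()

print(rows)  # unmatched rows carry NULL/None on the missing side
```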

It’s really that simple!

April 30

Correctly Storing DB Credentials In PHP

There’s a way to do things, and then there’s a right way to do things. If you’ve been accustomed to storing your database login credentials inside or adjacent to your public-facing web application, you may benefit from this quick read.

In this write-up, I will show you the recommended approach to take to securely store and access your database credentials when working with PHP.

The Config File

First things first: in this example, I am using my shared hosting plan, and I assume you are too. What this means is that you will be storing your web files somewhere under the public_html directory, which makes them publicly accessible. Our goal is to make a “private” folder outside of public_html that is inaccessible publicly, but accessible by our application residing in public_html.

Inside the private directory, we will create a file called config.ini that will store these credentials.

Enter your database credentials in this config.ini file as follows (obviously replacing the values in quotations with your own values).

config.ini

[database]
servername = "localhost"
username = "test_user"
password = "test_p4ssw0rd!"
dbname = "test_db"

Integration

Now, we can work on our PHP application under the public_html directory and point to the config file to parse the database credentials. Alternatively, you can do this in a separate file and then include that file in your main PHP application file, but for this short example we’ll just call it from the app directly. For more information on the parse_ini_file function, you can refer to the PHP documentation.

app.php


<?php
$config = parse_ini_file('../private/config.ini');

In the same file, we can use PDO and reference the elements of $config to build a connection string and query our database.

try {
    $_DB = new PDO(
        "mysql:host={$config['servername']};dbname={$config['dbname']}",
        $config['username'],
        $config['password']
    );

    $getIDs = $_DB->prepare("SELECT * FROM employee_billing");
    $getIDs->execute();
    $getEmpIDs = $getIDs->fetchAll();

    foreach ($getEmpIDs as $emp) {
        echo "{$emp['employee_id']}<br>";
    }
}
catch (PDOException $e) {
    die("Error - connection failed. Please check your credentials or contact your administrator.");
}

And this is all it takes to get your data to appear!

If you followed along, we just stood up a PHP application that uses a private, secure config file to supply its database credentials for connectivity.

March 31

Why Amazon’s Shelving Practice Works

Amazon has built a pretty robust reputation for exceptional logistics and customer service.

This is partly credited to their robust, unorthodox organization – or lack of organization – of merchandise on their warehouse shelves. Rather than clustering items by SKU or category in a certain bin or location, items are typically scattered throughout a matrix of aisles. For example, you may find a toothbrush and a banana in the same bin of an aisle, rather than finding them separately in a more organized fashion.

The Irony.

One would think this is quite odd and would lead to a very impractical and inefficient logistics practice. On the contrary, this is likely more efficient than conventional picking systems, and the reason is quite simple – time savings. Let’s look at it this way: suppose you visit a grocery store, and you know you want to purchase bananas…and a toothbrush. You’d walk towards the oral care aisle somewhere in the inner area of the store. Then you’d walk towards the produce section in the outer area of the store before going to the checkout. For all we know, these 2 items may be on completely opposite sides of the store. Imagine how much walking you’d have to do just to pick up these 2 items.

Now on the flip side, let’s assume this grocery store stocked toothbrushes and bananas right next to each other on aisle 5, and you presumed this would be the case before walking in. You would need to simply walk to that one aisle, and out to checkout and be done. This would translate to significantly less distance walked, and less time spent picking the items.

The Amazon picking system works much like this: pickers, through their devices, are directed to the exact locations they need to travel to – and thanks to this seemingly arbitrary stocking method, they can pick a variety of different products within a small number of aisles, minimizing the aisles they need to explore and reducing picking time.

The reason I’m writing about this is because a personal experience helps me relate to this phenomenon.

Folding clothes.

When I used to live alone, I would do my laundry and leave my clothes in a pile of mess. Conventional wisdom would typically frown upon this. However, now that my significant other folds the laundry, and organizes them into their dedicated areas throughout the room and various drawers and closets, I’ve come to realize that folding and storing clothes in a neat fashion is counterproductive. It takes a substantial amount of time to not only fold clothes, but to put an outfit together and pick each piece from their dedicated locations. I know this sounds silly, but give it some thought. Alternatively, the pile of unfolded clothes was far more time-efficient. At a single glance, I could easily see all pieces of an outfit lumped together, and just pick what I need from this centralized location without having to go on a quest.

It may appear unprofessional, or irresponsible, or impractical; but the fact of the matter is, the lump of unfolded laundry is much more efficient, similar to Amazon’s unorganized shelving practice.

February 28

Why the Raspberry PI is Still Awesome

By now, I’m sure most of you have already heard of the Raspberry PI. If you haven’t, Google it.

For me, the Raspberry PI is an affordable and versatile computing device, ideal for a wide spectrum of users. I initially purchased my PI in 2015, and have used it over time for various software and server development projects. Here are several great modern uses for the PI…

Desktop

The PI can be used as a normal desktop to perform day-to-day duties, such as checking e-mail, taking notes, etc. While the board comes with various ports and available hardware that can make it a portable device (i.e. connect a screen and an integrated power supply and you can turn this bad boy into a wireless tablet), it can also be set up as a traditional desktop computer. An HDMI port and multiple USB ports enable you to hook up this minuscule board to an external monitor, mouse and keyboard, as you normally would with your other off-the-shelf PCs.

Dashboard

The PI can be used to serve as an always-on dashboard, powering an external monitor. I’ve used this setup to provide realtime visualizations and analytics of an online business I once set up.

Home Automation

The PI can be used as a home automation powerhouse. A friend and I installed and configured Hassbian to run on the PI. This build helped serve as a centralized home automation hub. We were able to control and automate when and what happened on connected home automation devices, from Smart TVs to lighting and voice assistants.

Automating Scripts

The PI can be used as a general server to run and automate scripts. I’ve personally employed the PI as a service to mine, scrape and load data. This proved to be a very low-cost but highly reliable ETL service for a variety of different projects.
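As a purely illustrative sketch (not my actual project), the kind of scheduled job I mean might look like this in Python. The source data is hard-coded here, whereas on the PI it would come from an API or scraper, and the job itself would typically be kicked off by cron:

```python
import sqlite3

# Illustrative extract-transform-load job; names and data are made up.

def extract():
    # In practice: fetch from an API or scrape a page.
    return [{"symbol": "ABC", "price": "10.50"}, {"symbol": "XYZ", "price": "22.00"}]

def transform(records):
    # Normalize into (symbol, float price) tuples.
    return [(r["symbol"], float(r["price"])) for r in records]

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS quotes (symbol TEXT, price REAL)")
    conn.executemany("INSERT INTO quotes VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")  # on the PI, a file path on the SD card
load(transform(extract()), conn)
print(conn.execute("SELECT COUNT(*) FROM quotes").fetchone()[0])
```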

Web Server

Saving the best for last, the PI can be used as a WLAN web server for testing purposes. I recently worked with a start-up where I had to develop on a local deployment rather than tampering with the production environment. It served as a great, low-cost and low-resource alternative to a traditional server or local deployment. I had it running in an always-on fashion, so rather than using my desktop PC as a server, or my MacBook with a local server, the PI itself was the server, and I could connect remotely or access the files via FTP to work my magic from any machine of my choice within the same network.

The Verdict

To summarize, I’ve always hated raspberries. But the Raspberry PI is an extremely high-value product when you look at how much it costs and what its capabilities are, even in 2021 when nobody is talking about it anymore.

January 31

OK GOOGLE…

Home automation has become a must-have when you think of luxury, comfort and convenience inside a home. It enables you to connect and control, over Wi-Fi, various compatible devices across your home with just a vocalized wish. In my home, we’ve had lightbulbs, outlet switches, thermostats, doorbells, clocks and Smart TVs connected, just to name a few. Google Home and Amazon Alexa are a couple of the most popular products that offer this service. Though, there is much more opportunity to integrate more devices and to streamline a centralized hub that regulates said devices, which I believe would really solidify this technology and help drive more adoption. As a tech enthusiast, I must confess I have employed both the Google Home Assistant and Amazon Alexa.

Rant

I must say, I am not biased towards one platform over the other; however, there is a huge difference in the way we interact with the Google Home Assistant versus Amazon Alexa. It starts when we first attempt to awaken the assistant by greeting it by name. For the Google Home Assistant, it is a somewhat formal “Hey Google” or “OK Google”. For Amazon Alexa, it is a simple “Alexa”. In my opinion, that unnecessary and unnatural requirement to use 2 words instead of 1 greatly diminishes the user experience of GHA compared to AA. Every time I wish to interact with GHA, I HAVE to use 2 words, whereas with Alexa, I just say “Alexa”, followed by my command – a very natural behavior when speaking with others.

To me, while this may have its advantages, it is just bad design. It is unnecessary and thus inefficient. As a user, it requires more input from me, every single time I wish to awaken the device for interaction. While I haven’t heard anyone else complain, it strikes me as an inconvenience. At a personal level, one critical component of designing my solutions is ensuring the end user has to interact as little as possible to begin their actual journey. For example, when I build websites, I do whatever I possibly can to ensure the end user achieves the highest ROI, in that they get the best, quality content they sought out with the least amount of effort on their part. This may include caching user inputs, or reducing the number of clicks needed to navigate to a particular page for a faster experience.

Perhaps Google’s intention with this was to reduce false alarms, since whoever in the room says “Hey Google” likely intends to awaken GHA. But maybe they could have simply created a new, unique name to prevent errors and eliminated the need to say “Hey”.

Regardless, this is just my opinion and hopefully serves as some constructive feedback for future updates. OK Google?

December 31

Naming Variables – my camelCase story

One of the many arts – and etiquettes – of coding is your choice of naming convention for your variables.

I say art because it’s partially aesthetics. The way you name your variable can make your line look very beautiful or very ugly. I say etiquette because it’s really just professional and respectful best practice to name your variable in a way where it’s easier for another person to follow the flow of your code.

My variables typically look like the following:

i_item_name_str

Examining this, there are typically at least 3 parts to my variables. The [i] here indicates that this is an input variable – in this case, a parameter being processed from within a function (thus, input). So right away, the beginning of the variable tells you where it came from (i.e. is it an input? an output? or just a non-derived variable?). The [item_name] represents what this variable is, which in this case is the name of the item being passed. The [str] helps identify its type, so right away I know this is a string (or varchar) field.

Now, we all know there are certain standardized means of naming variables, especially for certain languages. One of the most common methods is camel casing. That is, instead of i_item_name_str, it may look something like itemName. While this is absolutely fine and visually sexy, I’ve come across some unanticipated obstacles in my career that deemed the camel casing methodology a risky one, and one that I am hesitant to use.

As a solutions architect, I work with many different layers of technologies. A project may consist of working with a combination of MySQL and PHP. While you may be able to get away with camel casing in PHP, there was an instance where I had used it throughout my procedure names and variables in MySQL. At a later time, I had to migrate to another server, after which my applications all broke. When I ran some diagnostics, I came to realize the migration had lowercased everything, ultimately making everything unusable since the camel casing was no longer intact! Looking back, I’m almost positive there was a server configuration setting (MySQL’s lower_case_table_names comes to mind) that I could have used to preserve case sensitivity. But now that I’m traumatized, I’d rather never take that risk again and never put myself in a position to do unnecessary work. At the end of the day, i_item_name_str is almost just as legible as itemName, but less prone to fail across platforms and migrations. That is, until we come across one that prohibits the use of underscores. Oh boy.

So tell me, what is your single most used variable naming convention and why?

November 30

3 Tests to Identify the Best Tech Solution

By profession and passion, I am, at heart, a problem solver as most of us are in various walks of life.

When it comes to solving a complex problem or overcoming an obstacle, it’s important to take 3 elements into consideration when shortlisting solutions. It’s equally important to consider these as qualifiers in the same sequence as listed below, as the first element would serve as a Minimum Viable Product (MVP), and the latter two build upon it for a more intelligent solution before we even get into optimization.

Does it solve the problem at hand?

Your solution needs to solve the problem that was presented. Maybe you innovated something marginally better than the outgoing process, or went the extra mile and came up with an awesome solution that does fancy things – but does it solve the actual problem that was presented? Passing this test is absolutely critical, and logically that makes sense. This is why it is necessary to brainstorm “What am I solving for?” as the very first step. Identify the actual problem and focus on the NEED before the NICE-to-HAVE.

Is it sustainable?

So now that we have a solution in mind that solves the problem presented, the next area to focus on is sustainability. How robust is your solution? Does it have multiple dependencies, such as the use of certain licensed software or human intervention which may be subject to excessive maintenance or intervention? In other words, is this a truly autonomous solution that can run on its own; that has failsafes built-in and the ability to not only alert developers during failure but to correct itself? If not, that’s OK, since the ease of doing so varies based on the nature of the problem, but this needs to be the vision when developing the solution. It needs to be designed with autonomy in mind, so that it is developed with dynamic abilities, and less dependence on manual intervention.

Is it scalable?

If you’ve come up with a solution by now that passes the first two tests presented above, passing this third one will likely deem your solution the holy grail of solutions. The third test is scalability. Again, this is subject to the nature of the problem, but in most cases, you can’t go wrong designing your solution with scalability in mind. What I mean is: think of your solution not just as a solution to the problem presented, but as one dynamic enough to cater to other, similar problems. In doing so, your solution serves as a re-usable template – just by passing a few parameters or settings to it, it becomes an intelligent, plug-and-play, repurposable solution.

Conclusion

In summary, we’ve covered the 3 sequential tests that I consider necessary for identifying the most optimal high-level solutions, before getting our hands dirty with development. It’s important to understand that the applicability of these considerations depends widely on the scope and type of problem and the technologies you have available to solve it. Ultimately, it’s just necessary to design the solution with this mindset as much as you can, even if you can’t apply all of the principles completely.

October 31

I built a dashboard to calculate when you can retire

Using the awesome Tableau application once again, I was able to create an interactive visualization depicting the point in time an individual can expect to effectively converge or replace their active income with their passive investment income (generated from compounded earnings and investments).

One of the things I’m particularly proud of is that this dashboard uses no data sources whatsoever. Everything is self-contained in the workbook, which is simply an increasing sequence of “years” from 2020 upwards, plus several calculated fields.

My goal was to keep this the best of all worlds between: simple, interactive and informative.

Users of this dashboard likely have varying levels of income and varying spending habits, so at the very least the dashboard should require 2 inputs – Gross Income and Annual Savings – to better personalize results.

The output, in its simplest visual form, should be 2 lines that intersect: Gross Annual Income and Change in Net Worth (synonymous with earnings from passive income/growth). What makes this challenging is that, in reality, neither the gross annual income nor the compounding passive income will be stagnant over time. Both tend to trend upwards due to income raises and accumulated investments, respectively. In short, here is how the two output lines were generated.

Annual Adjusted Income

This is the income generated from work. It takes the input, Annual Gross Income, and applies a raise of 2.25% compounded annually. There is also a somewhat arbitrary inverse “Year Multiplier” I factored in, intended to taper off that 2.25% raise over time, since income from wages tends to slope downward late in a career (i.e. finding a new job as you approach retirement years).

Net worth Gains

Before we get to Net Worth Gains, we have to define Net Worth itself. This is a running sum of our Annual Savings (the input provided by the user), grown by the Rate of Return on investments (defaulted to 7%).

For the Net Worth Gains calculation, we take the principal starting balance of Net Worth for the current year, multiplied by the Rate of Return on investments – again, defaulted to 7%.

With that being said, plotting “NW Gains” and “Annual Adjusted Income” as dual-axis measures, in aggregation, against years as the dimension enables us to achieve a converging visualization depicting the year in which our investment returns begin to exceed our earned income.
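For anyone curious about the underlying math, here is a minimal Python sketch of the same convergence logic. It leaves out the “Year Multiplier” taper; the input figures are assumptions for illustration, while the 2.25% raise and 7% return match the defaults above:

```python
def crossover_year(gross_income, annual_savings,
                   raise_rate=0.0225, return_rate=0.07, start_year=2020):
    """Return the first year in which passive gains exceed adjusted income."""
    net_worth = 0.0
    income = float(gross_income)
    for year in range(start_year, start_year + 100):
        gains = net_worth * return_rate      # "NW Gains" for the year
        if gains > income:
            return year
        net_worth += annual_savings + gains  # savings plus compounded returns
        income *= 1 + raise_rate             # annual raise, compounded
    return None  # no convergence within 100 years

# Assumed example inputs: $80k gross income, $30k saved per year.
print(crossover_year(80_000, 30_000))
```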

Hope you enjoyed this small write-up as much as I did building it!

Leave a comment below – when can you retire?

September 30

Time-Traveling with VR

Virtual Reality and Augmented Reality have been gaining traction and adoption at exponential rates. In my opinion, they will be major disrupters for many, if not all, industries – particularly gaming, fitness and hands-on skills training, for example.

But one area that particularly interests me is video recording. I enjoy recording family events, precious moments and even my driving trips (for liability protection, with the use of a dash cam). When dash cameras first started becoming popular in the USA about a decade ago, I felt they did the job but didn’t necessarily capture the entire picture. At best, cameras claim to be wide-angle, and unfortunately that’s their hard limitation.

Since then, we’ve seen 360 cameras introduced. Most of them work by utilizing dual cameras, 180 degrees each, whose videos are then stitched together for 360-degree interactive viewing.

It would be exciting to take that another step further. Imagine you are at a family event, or on an adventure. Your friends, family or subjects of interest are not necessarily in the primary focus area of the camera. Perhaps they are behind you, to the left, or at the very edge of the 180-degree seams of your 360-degree surroundings. When you go to recollect that moment, it’ll be far from the actual, original view. There are obvious reasons for this. By using just two cameras, even with modern, sophisticated software to stitch frames together, you’ll find that the image is skewed, distorted or even relatively lower resolution in certain areas. What you REALLY want is to be able to relive that very moment you have on video, without the obvious missing gaps and distortions.

Luckily, this is possible with the integration of two technologies: a VR kit and a better 360 camera. The solution would be to increase the number of cameras in the 360-degree camera unit from two to, say, four, so that each camera covers a 90-degree slice. Although this may require more processing power to stitch the frames together, the image would come out cleaner, with fewer distortions and less need for complex computations to account for and render the angles to fit a square frame. So now you have a 360-degree camera perfectly capable of capturing, in full resolution, all angles in your immediate 360-degree surroundings.
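As a back-of-the-envelope check on the lens math: for the stitching software to have anything to blend at the seams, each lens actually needs a field of view slightly wider than 360 divided by the number of cameras. Here is a tiny Python sketch of that overlap budget – the lens specs in the examples are hypothetical, not from any real product:

```python
def seam_overlap(num_cameras, fov_degrees):
    """Degrees of overlap available at each seam when num_cameras
    lenses are spaced evenly around a 360-degree rig."""
    total = num_cameras * fov_degrees
    if total < 360:
        raise ValueError("lenses do not cover the full circle")
    return (total - 360) / num_cameras

# A dual-lens rig with 200-degree fisheyes vs. a four-lens rig
# with 110-degree lenses (hypothetical specs):
print(seam_overlap(2, 200))  # 20.0 degrees per seam
print(seam_overlap(4, 110))  # 20.0 degrees per seam
```

Note that four lenses at exactly 90 degrees each would leave zero overlap, so a real four-camera unit would want lenses a bit wider than 90 degrees.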

Couple this with a VR headset and sensors, and what you get is the ability to view your recording as if you’re back at the moment of that recording. As you turn your head to the right or left, or turn around completely, you gain the ability to literally re-live what that moment was like.

This ability, to view all the expressions and interactions of the people around you from a first-person view, on demand, will certainly be a game-changer for both the future of VR and 360 cameras.

August 31

The End of Humans.

***I apologize in advance if the language used in this article is hurtful to those using or affiliated with those using any sort of implants or prosthetics, or those with any sort of disability. My intention is solely to pose a thought-provoking question. These are just my thoughts, after all. Reader discretion is advised***

We’ve heard many times – one way or another – that human life is at risk. We’ve seen it in movies and we’ve seen some of the greatest minds in the world express how Artificial Intelligence needs to be handled with care, or it WILL pose a risk to mankind.

But that’s not what this write-up is about.

This is about a question I couldn’t help but ponder. Just a couple of days ago (08/28/20), Elon Musk debuted a demonstration of the Neuralink brain implant. Long story short, he demonstrated the experiment on a pig, but the underlying idea is that a coin-sized chip implanted into the brain can govern your state of mental wellness. That’s HUGE! And that’s when it got me thinking. This isn’t the first case of using software or robotics, albeit an insanely advanced one, to improve human life. There are numerous other examples. We have witnessed the use of prosthetics and orthotics to replace and reinstate the functions of human organs and limbs. This has not only resulted in improved mobility and quality of life for humans, but has also significantly improved human longevity.

But wait, that’s cheating…

This is when my mind started to wander. The advancement of technology from prosthetics to brain-implanted chips could one day mean humans would be invulnerable to most types of disabilities and medical or mental health issues that are awfully common today. But this is inhuman. Humans, by nature, as mammals and as animals, are meant to be part of a natural life cycle. We are born, we grow, we live, we diminish and we die. As part of our gift of being human, we can innovate and build technology to improve these stages of life.

But, once we instead cross the line and start using technology to alter or prolong our stages of life artificially, we aren’t humans anymore. We’re Cyborgs.

The underlying question.

This begs the question: if chip implantation (and I’m not just talking about as-needed brain implants; I mean mandated chip implants at birth) and prosthetics become widely adopted, which I have a strong feeling they will, and nearly the entire world population at that point carries one form or another of implanted technology, does that era mark the end of mankind and, in turn, propel us into a cyborg society?

July 31

The 2020 Micro-wave

The year 2020 has been a unique one thus far, to say the least.

The COVID-19 pandemic has wreaked havoc on the planet’s population. It has proved to be an eye-opener, showing that mankind is and always will be at the mercy of nature, despite all the technology we have today to combat it.

Coronavirus is absolutely terrible and evidently life-threatening to those who are vulnerable. However, it has also done some good. It has accelerated the inevitable push for flexible work environments – that is, the ability of employees to work from wherever (home). Of course, this isn’t applicable everywhere in the world; it applies predominantly in developed countries with the technical infrastructure and culture to support it. More and more employers are now allowing temporary or permanent work-from-home arrangements, and this will only grow with time. I can go over the benefits in another post, but working from home is a no-brainer for many employees. You save commute time, you can complete errands during breaks, you can often wear almost anything without being judged, and you have a sense of not being watched. It’s almost like you’re your own boss, as long as you work your hours and meet and exceed expectations!

There isn’t necessarily a need to feel like you’re not being watched, but part of the joy of being an autonomous employee (should your role allow it), is not having someone breathing down your neck and watching your every move.

The unfortunate news for us ethical, responsible employees who don’t take things for granted (this is why we can’t have nice things…) is that many workers are abusing these privileges. Even at my current employer, I’m hearing nothing but positive feedback all around. The employees are happy for the most part, and our leadership is becoming more open to allowing this flexibility down the road, since everyone is doing such a good job and it’s also cutting some corporate costs (e.g. travel).

However, I have a very strong feeling that the performance of remote workers will plateau and then curve down to low productivity levels as time goes by. I sense this has already begun, and it is also why many organizations will begin to really crack down and install software on employees’ machines that tracks productivity. To what extent the following is legal, I do not know. But do not be surprised if any or all of your microphone, camera, screen, keystrokes, mouse movement, VPN use, idle time and internet usage are tracked and monitored.

You thought being confronted about walking into the office fifteen minutes late used to be bad? Try getting confronted about what you were doing between 3:05 and 3:25 PM on a Monday because no keyboard activity was recorded during that window. Yeah, THAT is bad. And it is inevitable.

“Remember to block your breaks and lunches and tasks on your calendar and share them with me. Oh and I see you’ve been idle for 15 minutes, do you need help with something?”

-Your Supervisor

What was once a dream-job type of work environment for many may quickly become a modern-age sweatshop through technology. It’s even easier to micro-manage employee activity through technology than it was in the old-fashioned office. Maybe it isn’t significant enough to notice yet, but it will quickly become a reality.

Welcome to this new wave of micro-management.

The 2020 Micro-wave.