July 31

THIS Makes the BEST Business Intelligence Solution

Having spent most of my professional career in the business intelligence and analytics space, I am qualified to share my honest opinion of what constitutes a good business intelligence solution.

We live in a world where competition is fierce among providers like Microsoft, Qlik, and Tableau (Salesforce), among many others. I’ve spent a good share of time exploring each product.

Here’s my take on what makes a good business intelligence solution:

Seamless Data Ingestion

The ability to load data without any hiccups during the connection process, or during automated, recurring data refresh cycles, would be a godsend. BI platforms sometimes use their own proprietary logic to transform queries and execution plans, which occasionally results in time-outs or other conflict errors.

Development Scalability

I can’t stress enough the number of times I’ve built something nice and fancy in the way of visualizations (such as a tile-view KPI) that worked well, only to have to replicate it – and then found there was absolutely no way to copy and paste the component/widget. Some BI solutions require you to rebuild it from scratch. Thankfully, others do let you copy and paste components for scalability and simply swap in the additional measures you’d like to add.

Report Distribution

Report Distribution is definitely a requirement for a business intelligence solution to be effective. Business users and leadership want to be able to monitor the business without having to jump through hoops. What better way to accomplish this than having periodic reports sent to their email inbox?

This alone does not satisfy the requirement, however. In a perfect world, the emailed report should render exactly as the dashboard would look if one visited it directly. The report should appear as a well-rendered, undistorted image in the email body, and additionally be attached as a file – such as a PDF – so that it is downloadable and printable.

An additional nice-to-have, since I’ve come across just about every business requirement you can think of, is being able to send report distributions dynamically. That is, sending the report only if certain criteria are met – such as whether the data refreshed successfully as designed, or whether the data presented is of significance to the respective recipient(s).
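The gating logic above can be sketched in a few lines. This is a minimal illustration, not any particular BI platform’s API; the function and parameter names are hypothetical.

```python
def should_distribute(refresh_succeeded: bool, row_count: int, min_rows: int = 1) -> bool:
    """Send the report only if the data refreshed cleanly AND there is
    something significant to show the recipient (hypothetical criteria)."""
    return refresh_succeeded and row_count >= min_rows

# A failed refresh, or an empty result set, suppresses the email entirely.
send_it = should_distribute(refresh_succeeded=True, row_count=42)
```

In a real platform this check would sit in the scheduler, in front of the render-and-email step.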

Self-service analytics

We all preach how self-serve BI tools are a given with all platforms. However, this is simply not true. There is a lot that should go into what constitutes self-service analytics.

First, the data and fields need to be simple and cohesive to the audience, and the audience needs to be able to navigate the BI platform with minimal training. A well-designed product is one that doesn’t require much training; a self-service BI platform should be straightforward and simple to use. Additionally, it is incumbent on developers to ensure all reporting and dashboards in the organization follow the same design language and principles, setting an organizational standard.

While keeping simplicity in mind, the platform also needs to be feature-rich. Different audiences and users have varying preferences as to how they wish to analyze information – whether through tables, Excel sheets, or visualizations. A good BI tool must offer the ability to consume information through any of these views, per the user’s preference. Additionally, a very effective nice-to-have is a dynamic hierarchy drill-down for the end user. Want to see sales numbers by product, then by region? Sure. Want to mix it up and see sales numbers by region, THEN by product? You got it – just drag and drop the dimensions as you wish.
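Under the hood, that drag-and-drop re-ordering is just aggregation along a user-chosen dimension order. A minimal sketch with made-up sales data (the field names and figures are hypothetical):

```python
def rollup(rows, dims):
    """Total sales along an arbitrary ordering of dimensions,
    mimicking a drag-and-drop hierarchy drill-down."""
    totals = {}
    for row in rows:
        key = tuple(row[d] for d in dims)  # hierarchy path, in chosen order
        totals[key] = totals.get(key, 0) + row["sales"]
    return totals

sales = [
    {"region": "East", "product": "Widget", "sales": 10},
    {"region": "East", "product": "Gadget", "sales": 5},
    {"region": "West", "product": "Widget", "sales": 7},
]

# Same data, two different drill-down orders:
by_product_region = rollup(sales, ["product", "region"])
by_region_product = rollup(sales, ["region", "product"])
```

Swapping the `dims` list is all it takes to flip the hierarchy, which is exactly the flexibility described above.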

Ultimately, there is no single perfect business intelligence tool out there. I can confirm this, having used most of them: where one feature works well, another falls short, on every platform. However, my opinion after many years of working in this space is that the features above should form the foundation of how you select the best business intelligence solution for your organization.

March 31

Why Amazon’s Shelving Practice Works

Amazon has built a pretty robust reputation for exceptional logistics and customer service.

This is partly credited to their unorthodox organization – or apparent lack of organization – of merchandise in their warehouse shelving. Rather than clustering items by SKU or category in a certain bin or location, items are typically scattered throughout a matrix of aisles. For example, you may find a toothbrush and a banana in the same bin of an aisle, rather than finding them stocked separately in a more organized fashion.

The Irony.

One would think this is quite odd and would lead to a very impractical and inefficient logistics practice. On the contrary, it is likely more efficient than conventional picking systems. The reason is quite simple: time savings. Let’s look at it this way. Suppose you visit a grocery store, and you want to purchase bananas…and a toothbrush. You’d walk to the oral care aisle somewhere in the inner area of the store, then to the produce section toward the outer area before heading to checkout. For all we know, these two items may be on completely opposite sides of the store. Imagine how much walking you’d have to do just to pick these two items.

Now, on the flip side, let’s assume this grocery store stocked toothbrushes and bananas right next to each other in aisle 5, and you knew this before walking in. You would simply walk to that one aisle, then out to checkout, and be done. That translates to significantly less distance walked and less time spent picking the items.

Amazon’s picking system works somewhat like this. Pickers, through their devices, can determine the exact locations they need to travel to – and because of this seemingly arbitrary stocking method, they can minimize the number of aisles they need to explore, since a variety of different products can now be picked from a small number of aisles, reducing picking time.
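A toy model makes the intuition concrete. Below, each SKU maps to the aisles that stock it (all item names and aisle numbers are invented); with scattered stocking, one aisle can often cover the whole order, while dedicated-aisle stocking forces one aisle per category.

```python
from itertools import combinations

def min_aisles(order, stock):
    """Smallest number of aisles that together contain every item
    in the order (brute-force set cover; fine at toy sizes)."""
    all_aisles = sorted({a for item in order for a in stock[item]})
    for k in range(1, len(all_aisles) + 1):
        for subset in combinations(all_aisles, k):
            if all(set(stock[item]) & set(subset) for item in order):
                return k
    return 0

# Clustered storage: one dedicated aisle per category.
clustered = {"toothbrush": [3], "banana": [12]}
# Scattered storage: copies of each SKU spread across aisles.
scattered = {"toothbrush": [3, 7, 12], "banana": [12, 2]}

order = ["toothbrush", "banana"]
```

Here the scattered layout satisfies the order from a single aisle (aisle 12 stocks both), while the clustered layout requires two – the walking-distance argument from the grocery example, in miniature.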

The reason I’m writing about this is because a personal experience helps me relate to this phenomenon.

Folding clothes.

When I used to live alone, I would do my laundry and leave my clothes in a pile of mess. Conventional wisdom would typically frown upon this. However, now that my significant other folds the laundry, and organizes them into their dedicated areas throughout the room and various drawers and closets, I’ve come to realize that folding and storing clothes in a neat fashion is counterproductive. It takes a substantial amount of time to not only fold clothes, but to put an outfit together and pick each piece from their dedicated locations. I know this sounds silly, but give it some thought. Alternatively, the pile of unfolded clothes was far more time-efficient. At a single glance, I could easily see all pieces of an outfit lumped together, and just pick what I need from this centralized location without having to go on a quest.

It may appear unprofessional, or irresponsible, or impractical; but the fact of the matter is, the lump of unfolded laundry is much more efficient, similar to Amazon’s unorganized shelving practice.

February 28

Why the Raspberry Pi is Still Awesome

By now, I’m sure most of you have already heard of the Raspberry Pi. If you haven’t, Google it.

For me, the Raspberry Pi is an affordable and versatile computing device, ideal for a wide spectrum of users. I initially purchased my Pi in 2015, and have used it over the years for various software and server development projects. Here are several great modern uses for the Pi…

Desktop

The Pi can be used as a normal desktop to perform day-to-day duties, such as checking email, taking notes, etc. While the board comes with ports and available hardware that can make it a wireless device (connect a screen and an integrated power supply and you can turn this bad boy into a wireless tablet), it can also be set up as a traditional desktop computer. An HDMI port and multiple USB ports let you hook up this minuscule board to an external monitor, mouse and keyboard as you would with any off-the-shelf PC.

Dashboard

The Pi can serve as an always-on dashboard powering an external monitor. I’ve used this setup to provide real-time visualizations and analytics for an online business I once ran.

Home Automation

The Pi can be used as a home automation powerhouse. A friend and I installed and configured Hassbian to run on the Pi. This build served as a centralized home automation hub: we were able to control and automate when and what happened on connected devices, from smart TVs to lighting and voice assistants.

Automating Scripts

The Pi can be used as a general-purpose server to run and automate scripts. I’ve personally employed the Pi as a service to mine, scrape and load data. This proved to be a very low-cost but highly reliable ETL service for a variety of projects.
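The shape of such a job is simple enough to sketch. This is a hypothetical skeleton of an extract-transform-load script of the kind a Pi can run on a schedule (e.g. via cron); the data source and field names are invented stand-ins.

```python
def extract():
    """Stand-in for scraping a page or pulling raw records from an API."""
    return [{"price": "19.99"}, {"price": "5.50"}, {"price": "n/a"}]

def transform(rows):
    """Drop malformed records and cast price strings to floats."""
    out = []
    for row in rows:
        try:
            out.append({"price": float(row["price"])})
        except ValueError:
            continue  # skip records that fail to parse
    return out

def load(rows):
    """Stand-in for writing to a database; here, just return a total."""
    return sum(row["price"] for row in rows)

total = load(transform(extract()))
```

On a Pi, the real versions of `extract` and `load` would hit the network and a database, and the whole pipeline would run unattended on a timer.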

Web Server

Saving the best for last, the Pi can be used as a WLAN web server for testing purposes. I recently worked with a start-up where I had to develop against a local deployment rather than tamper with the production environment. The Pi served as a great, low-cost, resource-light alternative to a traditional server or local deployment. I ran it in an always-on fashion, so rather than using my desktop PC or my MacBook as a local server, the Pi itself was the server, and I could connect remotely or access the files via FTP to work my magic from any machine of my choice on the same network.

The Verdict

To summarize: I’ve always hated raspberries. But the Raspberry Pi is an extremely high-value product when you look at how much it costs and what it can do, even in 2021 when nobody is talking about it anymore.

January 31

OK GOOGLE…

Home automation has become a must-have when you think of luxury, comfort and convenience inside a home. It lets you connect and control, over your Wi-Fi network, various compatible devices across your home with just a vocalized wish. In my home, we’ve had lightbulbs, outlet switches, thermostats, doorbells, clocks and smart TVs connected, just to name a few. Google Home and Amazon Alexa are two of the most popular products offering this service. That said, there is much more opportunity to integrate additional devices and to streamline a centralized hub that regulates them, which I believe would really solidify this technology and drive more adoption. As a tech enthusiast, I must confess I have employed both the Google Home Assistant and the Amazon Alexa.

Rant

I must say, I am not biased toward one platform over the other; however, there is a huge difference in the way we interact with Google Home Assistant versus Amazon Alexa, starting with how we awaken the assistant by greeting it by name. For Google Home Assistant, it’s the somewhat formal “Hey Google” or “OK Google”. For Amazon Alexa, it’s a simple “Alexa”. In my opinion, that unnecessary and unnatural requirement to use two words instead of one greatly diminishes the user experience of GHA compared to AA. Every time I wish to interact with GHA, I HAVE to use two words, whereas with Alexa, I just say “Alexa” followed by my command – a very natural behavior when speaking with others.

To me, while this may have its advantages, it is just bad design. It is unnecessary and thus inefficient. As a user, it requires more input from me, every single time I wish to awaken the device. While I haven’t heard anyone else complain, it strikes me as an inconvenience. At a personal level, one critical component of designing my solutions is ensuring the end user has to interact as little as possible before beginning their actual journey. For example, when I build websites, I do whatever I possibly can to ensure the end user gets the best-quality content they sought with the least effort on their part. This may include caching user inputs, or reducing the number of clicks needed to navigate to a particular page for a faster experience.

Perhaps Google’s intention with this was to reduce false alarms, since whoever in the room says “Hey Google” likely intends to awaken GHA. But maybe they could have simply created a new, unique name to prevent errors and eliminated the need to say “Hey”.

Regardless, this is just my opinion and hopefully serves as some constructive feedback for future updates. OK Google?

December 31

Naming Variables – my camelCase story

One of the many arts – and points of etiquette – of coding is your choice of naming convention for your variables.

I say art because it’s partially aesthetics. The way you name your variable can make your line look very beautiful or very ugly. I say etiquette because it’s really just professional and respectful best practice to name your variable in a way where it’s easier for another person to follow the flow of your code.

My variables typically look like the following:

i_item_name_str

Examining this, there are typically at least three parts to my variables. The [i] represents that this is an input variable – in this case, a parameter being processed within a function (thus, input). So right away, the beginning of the variable tells you where it came from (is it an input? an output? or just a non-derived variable?). The [item_name] represents what this variable is, which in this case is the name of the item being passed. The [str] identifies the variable’s type, so right away I know this is a string (or varchar) field.
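Here is the convention in context – a tiny, hypothetical function using the three-part scheme (origin prefix, descriptive name, type suffix):

```python
def format_label(i_item_name_str, i_quantity_int):
    """'i_' marks function inputs; the suffix ('str'/'int') marks the type."""
    o_label_str = f"{i_item_name_str} x{i_quantity_int}"  # 'o_' marks the output
    return o_label_str
```

Reading any line in isolation, you can tell where each variable came from and what type it holds without scrolling to its declaration.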

Now, we all know there are certain standardized means of naming variables, especially for certain languages. One of the most common methods is camel casing. That is, instead of i_item_name_str, it may look something like itemName. While this is absolutely fine and visually sexy, I’ve come across some unanticipated obstacles in my career that deemed the camel casing methodology a risky one, and one that I am hesitant to use.

As a solutions architect, I work with many different layers of technology. A project may consist of a combination of MySQL and PHP. While you may be able to get away with camel casing in PHP, there was an instance where I had used it throughout my procedure names and variables in MySQL. At a later time, I had to migrate to another server, after which my applications all broke. When I ran some diagnostics, I realized the migration had lowercased everything, making it all unusable since the camel casing was no longer intact! Looking back, I’m almost positive there was a setting in the server configuration (likely MySQL’s lower_case_table_names) that I could have used to preserve case sensitivity. But now that I’m traumatized, I’d rather never take that risk again, and never put myself in a position to do unnecessary work. At the end of the day, i_item_name_str is almost as legible as itemName, but less prone to fail across platforms and migrations. That is, until we come across one that prohibits the use of underscores. Oh boy.
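The failure mode is easy to demonstrate: snake_case identifiers are already lowercase, so a migration that lowercases everything leaves them untouched, while camelCase identifiers silently stop matching the code that references them.

```python
snake = "i_item_name_str"
camel = "itemName"

# Lowercasing is a no-op for the underscore style...
assert snake.lower() == snake
# ...but changes the camelCase identifier, breaking any code
# that still references the original spelling.
assert camel.lower() != camel
```

This is the whole argument for the underscore convention in one place: it is invariant under the transformation that broke the migration.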

So tell me, what is your single most used variable naming convention and why?

November 30

3 Tests to Identify the Best Tech Solution

By profession and passion, I am, at heart, a problem solver as most of us are in various walks of life.

When it comes to solving a complex problem or overcoming an obstacle, it’s important to take 3 elements into consideration when shortlisting solutions. It’s equally important to consider these as qualifiers in the same sequence as listed below, as the first element would serve as a Minimum Viable Product (MVP), and the latter two build upon it for a more intelligent solution before we even get into optimization.

Does it solve the problem at hand?

Your solution needs to solve the problem that was presented. Maybe you innovated something marginally better than the outgoing process, or went the extra mile and came up with an awesome solution that does fancy things – but does it solve the actual problem that was presented? Passing this test is absolutely critical, and logically that makes sense. This is why it is necessary to brainstorm “What am I solving for?” as the very first step. Identify the actual problem and focus on the NEED before the NICE-to-HAVE.

Is it sustainable?

Now that we have a solution in mind that solves the problem presented, the next area of focus is sustainability. How robust is your solution? Does it have multiple dependencies – such as certain licensed software or human intervention – that may require excessive maintenance? In other words, is this a truly autonomous solution that can run on its own, with failsafes built in and the ability not only to alert developers on failure but to correct itself? If not, that’s OK, since the ease of achieving this varies with the nature of the problem, but it needs to be the vision when developing the solution. It should be designed with autonomy in mind, so that it is built with dynamic abilities and less dependence on manual intervention.

Is it scalable?

If you’ve come up with a solution that passes the first two tests above, passing this third one will likely deem your solution the holy grail of solutions. The third test is scalability. Again, this is subject to the nature of the problem, but in most cases you can’t go wrong designing your solution with scalability in mind. What I mean is: think of your solution not just as a fix for the problem presented, but as something dynamic enough to cater to other, similar problems. In doing so, your solution serves as a reusable template – just by passing a few parameters or settings to it, it becomes an intelligent, dynamic, plug-and-play repurposed solution.

Conclusion

In summary, we’ve covered the three sequential tests that I consider necessary for identifying the most optimal high-level solutions, before getting our hands dirty with development. It’s important to understand that the applicability of these considerations depends widely on the scope and type of problem and the technologies you have available to solve it. Ultimately, it’s simply necessary to design the solution with this mindset as much as you can, even if you can’t apply all of the principles completely.

September 30

Time-Traveling with VR

Virtual reality and augmented reality have been gaining traction and adoption at exponential rates. In my opinion, they will be major disruptors for many, if not all, industries – gaming, fitness and hands-on skills training in particular.

But one area that particularly interests me is video recording. I enjoy recording family events, precious moments and even my driving trips (for liability protection, with the use of a dash cam). When dash cameras first started becoming popular in the USA about a decade ago, I felt they did the job but didn’t necessarily capture the entire picture. At best, cameras claim to be wide-angle, and unfortunately that’s their hard limitation.

Since then, we’ve seen 360-degree cameras introduced. Most of them work by utilizing dual cameras, 180 degrees each, then stitching the videos together for 360-degree interactive viewing.

It would be exciting to take that another step further. Imagine you are at a family event, or on an adventure. Your friends, family or subjects of interest are not necessarily in the primary focus area of the camera. Perhaps they are behind you, to the left – or at the very edge of the 180-degree seams of your 360-degree surroundings. When you go to recollect that moment, it’ll be far from the actual, original view. There are obvious reasons for this: using just two cameras, even with modern, sophisticated frame-stitching software, you’ll find the image is skewed, distorted, or of relatively lower resolution in certain areas. What you REALLY want is to be able to relive that very moment on video without the obvious missing gaps and distortions.

Luckily, this is possible with the integration of two technologies: a VR kit and a better 360-degree camera. The solution would be to increase the number of cameras in the 360-degree camera unit from two to, say, four, so that each camera covers a 90-degree field. Although this may require more processing power to stitch the frames together, the image would come out cleaner, with fewer distortions and less need for complex computations to account for and render the angles into a flat frame. So now you have a 360-degree camera perfectly capable of capturing, at full resolution, all angles of your immediate surroundings.

Couple this with a VR headset and sensors, and what you get is the ability to view your recording as if you’re back in time of that recording. As you turn your head to your right, or left, or turn around completely, you gain the ability to literally re-live what that moment was like.

This ability, to view all the expressions and interactions of people around you, from first person view, on-demand, will certainly be a game-changer for both the future of VR and 360 cameras.

August 31

The End of Humans.

***I apologize in advance if the language used in this article is hurtful to those using or affiliated with those using any sort of implants or prosthetics, or those with any sort of disability. My intention is solely to pose a thought-provoking question. These are just my thoughts, after all. Reader discretion is advised***

We’ve heard many times – one way or another – that human life is at risk. We’ve seen it in movies and we’ve seen some of the greatest minds in the world express how Artificial Intelligence needs to be handled with care, or it WILL pose a risk to mankind.

But that’s not what this write-up is about.

This is about a question I couldn’t help but ponder. Just a couple of days ago (08/28/20), Elon Musk debuted a demonstration of the Neuralink brain implant. Long story short, he demonstrated the experiment on a pig, but the underlying idea is that a coin-sized chip implanted in the brain can govern your state of mental wellness. That’s HUGE! And that’s when it got me thinking. This isn’t the first case of using software or robotics – albeit an insanely advanced one – to improve human life. There are numerous other examples. We have witnessed the use of prosthetics and orthotics to replace and reinstate the functions of human organs and limbs. This has not only improved mobility and quality of life, but has also significantly improved human longevity.

But wait, that’s cheating…

This is when my mind started to wander. The advancement of technology from prosthetics to brain-implanted chips could one day make humans invulnerable to most types of disabilities and medical or mental health issues that are awfully common today. But this is, in a sense, inhuman. Humans, by nature – as mammals, as animals – are meant to be part of a natural life cycle. We are born, we grow, we live, we diminish and we die. As part of our gift of being human, we can innovate and build technology to improve these stages of life.

But, once we instead cross the line and start using technology to alter or prolong our stages of life artificially, we aren’t humans anymore. We’re Cyborgs.

The underlying question.

This begs the question: if chip implantation (and I’m not just talking about as-needed brain implants, but mandated chip implants at birth) and prosthetics become widely adopted – which I have a strong feeling they will – and nearly the entire world population ends up with one form of technology or another implanted in them, does that era mark the end of mankind and, in turn, propel us into a cyborg society?