
published by Ben Witten on 2017-06-16 20:44:00 in the "cesium" category
This past week, End Point attended the GEOINT Symposium to showcase the Liquid Galaxy as an immersive panoramic GIS solution to attendees and exhibitors alike.

At the show, we demonstrated Cesium integrating with ArcGIS and WMS, Google Earth, Street View, Sketchfab, Unity, and panoramic video. Using our Content Management System, we created content around these various features so that visitors to our booth could take in the full spectrum of capabilities that the Liquid Galaxy provides.

Additionally, we took data feeds from multiple other booths and displayed their content during the show! Our work showed everyone at the conference that the Liquid Galaxy is a data-agnostic immersive platform that can handle any sort of data stream and present it in a brilliant display. This can be used to show your large, complex data sets in briefing rooms, conference rooms, or command centers.

Given the incredible draw of the Liquid Galaxy, the GEOINT team took special interest in our system and formally interviewed Ben Goldstein in front of it to learn more! You can view the video of the interview here:

We look forward to developing the relationships we created at GEOINT, and hope to participate further in this great community moving forward. If you would like to learn more, please visit our website or email us.


published by Eugenia on 2017-06-15 01:03:45 in the "Metaphysics" category
Eugenia Loli-Queru

It was sometime in October of 2000, about a month after I’d come back from my first visit to the US. I was living in Guildford, UK at the time, working as a web developer and database analyst.

I was sleeping, and suddenly I woke up. I don’t know what time it was, but if I were to make a guess, I’d probably say it was about 2:30 AM or so. I was immediately completely awake, and alarmed. I woke up because I felt something moving on my feet on the bed, on top of the blankets. Whatever it was, it did not have the full weight of a human; it felt more like a dog or cat (there were no pets in the house where I was renting a room at the time, and my door was locked).

I got extremely scared, since I couldn’t understand what it was. I didn’t make a peep, nor did I move. I don’t know if they had paralyzed me, but my impression at the time was that I consciously decided not to move, to make it seem that I was still sleeping. I didn’t open my eyes either.

Within 20-25 seconds of all that happening, something else touched my head. It seems there was more than one of them in the room. As it touched me, within 2-3 seconds I was back asleep like nothing had happened. It’s crazy to think that I’d go back to sleep so easily, since I was wide awake and in sheer terror.

I do not believe this to be a case of sleep paralysis. I have had a few cases of sleep paralysis over the years, and they all happened while in an altered state (in a lucid dream state which clearly differs from my waking state). Instead, that night I was 100% awake.

The next morning, the alarm clock woke me up at my usual time of 7 AM. I sprang out of bed, and I was feeling completely OFF. Like, REALLY off. You know how you might feel sometimes when you’re really tired and you need to sleep 8 hours to recover, but you only sleep about 2 hours or so, and so you wake up and you’re kind of in between two states, feeling really off? That’s how I felt.

I got ready and went to work. I was feeling sick and had pain in my abdomen.

For a month, I was extremely scared to sleep at night. I was sleeping alongside a picture of the Virgin Mary that my mother had bought for me years before to… protect me (lol). I was shaking and shivering for at least an hour every night until sleep would finally arrive. The fear went away gradually after talking to a monk (I believe he was Catholic, but I’m not sure), who used to set up two chairs in the middle of Guildford’s High Street on Saturdays, waiting for people to talk to him if needed (he never spoke in the middle of the street as some fanatics do, he just patiently waited all day for anyone who might have needed to talk). I approached him as he had almost started packing up to leave at the end of the day. He listened to me, he prayed with me, and he told me not to fear. That’s all he did, but he helped me. I thanked him, and I asked him if he had talked to many more people that day. He said I was the only one!

Since then, I haven’t experienced anything else like that consciously. I never had any missing time. If they’re still coming for me, they definitely don’t screw it up anymore. ‘Cause if they screw it up again, I will break their big, fat skulls.

Subconsciously though, it’s another story. I’ve had many dreams and lucid dreams about aliens and UFOs. The most recent one, just a couple of weeks ago: I was lifted out of the house where I currently live, along with a few others, and we were placed in a hangar of a big UFO. We were all waiting for the other shoe to drop when the aliens arrived (about 20 of them), but they had altered our perception so we couldn’t see them in their true form. So the Greys looked like this, kinda. Big wig, big head, large eyes, feminine. I immediately realized that they were projecting screen memories instead of revealing themselves in their true form, and I thought that they looked ridiculous.

I don’t quite find the Greys dreams that interesting though. They’re kind of same-old, same-old. What’s more interesting is that I’ve seen 2-3 times in dreams the same type of spaceships appearing out of nowhere in our skies worldwide, and kind of staying there without communicating. Their shapes are very irregular, without any symmetry; they’re not like the saucers or triangles that we normally associate with the Greys. They’re just strange design-wise, like random scraps of metal. The closest I could find can be seen here. I don’t know who the occupants of these spaceships are; I’ve never seen them.


published by Greg Sabino Mullane on 2017-06-06 21:18:00 in the "Amazon" category

Many of our clients at End Point are using the incredible Amazon Relational Database Service (RDS), which allows for quick setup and use of a database system. Although RDS minimizes many database administration tasks, some issues still exist, one of which is upgrading. Getting to a new version of Postgres is simple enough with RDS, but we've had clients use Bucardo to do the upgrade, rather than Amazon's built-in upgrade process. Some of you may be exclaiming "A trigger-based replication system just to upgrade?!"; while using it may seem unintuitive, there are some very good reasons to use Bucardo for your RDS upgrade:

Minimize application downtime

Many businesses are very sensitive to any database downtime, and upgrading your database to a new version always incurs that cost. Although RDS uses the ultra-fast pg_upgrade --links method, the whole upgrade process can take quite a while - or at least too long for the business to accept. Bucardo can reduce the application downtime from around seven minutes to ten seconds or less.

Upgrade more than one version at once

As of this writing (June 2017), RDS only allows upgrading of one major Postgres version at a time. Since pg_upgrade can easily handle upgrading older versions, this limitation will probably be fixed someday. Still, it means even more application downtime - to the tune of seven minutes for each major version. If you are going from 9.3 to 9.6 (via 9.4 and 9.5), that's at least 21 minutes of application downtime, with many unnecessary steps along the way. The total time for Bucardo to jump from 9.3 to 9.6 (or any major version to another one) is still under ten seconds.

Application testing with live data

The Bucardo upgrade process involves setting up a second RDS instance running the newer version, copying the data from the current RDS server, and then letting Bucardo replicate the changes as they come in. With this system, you can have two "live" databases you can point your applications to. With RDS, you must create a snapshot of your current RDS, upgrade *that*, and then point your application to the new (and frozen-in-time) database. Although this is still useful for testing your application against the newer version of the database, it is not as useful as having an automatically-updated version of the database.

Control and easy rollback

With Bucardo, the initial setup costs and the overhead of using triggers on your production database are balanced a bit by ensuring you have complete control over the upgrade process. The migration can happen when you want, at a pace you want, and can even happen in stages as you point some of the applications in your stack to the new version, while keeping some pointed at the old. And rolling back is as simple as pointing apps back at the older version. You could even set up Bucardo as "master-master", such that both new and old versions can write data at the same time (although this step is rarely necessary).

Database bloat removal

Although the pg_upgrade program that Amazon RDS uses for upgrading is extraordinarily fast and efficient, the data files are seldom, if ever, changed at all, and table and index bloat is never removed. On the other hand, an upgrade system using Bucardo creates the tables from scratch on the new database, and thus completely removes all historical bloat. (Indeed, one time a client thought something had gone wrong, as the new version's total database size had shrunk radically - but it was simply removal of all table bloat!).

Statistics remain in place

The pg_upgrade program currently has a glaring flaw - no copying of the information in the pg_statistic table. Which means that although an Amazon RDS upgrade completes in about seven minutes, the performance will range somewhere from slightly slow to completely unusable, until all those statistics are regenerated on the new version via the ANALYZE command. How long this can take depends on a number of factors, but in general, the larger your database, the longer it will take - a database-wide analyze can take hours on very large databases. As mentioned above, upgrading via Bucardo relies on COPYing the data to a fresh copy of the table. Although the statistics also need to be created when using Bucardo, the time cost for this does NOT apply to the upgrade time, as it can be done any time earlier, making the effective cost of generating statistics zero.

Upgrading RDS the Amazon way

Having said all that, the native upgrade system for RDS is very simple and fast. If the drawbacks above do not apply to you - or can be suffered with minimal business pain - then the native approach should always be the one to use. Here is a quick walk-through of how an Amazon RDS upgrade is done.

For this example, we will create a new Amazon RDS instance. The creation is amazingly simple: just log into the AWS console, choose RDS, choose PostgreSQL (always the best choice!), and then fill in a few details, such as preferred version, server size, etc. The "DB Engine Version" was set to "PostgreSQL 9.3.16-R1", the "DB Instance Class" to "db.t2.small -- 1 vCPU, 2 GiB RAM", and "Multi-AZ Deployment" to no. All other choices were left at their defaults. To finish up this section of the setup, the "DB Instance Identifier" was set to gregtest, the "Master Username" to greg, and the "Master Password" to b5fc93f818a3a8065c3b25b5e45fec19

Clicking on "Next Step" brings up more options, but the only one that needs to change is the "Database Name", which we set to gtest. Finally, click the "Launch DB Instance" button. The new database is on the way! Select "View your DB Instance" and then keep reloading until the "Status" changes to Active.

Once the instance is running, you will be shown a connection string. The standard port is not a problem, but who wants to ever type that hostname out, or even have to look at it? The pg_service.conf file comes to the rescue with a new entry inside the ~/.pg_service.conf file:
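The entry gives the connection parameters a short service name. A sketch of what it might look like (the hostname below is a placeholder; use the endpoint shown in your own RDS console):

```ini
[gtest]
host=gregtest.example.us-east-1.rds.amazonaws.com
port=5432
dbname=gtest
user=greg
```

With this in place, psql service=gtest connects without ever typing the endpoint again.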


Now we run a quick test to make sure psql is able to connect, and that the database is an Amazon RDS database:

$ psql service=gtest -Atc "show rds.superuser_variables"

We want to use the pgbench program to add a little content to the database, just to give the upgrade process something to do. Unfortunately, we cannot simply feed the "service=gtest" line to the pgbench program, but a little environment variable craftiness gets the job done:

$ export PGSERVICEFILE=/home/greg/.pg_service.conf PGSERVICE=gtest
$ pgbench -i -s 4
NOTICE:  table "pgbench_history" does not exist, skipping
NOTICE:  table "pgbench_tellers" does not exist, skipping
NOTICE:  table "pgbench_accounts" does not exist, skipping
NOTICE:  table "pgbench_branches" does not exist, skipping
creating tables...
100000 of 400000 tuples (25%) done (elapsed 0.66 s, remaining 0.72 s)
200000 of 400000 tuples (50%) done (elapsed 1.69 s, remaining 0.78 s)
300000 of 400000 tuples (75%) done (elapsed 4.83 s, remaining 0.68 s)
400000 of 400000 tuples (100%) done (elapsed 7.84 s, remaining 0.00 s)
set primary keys...

At 68MB in size, this is still not a big database - so let's create a large table, then create a bunch of databases, to make pg_upgrade work a little harder:

## Make the whole database 1707 MB:
$ psql service=gtest -c "CREATE TABLE extra AS SELECT * FROM pgbench_accounts"
SELECT 400000
$ for i in {1..5}; do psql service=gtest -qc "INSERT INTO extra SELECT * FROM extra"; done

## Make the whole cluster about 17 GB:
$ for i in {1..9}; do psql service=gtest -qc "CREATE DATABASE gtest$i TEMPLATE gtest" ; done
$ psql service=gtest -c "SELECT pg_size_pretty(sum(pg_database_size(oid))) FROM pg_database WHERE datname ~ 'gtest'"
17 GB

To start the upgrade, we log into the AWS console, and choose "Instance Actions", then "Modify". Our only choices for instances are "9.4.9" and "9.4.11", plus some older revisions in the 9.3 branch. Why anything other than the latest revision in the next major branch (i.e. 9.4.11) is shown, I have no idea! Choose 9.4.11, scroll down to the bottom, choose "Apply Immediately", then "Continue", then "Modify DB Instance". The upgrade has begun!

How long will it take? All one can do is keep refreshing to see when the new database is ready. As mentioned above, 7 minutes and 30 seconds was the total time. The logs show how things break down:

11:52:43 DB instance shutdown
11:55:06 Backing up DB instance
11:56:12 DB instance shutdown
11:58:42 The parameter max_wal_senders was set to a value incompatible with replication. It has been adjusted from 5 to 10.
11:59:56 DB instance restarted
12:00:18 Updated to use DBParameterGroup default.postgres9.4

How much of that time is spent on upgrading though? Surprisingly little. We can do a quick local test to see how long the same database takes to upgrade from 9.3 to 9.4 using pg_upgrade --links: 20 seconds! Ideally Amazon will improve upon the total downtime at some point.

Upgrading RDS with Bucardo

As an asynchronous, trigger-based replication system, Bucardo is perfect for situations like this where you need to temporarily sync up two concurrent versions of Postgres. The basic process is to create a new Amazon RDS instance of your new Postgres version (e.g. 9.6), install the Bucardo program on a cheap EC2 box, and then have Bucardo replicate from the old Postgres version (e.g. 9.3) to the new one. Once both instances are in sync, just point your application to the new version and shut the old one down. One way to perform the upgrade is detailed below.

Some of the steps are simplified, but the overall process is intact. First, find a temporary box for Bucardo to run on. It doesn't have to be powerful, or have much disk space, but as network connectivity is important, using an EC2 box is recommended. Install Postgres (9.6 or better, because of pg_dump) and Bucardo (latest or HEAD recommended), then put your old and new RDS databases into your pg_service.conf file as "rds93" and "rds96" to keep things simple.

The next step is to make a copy of the database on the new Postgres 9.6 RDS database. We want the bare minimum schema here: no data, no triggers, no indexes, etc. Luckily, this is simple using pg_dump:

$ pg_dump service=rds93 --section=pre-data | psql -q service=rds96

From this point forward, no DDL should be run on the old server. We take a snapshot of the post-data items right away and save it to a file for later:

$ pg_dump service=rds93 --section=post-data -f

Time to get Bucardo ready. Recall that Bucardo can only replicate tables that have a primary key or unique index. Tables without one cannot be part of the sync, but if they are small enough, you can simply copy them over at the final point of migration later.
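To see which tables would need that manual copy, you can ask the system catalogs for tables with neither a primary key nor a unique index (a sketch; run it against the old database, adjusting the schema name as needed):

```sql
-- Tables in the public schema with no primary key and no unique index:
SELECT c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname = 'public'
  AND NOT EXISTS (
    SELECT 1 FROM pg_index i
    WHERE i.indrelid = c.oid
      AND (i.indisprimary OR i.indisunique)
  );
```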

$ bucardo install
$ bucardo add db A dbservice=rds93
$ bucardo add db B dbservice=rds96
## Create a sync and name it 'migrate_rds':
$ bucardo add sync migrate_rds tables=all dbs=A,B

That's it! The current database will now have triggers that are recording any changes made, so we may safely do a bulk copy to the new database. This step might take a very long time, but that's not a problem.

$ pg_dump service=rds93 --section=data | psql -q service=rds96

Before we create the indexes on the new server, we start the Bucardo sync to copy over any rows that were changed while the pg_dump was going on. After that, the indexes, primary keys, and other items can be created:

$ bucardo start
$ tail -f log.bucardo ## Wait until the sync finishes once
$ bucardo stop
$ psql service=rds96 -q -f 

For the final migration, we simply stop anything from writing to the 9.3 database, have Bucardo perform a final sync of any changed rows, and then point your application to the 9.6 database. The whole process can happen very quickly: well under a minute for most cases.

Upgrading major Postgres versions is never a trivial task, but both Bucardo and pg_upgrade allow it to be orders of magnitude faster and easier than the old method of using the pg_dump utility. Upgrading your Amazon AWS Postgres instance is fast and easy using the AWS pg_upgrade method, but it has limitations, so having Bucardo help out can be a very useful option.


published by Jon Jensen on 2017-06-02 03:58:00 in the ".NET" category

End Point has the pleasure of announcing some very big news!

After an amicable wooing period, End Point has purchased the software consulting company Series Digital, a NYC-based firm that designs and builds custom software solutions. Over the past decade, Series Digital has automated business processes, brought new ideas to market, and built large-scale dynamic infrastructure.

Series Digital launched in 2006 in New York City. From the start, Series Digital managed large database installations for financial services clients such as Goldman Sachs, Merrill Lynch, and Citigroup. They also worked with startups including Byte, Mode Analytics, Domino, and Brewster.

These growth-focused, data-intensive businesses benefited from Series Digital’s expertise in scalable infrastructure, project management, and information security. Today, Series Digital supports clients across many major industry sectors and has focused its development efforts on the Microsoft .NET ecosystem. They have strong design and user experience expertise. Their client list is global.

The Series Digital team began working at End Point on April 3rd, 2017.

The CEO of Series Digital is Jonathan Blessing. He joins End Point’s leadership team as Director of Client Engagements. End Point has had a relationship with Jonathan since 2010, and looks forward with great anticipation to the role he will play expanding End Point’s consulting business.

To help support End Point’s expansion into .NET solutions, End Point has hired Dan Briones, a 25-year veteran of IT infrastructure engineering, to serve as Project and Team Manager for the Series Digital group. Dan started working with End Point at the end of March.

The End Point leadership team is very excited by the addition of Dan, Jonathan, and the rest of the talented Series Digital team: Jon Allen, Ed Huott, Dylan Wooters, Vasile Laur, Liz Flyntz, Andrew Grosser, William Yeack, and Ian Neilsen.

End Point’s reputation has been built upon its excellence in e-commerce, managed infrastructure, and database support. We are excited by the addition of Series Digital, which both deepens those abilities and allows us to offer new services.

Talk to us to hear about the new ways we can help you!


published by Eugenia on 2017-06-01 23:32:50 in the "Politics" category
Eugenia Loli-Queru

Donald Trump is becoming the Great Deconstructor. By pulling out of international politics in many ways, the US loses influence. And while doing so, it alienates the beach-front liberal states, making them push ahead alone too. Basically, not only does he pull out of the international scene, but he weakens the federal government inside the US as well. Some might think that this is what Republicans wanted anyway; however, I think The Donald is doing so by mistake, without realizing that he’s doing it. Because that definitely doesn’t make America “great again”, unless they’re referring to the America of 1830.

So what’s to happen in the global future? Probably some international chaos for a while, or another country stepping in to fill the US’s shoes. Most say it’d be China, but my money is on Brazil. The EU won’t accept China, and since they can’t fill those shoes themselves, drowning as they are in bureaucracy, they will probably back a rather big but neutral power. Brazil fits that bill.

The Polluter - Art by Eugenia Loli


published by Eugenia on 2017-05-31 17:33:36 in the "Metaphysics" category
Eugenia Loli-Queru

David Jacobs, PhD, is the foremost hypnotist in the UFO world, and the first to assert that since the early 2000s, the alien hybrids have already been living among us. His opinion is that we’ve been invaded “from the inside” by alien beings that happened to discover our world a couple of centuries ago. He finds the whole deal extremely negative.

While I fully agree with Dr Jacobs on the “how”, the mechanics of the abduction phenomenon, I disagree with him on the “why”.

Jacobs: They’re invading us.

Me: In a way, they are. However, when the current 1st gen earth hybrids interbreed, their children won’t necessarily see themselves as alien. I’m sure you don’t see yourself as British, even if your ancestors generations ago might have arrived in the US from there. They might even forget their origins, and consider themselves fully human — despite those upgraded abilities of theirs.

Jacobs: We’ll be a second class species.

Me: Yes, that will suck for a few generations, I give you that. That’s the only truly negative thing I see in this whole affair: how the silent transition will be dealt with. It might be possible to bridge the “ability gap” with technology anyway, until things normalize in the population.

But then again, you must consider the various sub-species of the Greys as well. There are the tall ones, the short ones, the ultra-short ones, the old-looking ones, the praying mantis ones, etc etc. Surely, not all of these types have the same level of consciousness and abilities. And yet, I didn’t see the powerful mantis exterminating the short Greys, or the ultra-short ones. I also never heard of lower-class Greys getting abused, or being unhappy about what they do. Each sub-species has found its place in the system and does the job that it’s intellectually equipped to do, and no more. You might argue that this is because they’re not as individualistic as humans are, but have you put any thought into the possibility that humans are so chaotic exactly because they’re so individualistic and self-centered, and that part of us might need a bit of taming?

Remember your own words: what the Greys fear most in humans is violence.

Jacobs: They arrived a couple of centuries ago.

Me: VERY unlikely. Statistically speaking, it is almost impossible that they’ve been here for only so little time, that only their species has found us in all this time, and that they found us right as we entered our industrial & technological era. Those are too many assumptions to all be true at the same time. A more logical explanation would be that they’ve been here much longer, that they have intervened the same way in the past, and that they are doing so again now.

Jacobs: The abductions are purely physical.

Me: I disagree. While a large number of them are physical, not all of them are. Discounting as “confabulation” so many people’s reports that they could see their sleeping body left behind while being pulled away by the Greys is closed-minded. Another thing to consider: the scope of daily abductions on the planet is massive, and yet very few sightings are reported in comparison. That could mean that a lot of these abductions just don’t happen in our space-time. It’s in fact this inability of yours to think past the material that has left you thinking that:

Jacobs: They’re sinister and they have no right.

Me: If you look at it from a local point of view, they’re sinister alright. Monsters who abuse innocent people (and cows)!

But if you look at it from outer space, they are doing what’s best for the planet. If you haven’t noticed, humans aren’t the only species on the planet, and all the other lifeforms here are suffering because of us.

I don’t believe it’s a coincidence that their program intensified right after our atomic age. In fact, I find this to be the strongest clue as to “why”, and “why now”. To me, that’s a dead giveaway.

See, landing on the White House lawn and dictating policy will not work, because humans can’t comprehend what would be asked of them (e.g. only eat pastured or no animals, stop cutting more trees, stop consuming etc etc), and so they would see this as a literal invasion, and a dictatorship. Terrorism and guerrilla fighting would ensue, just like you see in the traditional alien invasion movies and TV shows.

The only way to fix our predicament is to fix our species from the inside out. If we’re not part of the solution, we’re part of the problem. And homo sapiens is THE problem, because it has gone as far as it can intellectually in cleaning up its own mess. We need a new, derivative species to be able to tackle the problems that homo sapiens created.

And yes, eventually, even that new species will be replaced with something even more capable. I don’t doubt that.

After a species becomes technologically advanced, it should evolve technologically (via transhumanism) rather than naturally, because nature simply doesn’t work as fast as technological progress does. Otherwise an imbalance is created: byproducts of the new technological advancements that aren’t dealt with, because the consciousness level of the species hasn’t evolved in unison with its technology. The two must be paired together to balance each other out. Right now, we’re not capable of evolving ourselves in any major way, and so the Greys are doing it for us. Eventually, we will bastardize ourselves, just like they have done with their own species. It’s either that, or destruction of the species through its inability to control its own advancements.

Let me put it another way: let’s say all the countries in the world come together, and they amass about 20 trillion dollars to tackle one of two things:
1. Fight the alien invasion that replaces homo sapiens, or
2. Reverse global warming, and change the way we live to be sustainable

Analysts would find that splitting the money, 10 trillion for each, wouldn’t work; the whole amount would be required to fight one or the other. So the question arises: which of the two is more important to pursue?

If we choose #1, we’d be nothing but selfish pricks, and there’s no guarantee that we’d win anyway. If anything, global deterioration will continue at a higher rate while we create technologies to fight these guys.

If we choose #2, we fix the planet and we ensure its good health, and we prove these guys wrong for not believing in us in the first place. We do get replaced, but we go out with our head held high.

In the first case, we engage in planetary destruction, while in the second case, we engage in selfless healing. I’m with #2.


published by Kamil Ciemniewski on 2017-05-30 18:18:00 in the "computer vision" category
In the previous two posts on machine learning, I presented a very basic introduction of an approach called "probabilistic graphical models". In this post I'd like to take a tour of some different techniques while creating code that will recognize handwritten digits.

Handwritten digit recognition is an interesting topic that has been explored for many years. It is now considered one of the best ways to start the journey into the world of machine learning.

Taking the Kaggle challenge

We'll take the "digits recognition" challenge as presented on Kaggle, an online platform with challenges for data scientists. Most of the challenges have real money prizes to win. Some of them are there to help us out in our journey of learning data science techniques, as is the "digits recognition" contest.

The challenge

As explained on Kaggle:

MNIST ("Modified National Institute of Standards and Technology") is the de facto "hello world" dataset of computer vision.

The "digits recognition" challenge is one of the best ways to get acquainted with machine learning and computer vision. The so-called "MNIST" dataset consists of 70k images of handwritten digits - each one grayscale and 28x28 pixels in size. The Kaggle challenge is about taking a subset of 42k of them along with labels (what actual number the image shows) and "training" the computer on that set. The next step is to take the remaining 28k images without the labels and "predict" which actual number each one represents.
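The Kaggle data comes as CSV, one row per image: a label column followed by 784 pixel columns. A quick Python sketch (my own illustration, assuming that column layout) of turning such a row back into a 28x28 array:

```python
import numpy as np

def parse_row(line):
    """Turn one 'label,pixel0,...,pixel783' CSV row into (label, 28x28 array)."""
    values = np.array(line.strip().split(","), dtype=int)
    label, pixels = values[0], values[1:]
    return label, pixels.reshape(28, 28)

# A synthetic row: the digit label followed by 784 zero pixels.
row = ",".join(["7"] + ["0"] * 784)
label, image = parse_row(row)
print(label, image.shape)  # 7 (28, 28)
```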

Here's a short overview of what the digits in the set really look like (along with the numbers they represent):

I have to admit that for some of them I have a really hard time recognizing the actual numbers on my own :)

The general approach to supervised learning

Learning from labelled data is what is called "supervised learning". It's supervised because we're taking the computer by the hand through the whole training data set and "teaching" it what the data linked with different labels "looks like".

In all such scenarios we can express the data and labels as:
Y ~ X1, X2, X3, X4, ..., Xn
The Y is called the dependent variable while each Xn is an independent variable. This formula holds for classification problems as well as for regression.

Classification is when the dependent variable Y is so-called categorical, taking values from a concrete set without a meaningful order. Regression is when the Y is not categorical, most often continuous.

In the digits recognition challenge we're faced with the classification task. The dependent variable takes values from the set:
Y = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 }
I'm sure the question you might be asking yourself now is: what are the independent variables Xn? It turns out to be the crux of the whole problem to solve :)

The plan of attack

A good introduction to computer vision techniques is the book by J. R. Parker, "Algorithms for Image Processing and Computer Vision". I encourage the reader to buy that book. I took some ideas from it while having fun with my own solution to the challenge.

The book outlines ideas revolving around computing image profiles, one for each side. For each row of pixels, a number representing the distance of the first foreground pixel from the edge is computed. This way we get our first independent variables. To capture even more information about digit shapes, we'll also capture the differences between consecutive row values, as well as their global maxima and minima. We'll also compute the width of the shape for each row.
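As a sketch of that idea in Python (my own illustration, not the exact code used later in the series): for each row we record the distance of the first foreground pixel from the left edge, the same from the right, and the shape's width, using the image width as a sentinel for empty rows.

```python
import numpy as np

def row_profiles(img):
    """Per-row left profile, right profile, and shape width of a binary image.

    Empty rows get the image width as a sentinel distance and a width of 0.
    """
    h, w = img.shape
    left, right, width = [], [], []
    for row in img:
        cols = np.flatnonzero(row)          # column indices of foreground pixels
        if cols.size:
            left.append(int(cols[0]))
            right.append(int(w - 1 - cols[-1]))
            width.append(int(cols[-1] - cols[0] + 1))
        else:
            left.append(w)
            right.append(w)
            width.append(0)
    return left, right, width

digit = np.array([[0, 1, 1, 0, 0],
                  [0, 0, 1, 1, 0],
                  [0, 0, 0, 0, 0]])
print(row_profiles(digit))  # ([1, 2, 5], [2, 1, 5], [2, 2, 0])
```

The consecutive differences and the global extrema mentioned above can then be derived from these lists with np.diff, max, and min.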

Because the handwritten digits vary greatly in their thickness, we will first preprocess the images to detect so-called skeletons of the digits. The skeleton is an image representation in which the thickness of the shape has been reduced to just one pixel.

Having the image thinned will also allow us to capture some more info about the shapes. We will write an algorithm that walks the skeleton and records the direction change frequencies.
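Here's a tiny Python sketch of the direction-frequency idea (purely illustrative; the actual walk over the skeleton is implemented later in Julia):

```python
# Sketch: count movement directions along a skeleton walk and turn
# the counts into frequencies. Direction coding as in the article:
#   8 1 2
#   7 - 3
#   6 5 4
MOVES = {
    (-1,  0): 1, (-1,  1): 2, (0,  1): 3, (1,  1): 4,
    ( 1,  0): 5, ( 1, -1): 6, (0, -1): 7, (-1, -1): 8,
}

def direction_frequencies(path):
    # path: consecutive (row, col) points visited while walking the skeleton
    counts = [0] * 8
    for (r0, c0), (r1, c1) in zip(path, path[1:]):
        counts[MOVES[(r1 - r0, c1 - c0)] - 1] += 1
    total = len(path) - 1
    return [c / total for c in counts]

# A vertical stroke of four pixels: three "down" (direction 5) moves.
freqs = direction_frequencies([(0, 0), (1, 0), (2, 0), (3, 0)])
```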

Once we have our set of independent variables Xn, we'll use a classification algorithm to first learn in a supervised way (using the provided labels) and then to predict the values of the test data set. Lastly we'll submit our predictions to Kaggle and see how well we did.

Having fun with languages

In the data science world, the lingua franca still remains the R programming language. In recent years Python has also come close in popularity, and nowadays we can say it's the duo of R and Python that rules the data science world (not counting high performance code written e.g. in C++ in production systems).

Lately a new language designed with data scientists in mind has emerged: Julia. It's a language with characteristics of both dynamically typed scripting languages and statically typed compiled ones. It compiles its code into efficient native binary via LLVM, but it does so in a JIT fashion, inferring the types when needed on the go.

While having fun with the Kaggle challenge I'll use Julia and Python for the so-called feature extraction phase (the one in which we're computing information about our Xn variables). I'll then turn to R for doing the classification itself. Note that I might have used any of those languages at each step, getting very similar results. The purpose of this series of articles is to be a fun, bird's-eye overview, so I decided that this way would be much more interesting.

Feature Extraction

The end result of this phase is the data frame saved as a CSV file so that we'll be able to load it in R and do the classification.

First let's define the general function in Julia that takes the name of the input CSV file and returns a data frame with features of given images extracted into columns:
using DataFrames

function get_data(name :: String, include_label = true)
  println("Loading CSV file into a data frame...")
  table = readtable(string(name, ".csv"))
  extract(table, include_label)
end
Now the extract function looks like the following:
"""
Extracts the features from the dataframe. Puts them into
separate columns and removes all other columns except the
label.

The features:

* Left and right profiles (after fitting into the same sized rect):
  * Min
  * Max
  * Width[y]
  * Diff[y]
* Paths:
  * Frequencies of movement directions
  * Simplified directions:
    * Frequencies of 3 element simplified paths
"""
using ProgressMeter

function extract(frame :: DataFrame, include_label = true)
  println("Reshaping data...")

  function to_image(flat :: Array{Float64}) :: Array{Float64}
    dim = Base.isqrt(length(flat))
    reshape(flat, (dim, dim))'
  end

  from = include_label ? 2 : 1
  frame[:pixels] = map((i) -> convert(Array{Float64}, frame[i, from:end]) |> to_image, 1:size(frame, 1))
  images = frame[:, :pixels] ./ 255

  data = Array{Array{Float64}}(length(images))

  @showprogress 1 "Computing features..." for i in 1:length(images)
    features = pixels_to_features(images[i])
    data[i] = features_to_row(features)
  end

  start_column = include_label ? [:label] : []
  columns = vcat(start_column, features_columns(images[1]))

  result = DataFrame()
  for c in columns
    result[c] = []
  end

  for i in 1:length(data)
    if include_label
      push!(result, vcat(frame[i, :label], data[i]))
    else
      push!(result, vcat([], data[i]))
    end
  end

  result
end

A few nice things to notice here about Julia itself are:
  • The function documentation is written in Markdown
  • We can nest functions inside other functions
  • The language is statically and strongly typed
  • Types can be inferred from the context
  • It is often desirable to provide the concrete types to improve performance (but that's an advanced Julia-related topic)
  • Arrays are indexed from 1
  • There's the nice |> operator found e.g. in Elixir (which I absolutely love)
The above code converts the images to arrays of Float64 and scales the values to be within 0 and 1 (instead of the original 0..255).

A thing to notice is that in Julia we can vectorize operations easily and we're using this fact to tersely convert our number:
images = frame[:, :pixels] ./ 255
We are referencing the pixels_to_features function which we define as:
"""
Returns ImageStats struct for the image pixels
given as an argument
"""
function pixels_to_features(image :: Array{Float64})
  dim      = Base.isqrt(length(image))
  skeleton = compute_skeleton(image)
  bounds   = compute_bounds(skeleton)
  resized  = compute_resized(skeleton, bounds, (dim, dim))
  left     = compute_profile(resized, :left)
  right    = compute_profile(resized, :right)
  width_min, width_max, width_at = compute_widths(left, right, image)
  frequencies, simples = compute_transitions(skeleton)

  ImageStats(dim, left, right, width_min, width_max, width_at, frequencies, simples)
end
This in turn uses the ImageStats structure:
immutable ImageStats
  image_dim             :: Int64
  left                  :: ProfileStats
  right                 :: ProfileStats
  width_min             :: Int64
  width_max             :: Int64
  width_at              :: Array{Int64}
  direction_frequencies :: Array{Float64}

  # The following adds information about transitions
  # in 2 element simplified paths:
  simple_direction_frequencies :: Array{Float64}
end

immutable ProfileStats
  min :: Int64
  max :: Int64
  at  :: Array{Int64}
  diff :: Array{Int64}
end
The pixels_to_features function first gets the skeleton of the digit shape as an image and then passes that skeleton to the other functions. The function returning the skeleton utilizes the fact that in Julia it's trivially easy to use Python libraries. Here's its definition:
using PyCall

@pyimport skimage.morphology as cv

"""
Thin the number in the image by computing the skeleton
"""
function compute_skeleton(number_image :: Array{Float64}) :: Array{Float64}
  convert(Array{Float64}, cv.skeletonize_3d(number_image))
end
It uses the scikit-image library's skeletonize_3d function via the @pyimport macro, using the function as if it were regular Julia code.

Next the code crops the digit itself from the 28x28 image and resizes it back to 28x28 so that the edges of the shape always "touch" the edges of the image. For this we need the function that returns the bounds of the shape so that it's easy to do the cropping:
function compute_bounds(number_image :: Array{Float64}) :: Bounds
  rows = size(number_image, 1)
  cols = size(number_image, 2)

  saw_top = false
  saw_bottom = false

  top = 1
  bottom = rows
  left = cols
  right = 1

  for y = 1:rows
    saw_left = false
    row_sum = 0

    for x = 1:cols
      row_sum += number_image[y, x]

      if !saw_top && number_image[y, x] > 0
        saw_top = true
        top = y
      end

      if !saw_left && number_image[y, x] > 0 && x < left
        saw_left = true
        left = x
      end

      if saw_top && !saw_bottom && x == cols && row_sum == 0
        saw_bottom = true
        bottom = y - 1
      end

      if number_image[y, x] > 0 && x > right
        right = x
      end
    end
  end

  Bounds(top, right, bottom, left)
end
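As a side note, the same bounding box can be computed much more tersely with numpy; this is just an illustrative sketch, not part of the article's pipeline:

```python
# Sketch: bounding box of the non-zero region of an image, returned as
# 1-based (top, right, bottom, left) to match the Julia Bounds above.
import numpy as np

def numpy_bounds(image):
    rows = np.any(image > 0, axis=1)   # which rows contain any set pixel
    cols = np.any(image > 0, axis=0)   # which columns contain any set pixel
    top, bottom = np.where(rows)[0][[0, -1]] + 1
    left, right = np.where(cols)[0][[0, -1]] + 1
    return int(top), int(right), int(bottom), int(left)

image = np.zeros((5, 5))
image[1:4, 2:4] = 1.0           # shape occupies rows 2..4, cols 3..4 (1-based)
bounds = numpy_bounds(image)    # (2, 4, 4, 3)
```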
Resizing the image is pretty straight-forward:
using Images

function compute_resized(image :: Array{Float64}, bounds :: Bounds, dims :: Tuple{Int64, Int64}) :: Array{Float64}
  cropped = image[bounds.top:bounds.bottom, bounds.left:bounds.right]
  imresize(cropped, dims)
end
Next, we need to compute the profile stats as described in our plan of attack:
function compute_profile(image :: Array{Float64}, side :: Symbol) :: ProfileStats
  @assert side == :left || side == :right

  rows = size(image, 1)
  cols = size(image, 2)

  columns = side == :left ? collect(1:cols) : (collect(1:cols) |> reverse)
  at = zeros(Int64, rows)
  diff = zeros(Int64, rows)
  min = rows
  max = 0

  min_val = cols
  max_val = 0

  for y = 1:rows
    for x = columns
      if image[y, x] > 0
        at[y] = side == :left ? x : cols - x + 1

        if at[y] < min_val
          min_val = at[y]
          min = y
        end

        if at[y] > max_val
          max_val = at[y]
          max = y
        end

        break
      end
    end

    if y == 1
      diff[y] = at[y]
    else
      diff[y] = at[y] - at[y - 1]
    end
  end

  ProfileStats(min, max, at, diff)
end
The widths of shapes can be computed with the following:
function compute_widths(left :: ProfileStats, right :: ProfileStats, image :: Array{Float64}) :: Tuple{Int64, Int64, Array{Int64}}
  image_width = size(image, 2)
  min_width = image_width
  max_width = 0
  width_ats = length(left.at) |> zeros

  for row in 1:length(left.at)
    width_ats[row] = image_width - (left.at[row] - 1) - (right.at[row] - 1)

    if width_ats[row] < min_width
      min_width = width_ats[row]
    end

    if width_ats[row] > max_width
      max_width = width_ats[row]
    end
  end

  (min_width, max_width, width_ats)
end
And lastly, the transitions:
function compute_transitions(image :: Image) :: Tuple{Array{Float64}, Array{Float64}}
  history = zeros((size(image, 1), size(image, 2)))

  function next_point() :: Nullable{Point}
    point = Nullable()

    for row in 1:size(image, 1) |> reverse
      for col in 1:size(image, 2) |> reverse
        if image[row, col] > 0.0 && history[row, col] == 0.0
          point = Nullable((row, col))
          history[row, col] = 1.0

          return point
        end
      end
    end

    point
  end

  function next_point(point :: Nullable{Point}) :: Tuple{Nullable{Point}, Int64}
    result = Nullable()
    trans = 0

    function direction_to_moves(direction :: Int64) :: Tuple{Int64, Int64}
      # for frequencies:
      # 8 1 2
      # 7 - 3
      # 6 5 4
      [
       ( -1,  0 ),
       ( -1,  1 ),
       (  0,  1 ),
       (  1,  1 ),
       (  1,  0 ),
       (  1, -1 ),
       (  0, -1 ),
       ( -1, -1 ),
      ][direction]
    end

    function peek_point(direction :: Int64) :: Nullable{Point}
      actual_current = get(point)

      row_move, col_move = direction_to_moves(direction)

      new_row = actual_current[1] + row_move
      new_col = actual_current[2] + col_move

      if new_row <= size(image, 1) && new_col <= size(image, 2) &&
         new_row >= 1 && new_col >= 1
        return Nullable((new_row, new_col))
      else
        return Nullable()
      end
    end

    for direction in 1:8
      peeked = peek_point(direction)

      if !isnull(peeked)
        actual = get(peeked)
        if image[actual[1], actual[2]] > 0.0 && history[actual[1], actual[2]] == 0.0
          result = peeked
          history[actual[1], actual[2]] = 1
          trans = direction
        end
      end
    end

    ( result, trans )
  end

  function trans_to_simples(transition :: Int64) :: Array{Int64}
    # for frequencies:
    # 8 1 2
    # 7 - 3
    # 6 5 4

    # for simples:
    # - 1 -
    # 4 - 2
    # - 3 -
    [
      [ 1 ],
      [ 1, 2 ],
      [ 2 ],
      [ 2, 3 ],
      [ 3 ],
      [ 3, 4 ],
      [ 4 ],
      [ 1, 4 ]
    ][transition]
  end

  transitions     = zeros(8)
  simples         = zeros(16)
  last_simples    = [ ]
  point           = next_point()
  num_transitions = .0
  ind(r, c) = (c - 1)*4 + r

  while !isnull(point)
    point, trans = next_point(point)

    if isnull(point)
      point = next_point()
    else
      current_simples = trans_to_simples(trans)
      transitions[trans] += 1
      for simple in current_simples
        for last_simple in last_simples
          simples[ind(last_simple, simple)] += 1
        end
      end
      last_simples = current_simples
      num_transitions += 1.0
    end
  end

  (transitions ./ num_transitions, simples ./ num_transitions)
end
All those gathered features can be turned into rows with:
function features_to_row(features :: ImageStats)
  lefts       = [ features.left.min,  features.left.max  ]
  rights      = [ features.right.min, features.right.max ]

  left_ats    = [ features.left.at[i]   for i in 1:features.image_dim ]
  left_diffs  = [ features.left.diff[i]  for i in 1:features.image_dim ]
  right_ats   = [ features.right.at[i]  for i in 1:features.image_dim ]
  right_diffs = [ features.right.diff[i]  for i in 1:features.image_dim ]
  frequencies = features.direction_frequencies
  simples     = features.simple_direction_frequencies

  vcat(lefts, left_ats, left_diffs, rights, right_ats, right_diffs, frequencies, simples)
end
Similarly we can construct the column names with:
function features_columns(image :: Array{Float64})
  image_dim   = Base.isqrt(length(image))

  lefts       = [ :left_min,  :left_max  ]
  rights      = [ :right_min, :right_max ]

  left_ats    = [ Symbol("left_at_",  i) for i in 1:image_dim ]
  left_diffs  = [ Symbol("left_diff_",  i) for i in 1:image_dim ]
  right_ats   = [ Symbol("right_at_", i) for i in 1:image_dim ]
  right_diffs = [ Symbol("right_diff_", i) for i in 1:image_dim ]
  frequencies = [ Symbol("direction_freq_", i)   for i in 1:8 ]
  simples     = [ Symbol("simple_trans_", i)   for i in 1:4^2 ]

  vcat(lefts, left_ats, left_diffs, rights, right_ats, right_diffs, frequencies, simples)
end
The data frame constructed with the get_data function can be easily dumped into a CSV file with the writetable function from the DataFrames package.

You can see that gathering / extracting features is a lot of work. All this needed to be done because in this article we're focusing on the somewhat "classical" way of doing machine learning. You might have heard about algorithms that mimic how the human brain learns. We're not focusing on them here; we will explore them in a future article.

We use the mentioned writetable on data frames computed for both training and test datasets to store two files: processed_train.csv and processed_test.csv.

Choosing the model

For the classification task I decided to use the XGBoost library, which is a hot new technology in the world of machine learning. It's an improvement over the so-called Random Forest algorithm. The reader can find out more about XGBoost on its website.

Both Random Forest and XGBoost revolve around an idea called ensemble learning. In this approach we're not getting just one learning model: the algorithm actually creates many variations of models and uses them to collectively come up with better results. This is as much of a short description as can be given here, as this article is already quite lengthy.
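To give a feel for the "collective" part, here's a toy Python sketch of the simplest possible ensemble, a majority vote (XGBoost's gradient boosting builds and combines its models in a far more sophisticated way):

```python
# Toy sketch of ensemble voting: several models each predict a label,
# and the ensemble returns the most common answer. Purely illustrative.
from collections import Counter

def majority_vote(predictions):
    # most_common(1) returns [(label, count)] for the top answer
    return Counter(predictions).most_common(1)[0][0]

# Three imaginary models predict a digit; two say 7, one says 1.
label = majority_vote([7, 1, 7])
```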

Training the model

The training and classification code in R is very simple. We first need to load the libraries that will allow us to load data as well as to build the classification model:
library(readr)
library(xgboost)
library(caret)
Loading the data into data frames is equally straight-forward:
processed_train <- read_csv("processed_train.csv")
processed_test <- read_csv("processed_test.csv")
We then move on to preparing the vector of labels for each row as well as the matrix of features:
labels = processed_train$label
features = processed_train[, 2:141]
features = scale(features)
features = as.matrix(features)

The train-test split

When working with models, one of the ways of evaluating their performance is to split the data into so-called train and test sets. We train the model on one set and then we predict the values from the test set. We then calculate the accuracy of predicted values as the ratio between the number of correct predictions and the number of all observations.
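In Python terms (just a sketch to pin the definition down), the accuracy measure described above is nothing more than:

```python
# Sketch: accuracy as the ratio of correct predictions
# to the number of all observations.

def accuracy(predicted, actual):
    correct = sum(1 for p, a in zip(predicted, actual) if p == a)
    return correct / len(actual)

acc = accuracy([1, 2, 3, 4], [1, 2, 9, 4])  # 3 of 4 match -> 0.75
```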

Because Kaggle provides the test set without labels, for the sake of evaluating the model's performance without the need to submit the results, we'll split our Kaggle-training set into local train and test ones. We'll use the amazing caret library which provides a wealth of tools for doing machine learning:

index <- createDataPartition(processed_train$label, p = .8, 
                             list = FALSE, 
                             times = 1)

train_labels <- labels[index]
train_features <- features[index,]

test_labels <- labels[-index]
test_features <- features[-index,]
The above code splits the set uniformly based on the labels so that the train set is approximately 80% in size of the whole data set.
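For the curious, the idea behind such a stratified split can be sketched in Python (this is only an illustration of the concept, not caret's actual createDataPartition algorithm):

```python
# Sketch of a stratified train/test split: pick ~p of the indices
# within each label group, so label proportions are preserved.
import random
from collections import defaultdict

def stratified_indices(labels, p=0.8, seed=1):
    by_label = defaultdict(list)
    for i, label in enumerate(labels):
        by_label[label].append(i)

    rng = random.Random(seed)
    train = []
    for indices in by_label.values():
        rng.shuffle(indices)
        train.extend(indices[:int(len(indices) * p)])
    return sorted(train)

labels = [0] * 10 + [1] * 10
train_index = stratified_indices(labels, p=0.8)
test_index = [i for i in range(len(labels)) if i not in set(train_index)]
```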

Using XGBoost as the classification model

We can now make our data digestible by the XGBoost library:
train <- xgb.DMatrix(as.matrix(train_features), label = train_labels)
test  <- xgb.DMatrix(as.matrix(test_features),  label = test_labels)
The next step is to make the XGBoost learn from our data. The actual parameters and their explanations are beyond the scope of this overview article, but the reader can look them up on the XGBoost pages:
model <- xgboost(train,
                 max_depth = 16,
                 nrounds = 600,
                 eta = 0.2,
                 objective = "multi:softmax",
                 num_class = 10)
It's critically important to pass the objective as "multi:softmax" and num_class as 10.

Simple performance evaluation with confusion matrix

After waiting a while (a couple of minutes) for the last batch of code to finish computing, we now have the classification model ready to be used. Let's use it to predict the labels from our test set:
predicted = predict(model, test)
This returns the vector of predicted values. We'd now like to check how well our model predicts the values. One of the easiest ways is to use the so-called confusion matrix.

As per Wikipedia, a confusion matrix is simply:

(...) also known as an error matrix, is a specific table layout that allows visualization of the performance of an algorithm, typically a supervised learning one (in unsupervised learning it is usually called a matching matrix). Each column of the matrix represents the instances in a predicted class while each row represents the instances in an actual class (or vice versa). The name stems from the fact that it makes it easy to see if the system is confusing two classes (i.e. commonly mislabelling one as another).
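Building such a matrix by hand is straightforward; here's a minimal Python sketch (rows are predicted classes, columns are actual classes, matching the printout below):

```python
# Sketch: build a confusion matrix from predicted and actual labels.
# matrix[p][a] counts observations predicted as p whose true label is a.

def confusion_matrix(predicted, actual, n_classes):
    matrix = [[0] * n_classes for _ in range(n_classes)]
    for p, a in zip(predicted, actual):
        matrix[p][a] += 1
    return matrix

m = confusion_matrix([0, 0, 1, 1], [0, 1, 1, 1], n_classes=2)
# m[0] == [1, 1]: one correct 0, and one actual 1 predicted as 0
# m[1] == [0, 2]: two correct 1s
```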

The caret library provides a very easy to use function for examining the confusion matrix and statistics derived from it:
confusionMatrix(data=predicted, reference=test_labels)
The function returns an R list that gets pretty printed to the R console. In our case it looks like the following:
Confusion Matrix and Statistics

Prediction   0   1   2   3   4   5   6   7   8   9
         0 819   0   3   3   1   1   2   1  10   5
         1   0 923   0   4   5   1   5   3   4   5
         2   4   2 766  26   2   6   8  12   5   0
         3   2   0  15 799   0  22   2   8   0   8
         4   5   2   1   0 761   1   0  15   4  19
         5   1   3   0  13   2 719   3   0   9   6
         6   5   3   4   1   6   5 790   0  16   2
         7   1   7  12   9   2   3   1 813   4  16
         8   6   2   4   7   8  11   8   5 767  10
         9   5   2   1  13  22   6   1  14  14 746

Overall Statistics
               Accuracy : 0.9411         
                 95% CI : (0.9358, 0.946)
    No Information Rate : 0.1124         
    P-Value [Acc > NIR] : < 2.2e-16      
                  Kappa : 0.9345         
 Mcnemar's Test P-Value : NA             

Each column in the matrix represents actual labels, while rows represent what our algorithm predicted the value to be. There's also the accuracy rate printed for us, and in this case it equals 0.9411. This means that our code was able to predict the correct values of handwritten digits for 94.11% of observations.
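Note that the accuracy can be read straight off the matrix: correct predictions sit on the diagonal, so accuracy is the diagonal sum over the sum of all entries. A quick Python sketch:

```python
# Sketch: accuracy recovered from a confusion matrix.

def accuracy_from_confusion(matrix):
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

# A toy 2x2 matrix borrowing two of the counts from the printout above:
acc = accuracy_from_confusion([[819, 3], [4, 766]])   # roughly 0.9956
```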

Submitting the results

We got an accuracy rate of 0.9411 for our local test set, and it turned out to be very close to the one we got for the test set coming from Kaggle. After predicting the competition values and submitting them, the accuracy rate computed by Kaggle was 0.94357. That's quite okay, given the fact that we aren't using any of the new and fancy techniques here.

Also, we haven't done any parameter tuning, which could surely improve the overall accuracy. We could also revisit the code from the feature extraction phase. One improvement I can think of would be to first crop and resize, and only then compute the skeleton, which might preserve more information about the shape. We could also use the confusion matrix, take the digit that was confused most often, and look at the real images that we failed to recognize. This could lead us to conclusions about improvements to our feature extraction code. There's always a way to extract more information.

Nowadays, Kagglers from around the world are successfully using advanced techniques like Convolutional Neural Networks, getting accuracy scores close to 0.999. Those live in a somewhat different branch of the machine learning world, though. Using this type of neural network we don't need to do the feature extraction on our own. The algorithm includes a step that automatically gathers features, which it later feeds into the network itself. We will take a look at them in a future article.

See also


published by Eugenia on 2017-05-28 18:44:11 in the "Metaphysics" category
Eugenia Loli-Queru

I saw a UFO when I was 17 years old, I’ve written about that before on this blog and elsewhere.

What I never wrote publicly before, was that in the Fall of 2000 (must have been mid-October), when I was still living in Guildford, UK, I had a weird night incident that resembled an alien abduction story. I have no full memory of it apart from its initial stage; I never attempted hypnosis about it. It created a lot of fear in me, and I could not sleep the subsequent nights, until I finally talked to a monk about it a month later, and he helped me overcome it.

Anyway, that’s when my curiosity about the whole UFO thing really peaked. I only left trails about that on my blog through the years, never openly making it a big deal, and only speaking about it with “what-ifs”. But I feel that now I can talk about it openly, because I have overcome the fear of the situation. I believe that I now see the whole alien agenda a bit more objectively, rather than showered in fear. FYI, except for these two occasions, I’ve never had an alien-related conscious memory, not before, and not after.

Below is my personal opinion as to what might be happening with these aliens and abductions, in the form of Q&A.

Grey alien

Q: So, you’re saying that “they’re here”?

A: Yes, they’re here, and they’ve been here for a long time already.

Think of it this way: If NONE of the other civilizations have reached us in all of these millions of years, it means that we would never reach any of them either. Simple statistical logic. That would make this universe of ours simply not worth living in, as it would make it a lonely, sad place. A literal death trap with nowhere to go.

But I don’t believe that the Universe is such a thing. I believe that it’s teeming with life, and some of that life, has managed to make the big jump. And while we might be a passing curiosity for many of these species, it seems that the ones that look the most like us humanoids (like the Greys, in our case), probably have stuck around for a while. We would do the same thing after all.

Q: What are they doing here?

A: They’re preparing the replacement of homo sapiens with a hybrid species that looks like human, but thinks like a Grey.

Q: Wait, whaaat? Why?

A: Because homo sapiens has run its course. We have managed to acquire great technology, but we can’t use it wisely. The fact that two atomic bombs were dropped on other human beings, and that nuclear testing went on for so many years, shows that homo sapiens is capable of creating, but not of wisely managing its creations. The average human isn’t capable of thinking on a macro scale, and our politics and even our businesses aren’t designed to take action for more than 2-3 years into the future (generally speaking).

The way we have destroyed our environment and the other species on our planet is another cue. I mean, it’s rather pointed when abductees are told to “take care of the environment”, as far back as the 1970s, right after all the invasive medical procedures. First they harvest your eggs or sperm, and then they tell you to “please don’t litter”? What’s up with that? Obviously, they want the planet intact. They’re not destroyers. They’re a cleanup crew, with respect towards all life. “Fixing” us would ensure that other species on our planet could go on too, without the ever-present human danger.

Q: How are they going to replace us?

A: Using a hybrid program. Several levels of hybridization have been reported. The latest reports speak of hybrids that contain a large part of the Grey intellect (complete with sharp logic, control of emotions, telepathy), in an externally human-looking body. These upgrades are encoded in the hybrid’s DNA, but not switched ON yet. Hence the so called abduction phenomena, and why they mostly abduct people from the same families and lineages — because they carry the DNA they are interested in working with.

Oh, don’t be so sad about it. I’m sure you didn’t bat an eye when you read about the demise of the Neanderthal man. Life moved on. And I’m sure you’re ok with the hundreds of breeds of cats, dogs, cows, chickens, trees, vegetables, and everything in between. All re-made by man. So why should THEY feel guilty about doing us? The homo sapiens arrogance, double standards, and hypocrisy are without end on this issue.

Q: Wouldn’t it be simpler to just militarily invade us then, instead of this elaborate conspiratorial plan?

A: No. They’re not interested in our resources, and they’re not interested in killing us. In fact, they need us. Water and minerals can easily be extracted from other dead planets or asteroids given the right technology — technology they already have. What’s in short supply instead, is compatible consciousness itself.

To understand why they would prefer to upgrade us, instead of taking us out, or invading us, you must view the whole thing from a higher perspective. A civilization that has managed to survive for a long time understands evolution and the process of life better than toddler civilizations like ours. With understanding comes respect. With respect, they allow evolution to take its course. And when there are hiccups (like our suicidal civilization), they offer help. That’s their brand of help.

Other civilizations might help differently. For example, the so-called Pleiadians, advise us to sit down our ass, meditate all day, and go vegan. Find our own way via enlightenment. Obviously, the Greys are a bit more hands-on and practical than spewing hippie shit.

Q: Why don’t they just tell us what to do instead of replacing us?

A: Because they know of our mental limitations. If your selfish child-self were suddenly to move in with a different family that you share no familial bond with, and who would tell you to make a perfect bed, clean up your room daily, brush your teeth and walk the dog twice a day, and take on a number of household chores, what would the result be? Resentment, that’s what it would be. You’d grow to hate them for not letting you “be free”: wasteful, lazy, un-thoughtful etc. So the main new feature of the replacement species is meant to be responsibility, so such “chores” are not seen as chores anymore, but as necessary acts to keep the planet tidy.

Q: So you’re saying that they’re doing it out of the goodness of their hearts?

A: Yes, and no. They definitely get something out of it too, let’s not be naive here. This is two-way beneficial, it’s not just for us. My understanding is that this is how they propagate their species and civilization: by breeding with compatible, local alpha-species, which gives them the ability to actually live in the various planetary environments. It’s in fact, the smart way of “invading”.

Q: So, they ARE invading!

A: In a way, yes, they are. But in another way, they’re not. It’s an intervention more than an invasion. If they wanted to just plainly invade and take us over, they’d have done so thousands of years ago (apparently, according to reports, they found us a long time ago). Instead, they chose, and were ALLOWED, to do so only after the human species reached the end of its rope evolutionarily speaking, and stupidly started using nuclear weapons. In fact, the program kickstarted in a serious manner exactly then. That timing is no coincidence, and it’s the biggest clue as to “why” and “why now”.

Q: Who allowed them to do so?

A: If we have a United Nations and a NATO, rest assured there’s something equivalent on a galactic scale. The Grey program seems to be at least one of the programs that is active in our planet. Some other alien factions have expressed support for the Grey program, and some others dismay. Just like in our UN, nobody can agree on everything, but probably majority rules (or it might be a hierarchical system, who knows).

Q: Why aren’t other alien species revealing themselves?

A: Because no one wants to deal with children who just found the box of matches. There is a type of prime directive, but unlike Star Trek, it’s not about the invention of a warp drive. Instead, that prime directive is a requirement of undeniable proof that the young species isn’t suicidal. Because if they are suicidal, it means that they would be equally hostile. So, they would never show up on our doorstep and introduce themselves, in order to protect their own. You don’t go knock on the door and introduce yourself & your kids to the registered sex offender next door, now do you? If a species successfully goes through that crucial phase of its evolutionary development (I’m guessing most civilizations do go through that), and it irons out any destructive inklings, then they would reveal themselves.

Since we FAILED to do so (with the nuclear bombings and wars), “help” was sanctioned, and approved.

Q: Does our government know about this?

A: Of course they do, at least since 1947 via the Roswell crash, or maybe a tiny bit earlier (the so-called “foo fighters”, an early name for flying saucers, were reported by both Allies and Axis pilots). Although there’s no clear proof of it, Eisenhower is said to be the first and only president who saw these beings in the flesh in 1954, and inked a deal after the Washington DC UFO incidents in 1952. At some level, the government cooperated at least up to the 1970s — then information gets a bit murky, as the deal supposedly fell through, after the government realized the extensive abduction phenomena.

Q: Why don’t they disclose it?

A: They can’t. Both because the aliens don’t want them to, but also because it makes absolutely no sense to disclose it. And no, it’s NOT because as many people before have mentioned, that it would destroy our socio-economic system and our religious beliefs. That is hogwash, humans are ready for an alien life announcement from that point of view. The REAL danger, is that they CAN’T adequately explain to citizens that homo sapiens will eventually get replaced with a hybrid version, and that people get abducted against their will to that end. That’s simply something that you CAN’T tell the average Joe, because their intellect would only go as far as: INVASION (for liberals), or, DEMONS FROM HELL (for conservatives). And while it is a TYPE of invasion (from one point of view), it’s one that’s BENEFICIAL, and NEEDED by us. But humans are so RACIST, that they would never see it like that. They’ll see it as WAR. And that’s a war that no Earth government can win. So there’s *absolutely nothing* to tell the public, but express denial and laugh it off.

I personally agree with the US government, and the aliens, that disclosure SHOULD NOT happen (neither I believe it will). Homo sapiens is simply too stupid to see the bigger picture in an objective manner. That’s why it’s getting replaced in the first place! Duh!

Q: Are you a government misinformation shill?

A: No. I’m just an artist. One with lots of imagination. And rather good analytical skills, if I may say so myself.

Q: Why can’t our government fight them at all?

A: Let’s replace “government” with CIA/NSA + military. The “government” as you know it is inadequate to act on such issues, so they’ve been kept in the dark (correctly or not) since the JFK times. In fact, let’s go one step further and replace CIA/NSA + military with the private sector. A lot of these projects have since been moved to the private sector, the big government contractors.

A number of previously top-secret documents about the CIA experimenting with LSD and remote viewing are now public. The public somehow thinks that they experimented with these in order to achieve “mind control”, which is a completely false theory, since no one on LSD can be “controlled” in any way. One of the reasons the CIA explored them was to understand and access parallel realities, and to access information vital to counter-intelligence (which ended in failure, since the aliens have technology that can block a junior consciousness like ours from accessing their ships).

You see, most of the abductions don’t even happen in our physical realm. They usually happen in our neighboring “sheet of reality”/dimension (the “astral world”). Even in the event that they appear physically, they often only abduct the etheric body and not the physical one (despite erroneous reports by abductees who can’t tell the difference, or hypnotists who are staunch atheists and don’t accept the idea of multi-dimensional bodies). Hence the little-to-no proof of abduction, the fact that we have no memory of it (due to the altered state required to inhabit a different dimension), and the foggy, out-of-focus pictures of UFOs (there are technical reasons for that, btw). Yes, they can operate in our reality, and they often do, but I postulate that they mostly work in a neighboring reality, which humans can enter only in an altered state. Impregnation and implants can be done on the etheric body and still influence the physical body, although such acts are mostly done in our physical realm.

I want to Believe

Q: Wait, you lost me. Don’t mix New Age crap in this please. At least, keep it sci-fi.

A: Sorry to rain on your parade, but in order to understand how these guys came here in the first place (it’s easier to punch through realities than break the speed of light), or how they operate, some of these “New Age” beliefs about dimensions, higher selves, and souls must be mixed into the whole story.

This is where I had my biggest blockage in accepting the whole story for years too. When someone mixed New Age “spirituality” into all this, I hated it and shut it down, because I was an atheist (I still am, but of a different kind). I believed that aliens would simply be from another planet, and that’s about it.

Only when I started studying psychedelics did I realize that what we call “spiritual” is nothing but science that we haven’t understood yet. There’s nothing spiritual about souls, “guardian angels”, or “negative entities”. These are just naive words we use because we don’t understand the nature of reality at that level yet. Eventually, we could understand how a soul is just a piece of consciousness that derives from a larger piece, how spirit guides are nothing but evolved entities that help junior entities in their own path of evolution, and how dark entities are just entities who evolved to not give a shit about anything, so we perceive them as dark, because our toddler brain has only evolved as far as interpreting danger at a rudimentary level, and it doesn’t always see things the way they truly are (meaning, what we might perceive as a “demon” might look very different, physically and emotionally, to a more advanced brain or entity).

Q: So you’re saying we’re not dead… after death?

A: Our ego/persona dies (because it was just a construct to keep us alive, and not fundamental), but our consciousness continues its evolution into another vessel/body. This is why the Greys themselves refer to physical bodies as “containers”. Evolution happens both in the consciousness level, and in the physical level. Absolutely nothing stays the same (physical or immaterial), and as everything changes, it strives for novelty. Hence “evolution”. As for “God”, it’s nothing but the sum of everything manifested and unmanifested, trying to understand what itself is. In absolute reality, all is one, there’s no separation. But in order to understand itself, these sparks of consciousness are needed, in order to understand itself “from the inside” (since there’s nothing outside the Everything). “God” is not a guy, and has no persona. It’s simply pure Being. It doesn’t care if you pray to it or not either. You’re in it, and you’re it.

Q: Ok… so the Greys know all that “religious” stuff, but they approach it scientifically instead?

A: Yes. And they’re in fact in contact with these higher entities (have you seen the size of their heads lately? these guys can reach them via meditation easily, I reckon). There have been reports of actual souls (which look like balls of light) entering abducted pregnant women, to give life to their babies. Greys have also shown future probabilistic scenes, past-life information, and other such things that we would naively call “spiritual”. It’s just crazy-advanced science, that’s all.

Very telling are constant reports of abductees pleading to be left alone, and the Greys coldly replying: “we have every right, you have already agreed to this prior to coming to this life”. And they’re probably right.

Q: Are they demons? Do they steal souls?

A: Erm… no… these are religious stupidities that humans believe. They just want to propagate in the cosmos, just like all living species do. Merging with native species that easily survive without cloning in their natural environment (one that the Greys aren’t adapted to) is their way of propagating and evolving at a fast pace. It’s rather brilliant actually.

Q: So… are they good, or are they bad?

A: They’re rather neutral, like we are. They are doing us a favor, they’re doing themselves a favor, and at the same time, they’re inflicting pain anyway (ask the mutilated cows if they agreed to that prior to coming to this life — har, har). At the end of the day, it’s a point of view. If you focus on the positives, they’re good; if you focus on the negatives, they’re bad. The current schism in Ufology, with half the people claiming that they’re good and the other half claiming they’re bad, is kinda laughable actually. Both sides are short-sighted. I’d say that the agenda itself is “good”, but they, individually, can be either, like we are.

Q: What’s their end game?

A: Their plan is long-term, but they also prepare for the worst, in case they have to act quickly. Basically, I believe that in their best-case scenario, the switch will happen without anyone realizing that it did. Hybrid abductees from old lineages of hybrids would just blend in, make babies, and at some point gradually get “switched on” by the Greys to make use of their higher functions (e.g. previously blocked memories, extra scientific knowledge, telepathy etc). By that time, there would be a lot of them, enough to take over the planet slowly but surely (some ufologists say that up to 5% of the population today has been abducted at some point in their life).

The worst-case scenario is if a world war with nuclear potential starts soon, or if the governments go on the offensive. In that case, these abductees/hybrids would have to be switched on early, to try to tame the situation and take the upper hand. In fact, abductees have been shown VR scenarios where they have to play a role in a chaotic situation. That was preparation, just in case.

Obviously, the Greys hope for the first case, where the transition happens smoothly (as has probably happened in our far past too), without the population realizing it or viewing it as an invasion. But they also prepare for the alternative.

Q: When is the switch going to happen?

A: It already has. According to David Jacobs, PhD, it started early in the last decade. It’s a program that began in the 19th century, or a bit earlier, and intensified in the 1940s, with the atomic age. Humans didn’t realize that abductions were happening until the mid-1960s, when hypnosis became available.

Q: Now what?

A: Now nothing. When these beings take over in the future, in one way or another, it won’t be pretty for homo sapiens. Not that they would inflict a genocide or anything, but it’s not going to be nice to be the inferior of the two species. When these guys can speak telepathically among themselves and we can’t, that gives them an advantage in life.

Q: What about crop circles?

A: I just see them as teasers. At least the ones that are authentic, and not made by prankster humans.

Q: Who are the Men in Black?

A: Could be anyone. Human agents, hybrids, or Greys “wearing” a human cloned body. Depends on the mission, I guess.

Q: What about the Reptilians, Nazis on the moon, secret US bases on Mars?

A: These conspiracies are the silliest ever. “The Queen of England and most politicians are Reptilians”, they say. Oh, really? If advanced alien Reptilians had taken over Trump’s body, then Trump would have ceased to be such a buffoon the day he stepped into the White House. That right there is proof that these conspiracies are stupid beyond belief.

I don’t doubt that two souls can be housed in a single container, but I don’t think that this would be allowed without a pre-agreement. I’d consider it to be a rarity.

Q: Ok now… surely you don’t believe that Greys exist, right?

A: I’m certain they do. People have seen them not only consciously, or via hypnosis, but also via meditation and psychedelics.

Break-through psychedelics are not hallucinations per se. Psychedelics turn off the brain’s filter (which helps us navigate and survive our reality), so we experience reality in a rough, unfiltered way (hence the colors and fractals). When entering other concrete neighboring realities via psychedelics, we can see everything in crisp detail (and that’s how Greys were seen). The “higher” we go in terms of dimensions, the more abstract things become (because they’re further from our set-in-stone reality). In these higher dimensions, our brain can’t comprehend the non-Euclidean math found there; a great article about that subject from a Stanford researcher can be read here.

Q: It must just be sleep paralysis. They can’t be real.

A: What’s “real”? Our reality is only real to us because our senses tell us so, while our brain filters out anything we don’t need to survive. In an altered state, by definition you tune in to another reality, because you shift your senses elsewhere.

As for sleep paralysis, that has always been such a cop-out explanation that it’s not even funny anymore. How about all the people who were not sleeping when abducted, or when seeing UFOs, often in mass sightings?

I’ll tell you this: of ALL the other supernatural conspiracies out there (e.g. Bigfoot, Loch Ness, chupacabra, ghosts etc), the UFO conspiracy is the only one that has both survived the test of time AND is backed up by testimonies of credible people: cops on the job, ex-military personnel, trained pilots, and of course astronauts who have gone public (like Buzz Aldrin), and even a US president as a witness, Jimmy Carter. Google it.

Q: Can I do something about all that?

A: No. Whatever’s going to happen is going to happen. Who cares, really? I mean, even if we get reincarnated here, we’d still have bodies to inhabit, only upgraded ones. I find that a positive thing. If you take the racist approach of the “pure-blood human”, you will get disappointed: we’ve probably never been pure-blood, and anyway, only your ego would take such sides of “us and them”. A higher self, which can reincarnate into anything, doesn’t give a shit if its next body is a human, a Grey, a human-Grey hybrid, or an ant.

So live your life in peace and happiness while you’re still human, and you have the gift of thought to appreciate creation.


published by (Peter Hankiewicz) on 2017-05-27 22:00:00 in the "browsers" category

The last week was really interesting for me. I attended infoShare 2017, the biggest tech conference in central-eastern Europe. The agenda was impressive, but that’s not everything. There was a startup competition going on too, and really, I’m totally impressed.

infoShare in numbers:

  • 5500 attendees
  • 133 speakers
  • 250 startups
  • 122 hours of speeches
  • 12 side events
Let’s go through each speech I attended.

Day 1

Why Fast Matters by Harry Roberts

Harry tried to convince us that performance is important.

Great speech, showing that it’s an interesting problem, and not only from a financial point of view. You must see it; link to his presentation:

Dirty Little Tricks From The Dark Corners of Front-End by Vitaly Friedman

It was magic! I work a lot with CSS, but this speech showed me some new ideas and reminded me that the simplest solution is not always the best one, and that we should reuse CSS between components as much as possible.

Keep it DRY!

One of these tricks is the quantity query CSS selector. It’s a pretty complex selector that can apply your styles to elements based on the number of siblings.

The Art of Debugging (browsers) by Remy Sharp

It was great to watch another developer’s workflow during debugging. I usually work from home, so it’s not something I get to see often.

Remy is a very experienced JavaScript developer and showed us his skills and tricks, especially an interesting Chrome developer console integration.

I always thought that using the developer console for programming is not the best idea, but maybe it isn’t so bad? It looked pretty neat.

Desktop Apps with JavaScript by Felix Rieseberg from Slack

Felix from Slack presented the power of hybrid desktop apps. He used a framework called Electron. Using Electron you can build native, cross-platform desktop apps with HTML, JavaScript, and CSS. I don’t think it’s the best approach for more complex applications, and it probably takes more system memory than truly native applications, but for simpler apps it can be the way to go!

GitHub uses it to build their desktop app, so maybe it’s not so slow? :)

RxJava in existing projects by Tomasz Nurkiewicz from Allegro

Tomasz Nurkiewicz from Allegro showed us his strong programming skills and provided some practical RxJava examples. RxJava is a library for composing asynchronous and event-based programs using observable sequences, for the Java VM.

Definitely something to read about.

Day 2

What does a production ready Kubernetes application look like? by Carter Morgan from Google

Carter Morgan from Google showed us practical uses of Kubernetes.

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It was originally designed by Google developers, and I think they really want to popularize it. It looked like Kubernetes has a low learning curve, but the DevOps engineers I spoke with after the presentation were skeptical, saying that if you know how to use Docker Swarm then you don’t really need Kubernetes.

Vue.js and Service Workers become Realtime by Blake Newman from Sainsbury's

Blake Newman is a JavaScript developer and a member of the core team of Vue.js (a trending, hot JavaScript framework). He explained how to use Vue.js with service workers.

Service workers are scripts that your browser runs in the background. It was nice to see how it all fits together, even though it’s not yet supported by every popular browser.



Listen to your application and sleep by Gianluca Arbezzano from InfluxData

Gianluca showed us his modern and flexible monitoring stack. Great tips, mostly discussing and recommending InfluxDB and Telegraf, which we use a lot at End Point.

He was right that it’s easy to configure, open source, and really useful. Great speech!


Amazing two days. All the presentations will be available on YouTube soon:

I can fully recommend this conference; see you next time!


published by (Peter Hankiewicz) on 2017-05-26 22:00:00 in the "backend" category

Here at End Point, we have had the pleasure of being part of multiple Drupal 6, 7, and 8 projects. Most of our clients wanted to use the latest Drupal version, to have a stable platform with long-term support.

A few years ago, I already had a lot of experience with PHP itself and various other PHP-based systems like WordPress, Joomla!, and TYPO3. I was happy to use all of them, but then one of our clients asked us for a simple Drupal 6 task. That’s how I started my Drupal journey, which continues to this day.

To be honest, I had a difficult start; it was different, new, and pretty inscrutable for me. After a few days of reading documentation and playing with the system, I was ready to do some simple work. Here I want to share my thoughts about Drupal and tell you why I LOVE it!

Low learning curve

It took, of course, a few months until I was ready to build something more complex, but it really takes only a few days to be ready for simple development. It’s not only Drupal, but also PHP: it’s much cheaper to maintain and extend a project. Maybe it’s not so important with smaller projects, but it is definitely important for massive code bases. Programmers can jump in and start being productive really quickly.

Great documentation

Drupal documentation is well structured and constantly developed; usually you can find what you need within a few minutes. That’s critical, a must-have for any framework, and unfortunately not so common.

Big community

The Drupal community is one of the biggest IT communities I have ever encountered. They extend, fix, and document the Drupal core regularly. Most of them have other jobs and work on this project just for fun and with passion.

It?s free

It’s an open source project; that’s one of the biggest pros here. You can get it for free, you can get support for free, and you can join the community for free too (:)).


On the official Drupal website you can find tons of free plugins/modules. It’s a time and money saver: you don’t need to reinvent the wheel for every new widget on your website, and you can focus on the fireworks.

Usually you can just go there and find a proper component. E-commerce shop? Slideshow? Online classifieds website? No problem! It’s all there.

PHP7 support

I often hear from other developers that PHP is slow. Well, it’s not the Road Runner, but come on, unless you are Facebook (and I think that they, correct me if I’m wrong, still use PHP :)) it’s just OK to use PHP.

Drupal fully supports PHP7.

With PHP7 it’s much faster, better, and safer. To learn more:

In the infographic you can see that PHP7 is much faster than Ruby, Perl, and Python when you try to render a Mandelbrot fractal. In general, you definitely can’t say that PHP is slow, and the same goes for Drupal.

REST API support

Drupal has a built-in, ready-to-use API system. In a few moments you can spawn a new API endpoint for your application. You don’t need to implement a whole API by yourself; I have done it a few times in multiple languages, and believe me, it’s problematic.
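As a rough sketch of what that looks like from the consumer's side: with Drupal 8's core REST module, content becomes fetchable as JSON by appending the `_format` query parameter to a node's path. The host and node ID below are made up for illustration.

```shell
#!/bin/sh
# Hypothetical example of requesting a Drupal 8 node through the core
# REST interface. HOST and NODE_ID are assumptions, not a real site.
HOST='https://example.com'
NODE_ID=1

# ?_format=json asks Drupal's REST module for a JSON representation.
URL="$HOST/node/$NODE_ID?_format=json"
echo "$URL"

# In a real script you would then fetch it, e.g.:
#   curl -s -H 'Accept: application/json' "$URL"
```

The same idea extends to views and custom REST resources, which is what makes Drupal so convenient as a pure content backend.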

Perfect for a backend system

Drupal is a perfect candidate for a backend system. Let’s imagine that you want to build a beautiful mobile application. You want to let editors and other people edit content. You want to grab this content through the API. It’s easy as pie with Drupal.

Drupal’s web interface is stable and easy to use.

Power of taxonomies

Taxonomies are, basically, just dictionaries. The best thing about taxonomies is that you don’t need to touch code to play with them.

Let’s say that on your website you want to create a list of states in the USA. Using most frameworks, you need to ask your developer or another technical person to do so. With taxonomies you just need a few clicks, and that’s it: you can put it on your website. That’s sweet, not only for non-technical people, but for us developers as well. Again, you can focus on actually making the website attractive, rather than spending time on things that can be automated.


Of course, Drupal is not perfect, but it’s undeniably a great tool. Mobile application, single-page application, corporate website - there are no limits for this content management system. In my opinion it is the best tool to manage your content, and that does not mean you need to use Drupal to present it. You can create a mobile, ReactJS, AngularJS, or VueJS application and combine it with Drupal easily.

I hope that you’ve enjoyed reading this and I’d love to hear back from you! Thanks.


published by (Muhammad Najmi Ahmad Zabidi) on 2017-05-25 12:59:00 in the "Malaysia" category
The three-day Malaysia Open Source Conference (MOSC) ended last week. MOSC is an open source conference held annually, and this year it reached its 10th anniversary. I managed to attend the conference with a selective focus on presentations related to system administration, computer security, and web application development.

The First Day

The first day’s talks were filled with keynotes from the conference sponsors and major IT brands. After the opening speech and a lightning talk from the community, Mr Julian Gordon delivered his speech on the Hyperledger project, a blockchain-based ledger technology. Later Mr Sanjay delivered his speech on open source implementations in the financial sector in Malaysia. Before the lunch break we listened to Mr Jay Swaminathan from Microsoft, who presented his talk on Azure-based services for blockchain technology.

For the afternoon of the first day I attended a talk by Mr Shak Hassan on Electron-based application development. You can read his slides here. I personally use an Electron-based application for Zulip, so as a non-web-developer I already had a mental picture of what Electron is prior to the talk, but the speaker’s session taught me more about what happens in the background of the application. Finally, before I went back for the day, I attended a slot delivered by Intel Corp on the Yocto Project, with which we can automate the process of creating a bootable Linux image for any platform, whether Intel x86/x86_64 or ARM based.

The Second Day

The second day of the conference started with a talk from Malaysia Digital Hub. The speaker, Diana, presented the state of Malaysian startups currently being shaped and assisted by Malaysia Digital Hub, and also the ones which have already matured and can stand by themselves. Later, a presenter from Google, Mr Dambo Ren, presented a talk on Google cloud projects.

He also pointed out several major services available on the cloud, for example TensorFlow. After that I chose to attend the Scilab software slot. Dr Khatim, an academician, shared his experience using Scilab (an open source software similar to Matlab) in his research and with his students. Later I attended a talk titled "Electronic Document Management System with Open Source Tools".

There, two speakers from Cyber Security Malaysia (an agency within Malaysia’s Ministry of Science and Technology) presented their studies of two open source document management systems: OpenDocMan and LogicalDoc. The evaluation matrices were based on the following elements: ease of access, cost, centralized repository, disaster recovery, and security features. From their observations, LogicalDoc scored higher than OpenDocMan.

Later I attended a talk by Mr Kamarul on his experience using the R language and RStudio for medical research at his university. After the lunch break it was my turn to deliver a workshop. My talk targeted entry-level system administration, in which I shared my experience using tmux/screen, git, AIDE (to monitor file changes on our machines), and Ansible (to automate common tasks as much as possible within the system administration context). I demonstrated the use of Ansible with multiple Linux distros (CentOS and Debian/Ubuntu) to show how Ansible handles a heterogeneous set of distributions from a single command execution. Most of the presented material was shown live during the workshop, but I also created slides to help the audience and the public get the basic ideas of the tools I presented. You can read them here [PDF].

The Third Day (Finale)

On the third day I went to the workshop slot delivered by a speaker under the pseudonym Wak Arianto (not his original name, though). He explained Suricata, a tool whose pattern-matching syntax is almost identical to that of the well-known Snort IDS. Mr Wak explained OS fingerprinting concepts, flowbits, and how to create rules with Suricata. It was an interesting talk, as I could see how to quarantine suspicious files captured from the network (say, possible malware) to a sandbox for further analysis. As far as I understood from the demo and my extra reading, flowbits is a syntax used to grab the state of a session, and Suricata works primarily with TCP for this kind of detection. You can read an article about flowbits here. It’s called flowbits because it does its parsing on TCP flows; we can check the state of the TCP connection (for example, whether it is established), based on the writings here.

I had a chance to listen to the FreeBSD developers’ slot too. We were lucky to have Mr Martin Wilke, who lives in Malaysia and actively advocates FreeBSD to the local community. Together with Mr Muhammad Moinur Rahman, another FreeBSD developer, he presented the FreeBSD development ecosystem and the current state of the operating system.

Possibly we saved the best for last: I attended a Wi-Fi security workshop presented by Mr Matnet and Mr Jep (both pseudonyms). This workshop began with the theoretical foundations of wireless technology and then the development of encryption around it.

The talks were outlined here. The speakers introduced the frame types of the 802.11 protocols, which include Control Frames, Data Frames, and Management Frames. Management Frames are unencrypted, so the attack tools were developed to concentrate on this part.

Management Frames are susceptible to the following attacks:
  • Deauthentication Attacks
  • Beacon Injection Attacks
  • Karma/MANA Wifi Attacks
  • EvilTwin AP Attacks

Matnet and Jep also showed a social engineering tool called WiFi Phisher, which (according to the developer’s page on GitHub) is a "security tool that mounts automated victim-customized phishing attacks against WiFi clients in order to obtain credentials or infect the victims with malwares". It works together with the Evil Twin AP attack: after achieving a man-in-the-middle position, Wifiphisher redirects all HTTP requests to an attacker-controlled phishing page. Matnet told us the safest way to work in a WiFi environment is to use a device that supports 802.11w (which is yet to be widely found, at least in Malaysia). I found some info on 802.11w that could help in understanding this protocol a bit, here.


For me this is the most anticipated annual event, where I can meet professionals from different backgrounds and keep my knowledge up to date with the latest developments in open source tools in the industry. The organizers surely did a good job with this event, and I hope to attend again next year! Thank you for giving me the opportunity to speak at this conference (and for the nice swag too!)

Apart from MOSC I also plan to attend the annual Python Conference (PyCon), which this year is going to be special, as it will be organized at the Asia Pacific (APAC) level. You can read more about PyCon APAC 2017 here (in case you would like to attend this event).

published by (Ben Witten) on 2017-05-22 19:11:00 in the "360" category

End Point Liquid Galaxy will be coming to San Antonio to participate in the GEOINT 2017 Symposium. We are excited to demonstrate our geospatial capabilities on an immersive and panoramic 7-screen Liquid Galaxy system. We will be exhibiting at booth #1012 from June 4-7.

On the Liquid Galaxy, complex data sets can be explored and analyzed in a 3D immersive fly-through environment. Presentations can highlight specific data layers combined with video, 3D models, and browsers for maximum communication efficiency. The end result is a rich, highly immersive, and engaging way to experience your data.

Liquid Galaxy’s extensive capabilities include ArcGIS, Cesium, Google Maps, Google Earth, LIDAR point clouds, realtime data integration, 360° panoramic video, and more. The system always draws huge crowds at conferences; people line up to try out the system for themselves.

End Point has deployed Liquid Galaxy systems around the world, including for many high-profile clients such as Google, NOAA, CBRE, the National Air & Space Museum, Hyundai, and Barclays. Our clients use our content management system to create immersive and interactive presentations that tell engaging stories to their users.

GEOINT is hosted and produced by the United States Geospatial Intelligence Foundation (USGIF). It is the nation’s largest gathering of industry, academia, and government, including the Defense, Intelligence, and Homeland Security communities, as well as commercial, Fed/Civil, and State and Local geospatial intelligence stakeholders.

We look forward to meeting you at booth #1012 at GEOINT. In the meantime, if you have any questions please visit our website or email


published by (Kiel) on 2017-05-22 15:59:00 in the "bash" category

You want your script to run a command only if the elapsed time for a given process is greater than X?

Well, bash does not inherently understand a time comparison like:

    if [ 01:23:45 -gt 00:05:00 ]; then

However, bash can compare timestamps of files using -ot and -nt for "older than" and "newer than", respectively. If the launch of our process includes creation of a PID file, then we are in luck! At the beginning of our loop, we can create a file with a specific age and use that for a quick and simple comparison.

For example, if we only want to take action when the process we care about was launched longer than 24 hours ago, try:

    touch -t $(date --date=yesterday +%Y%m%d%H%M.%S) $STAMPFILE

Then, within your script loop, compare the PID file with the $STAMPFILE, like this:

    if [ $PIDFILE -ot $STAMPFILE ]; then

And of course if you want to be sure you're working with the PID file of a process which is actually responding, you can try sending it signal 0 to check:

    if kill -0 `cat $PIDFILE`; then
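Putting the pieces together, here is a minimal self-contained sketch of the technique. The file paths are simulated with mktemp so it runs anywhere; in a real script $PIDFILE would be your daemon's actual PID file, and the backdating of it below only stands in for a process launched two days ago.

```shell
#!/bin/bash
# Sketch of the stamp-file comparison described above.
STAMPFILE=$(mktemp)
PIDFILE=$(mktemp)

# Give the stamp file a timestamp of exactly 24 hours ago.
touch -t "$(date --date=yesterday +%Y%m%d%H%M.%S)" "$STAMPFILE"

# Simulate a process launched two days ago by backdating its PID file.
touch -t "$(date --date='2 days ago' +%Y%m%d%H%M.%S)" "$PIDFILE"

# -ot is "older than": true when the PID file predates the stamp file,
# i.e. the process has been running for more than 24 hours.
if [ "$PIDFILE" -ot "$STAMPFILE" ]; then
    RESULT="older than 24 hours"
else
    RESULT="newer than 24 hours"
fi
echo "$RESULT"

rm -f "$STAMPFILE" "$PIDFILE"
```

Note that `touch -t` with this timestamp format and `date --date` relative dates are GNU coreutils behavior; the flags differ on BSD and macOS.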


published by (Jon Jensen) on 2017-05-10 04:57:00 in the "ecommerce" category

We do a lot of ecommerce development at End Point. You know the usual flow as a customer: select products, add to the shopping cart, then check out. Checkout asks questions about the buyer, payment, and delivery, at least. Some online sales are for "soft goods", downloadable items that don’t require a delivery address. But much of online selling is still of physical goods to be delivered to an address. For that, a postal code or zip code is usually required.

No postal code?

I say usually because there are some countries that do not use postal codes at all. An ecommerce site that expects to ship products to buyers in one of those countries needs to allow for an empty postal code at checkout time. Otherwise, customers may leave thinking they aren’t welcome there. The more creative among them will make up something to put in there, such as "00000" or "99999" or "NONE".

Someone has helpfully assembled and maintains a machine-readable list (in Ruby, easily convertible to JSON or other formats) of the countries that don’t require a postal code. You may be surprised to see on the list such countries as Hong Kong, Ireland, Panama, Saudi Arabia, and South Africa. Some countries on the list actually do have postal codes but do not require or commonly use them.

    Do you really need the customer's address?

    When selling both downloadable and shipped products, it would be nice not to bother asking the customer for an address at all. Unfortunately, even when there is no shipping address because there's nothing to ship, the billing address is still needed if payment is made by credit card through a normal credit card payment gateway, as opposed to PayPal, Amazon Pay, Venmo, Bitcoin, or other alternative payment methods.

    The credit card Address Verification System (AVS) allows merchants to ask a credit card issuing bank whether the mailing address provided matches the address on file for that credit card. Normally only two parts are checked: (1) the street address numeric part, for example, "123" if "123 Main St." was provided; (2) the zip or postal code, normally only the first 5 digits for US zip codes. AVS often doesn't work at all with non-US banks and postal codes.
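
    As an illustration of what ends up being compared, here is a hypothetical helper (the function name and field handling are mine, not any gateway's API) that extracts the two parts AVS typically checks:

```python
import re

def avs_fields(street: str, postal: str):
    """Extract the two parts AVS typically checks: the leading street
    number, and the first 5 digits of the postal code."""
    m = re.match(r"\s*(\d+)", street)
    street_number = m.group(1) if m else ""
    zip5 = re.sub(r"\D", "", postal)[:5]
    return street_number, zip5
```

    For example, avs_fields("123 Main St.", "10001-1234") yields ("123", "10001"), which is all most US AVS checks ever see of the address.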

    Before sending the address to AVS, validating the format of postal codes is simple for many countries: 5 digits in the US (allowing an optional -nnnn for ZIP+4), and 4 or 5 digits in most other countries; see the Wikipedia List of postal codes in various countries for a high-level view. Canada is slightly more complicated: 6 characters total, alternating a letter followed by a number, formally with a space in the middle, like K1A 0B1, as explained in Wikipedia's components of a Canadian postal code.

    So most countries' postal codes can be validated in software with simple regular expressions, to catch typos such as transpositions and missing or extra characters.
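
    For example, loose checks along these lines catch most typos (sketches only; the Canadian pattern ignores details such as the letters D, F, I, O, Q, and U that real postcodes exclude):

```python
import re

US_ZIP = re.compile(r"^\d{5}(?:-\d{4})?$")                       # 12345 or 12345-6789
CA_POSTCODE = re.compile(r"^[A-Za-z]\d[A-Za-z] ?\d[A-Za-z]\d$")  # e.g. K1A 0B1

def looks_like_postal_code(country: str, code: str) -> bool:
    """A loose format check; not a guarantee the code actually exists."""
    if country == "US":
        return bool(US_ZIP.match(code))
    if country == "CA":
        return bool(CA_POSTCODE.match(code))
    return True  # don't over-reject countries we haven't modeled
```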

    UK postcodes

    The most complicated postal codes I have worked with are the United Kingdom's, because they can be from 5 to 7 characters, with an unpredictable mix of letters and numbers, normally formatted with a space in the middle. The benefit they bring is that they encode a lot of detail about the address, and it's possible to catch transposed character errors that would be missed in a purely numeric postal code. The Wikipedia article Postcodes in the United Kingdom has the gory details.

    It is common to use a regular expression to validate UK postcodes in software, and many of these regexes are to some degree wrong. Most let through many invalid postcodes, and some disallow valid codes.

    We recently had a client get a customer report of a valid UK postcode being rejected during checkout on their ecommerce site. The validation code was using a regex that is widely copied in software in the wild:

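    One widely copied regex of this family, reconstructed here as an assumption rather than quoted from the site's actual code, behaves like this:

```python
import re

# Reconstructed, widely circulated UK postcode regex (an assumption,
# not the site's exact code); note the third character class has no F.
OLD_UK_POSTCODE = re.compile(
    r"^[A-PR-UWYZ0-9][A-HK-Y0-9][AEHMNPRTVXY0-9]?"
    r"[ABEHMNPRVWXY0-9]? {1,2}[0-9][ABD-HJLN-UW-Z]{2}$"
)

print(bool(OLD_UK_POSTCODE.match("SW1A 1AA")))  # True
print(bool(OLD_UK_POSTCODE.match("W1F 0DP")))   # False, though valid
```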

    (This example removes support for the odd exception GIR 0AA for simplicity's sake.)

    The customer's valid postcode that doesn't pass that test was W1F 0DP, in London, which the Royal Mail website confirms is valid. The problem is that the regex above doesn't allow for F in the third position, as that was not valid at the time the regex was written.

    This is one problem with being too strict in validations of this sort: The rules change over time, usually to allow things that once were not allowed. Reusable, maintained software libraries that specialize in UK postal codes can keep up, but there is always lag time between when updates are released and when they're incorporated into production software. And copied or customized regexes will likely stay the way they are until someone runs into a problem.

    The ecommerce site in question is running on the Interchange ecommerce platform, which is based on Perl, so the most natural place to look for an updated validation routine is on CPAN, the Perl network of open source library code. There we find the nice module Geo::UK::Postcode which has a more current validation routine and a nice interface. It also has a function to format a UK postcode in the canonical way, capitalized (easy) and with the space in the correct place (less easy).

    It also presents us with a new decision: Should we use the basic "valid" test, or the "strict" one? This is where it gets a little trickier. The "valid" check uses a regex validation approach that will still let through some invalid postcodes, because it doesn't know what all the current valid delivery destinations are. The module's "strict" check instead uses a comprehensive list of all the "outcode" data, which, as you can see if you look at that source code, is extensive.

    The bulkiness of that list, and its short shelf life (the likelihood that it will become outdated and reject a future valid postcode), make strict validation checks like this of questionable value for basic ecommerce needs. Often it is better to let a few invalid postcodes through now so that future valid ones will also be allowed.

    The ecommerce site I mentioned also does in-browser validation via JavaScript before ever submitting the order to the server. Loading a huge list of valid outcodes would waste a lot of bandwidth and slow down checkout loading, especially on mobile devices. So a more lax regex check there is a good choice.
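
    A deliberately lax shape check of the kind described might look like the following (sketched in Python for brevity; the same pattern works as-is in JavaScript):

```python
import re

# Lax shape check: 1-2 letters, a digit, an optional alphanumeric,
# an optional space, then a digit and two letters. It lets valid UK
# postcodes through, along with some invalid ones, by design.
LAX_UK_POSTCODE = re.compile(r"^[A-Za-z]{1,2}\d[A-Za-z\d]? ?\d[A-Za-z]{2}$")

def plausible_uk_postcode(code: str) -> bool:
    return bool(LAX_UK_POSTCODE.match(code))
```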

    When Christmas comes

    There's no Christmas gift of a single UK postal code validation solution for all needs, but there are some fun trivia notes in the Wikipedia page covering Non-geographic postal codes:

    A fictional address is used by UK Royal Mail for letters to Santa Claus:

    Santa's Grotto
    Reindeerland XM4 5HQ

    Previously, the postcode SAN TA1 was used.

    In Finland the special postal code 99999 is for Korvatunturi, the place where Santa Claus (Joulupukki in Finnish) is said to live, although mail is delivered to the Santa Claus Village in Rovaniemi.

    In Canada the amount of mail sent to Santa Claus increased every Christmas, up to the point that Canada Post decided to start an official Santa Claus letter-response program in 1983. Approximately one million letters come in to Santa Claus each Christmas, including from outside of Canada, and they are answered in the same languages in which they are written. Canada Post introduced a special address for mail to Santa Claus, complete with its own postal code:

    Santa Claus
    North Pole H0H 0H0

    In Belgium bpost sends a small present to children who have written a letter to Sinterklaas. They can use the non-geographic postal code 0612, which refers to the date Sinterklaas is celebrated (6 December), although a fictional town, street and house number are also used. In Dutch, the address is:

    Spanjestraat 1
    0612 Hemel

    This translates as "1 Spain Street, 0612 Heaven". In French, the street is called "Paradise Street":

    Rue du Paradis 1
    0612 Ciel

    That UK postcode for Santa doesn't validate in some of the regexes, but the simpler Finnish, Canadian, and Belgian ones do, so if you want to order something online for Santa, you may want to choose one of those countries for delivery. :)


    published by (Matt Galvin) on 2017-05-04 13:00:00 in the "training" category

    This blog post is for people like me who are interested in improving their knowledge about computers, software, and technology in general, but are inundated with an abundance of resources and no clear path to follow. Many online courses tend not to have any real structure. While it's great that this knowledge is available to anyone with access to the internet, it often feels overwhelming and confusing. I always enjoy a little more structure to my study, much like in a traditional college setting. So, to that end, I began to look at MIT's OpenCourseWare and compare it to their actual curriculum.

    I'd like to begin by acknowledging that some time ago Scott Young completed the MIT Challenge, where he "attempted to learn MIT's 4-year computer science curriculum without taking classes". My friend Najmi here at End Point also shared a great website with me to "Teach Yourself Computer Science". So, this is not the first post to try to make sense of all the free resources available to you; it's just one which tries to help organize a coherent plan of study.


    I wanted to mimic MIT's real CS curriculum. I also wanted to limit my studies to Computer Science only, stripping out anything not strictly related. It's not that I am uninterested in things like speech classes or more advanced mathematics and physics, but I wanted to be pragmatic about the amount of time I have each week to put into study outside of my normal (very busy) work week. I imagine anyone reading this would understand and very likely agree.

    I examined MIT's course catalog. They have 4 undergraduate programs in the Department of Electrical Engineering and Computer Science:

    • 6-1 program: Leads to the Bachelor of Science in Electrical Science and Engineering.
    • 6-2 program: Leads to the Bachelor of Science in Electrical Engineering and Computer Science, for those whose interests cross this traditional boundary.
    • 6-3 program: Leads to the Bachelor of Science in Computer Science and Engineering.
    • 6-7 program: For students specializing in computer science and molecular biology.

    Because I wanted to stick with what I believed would be most practical for my work at End Point, I selected the 6-3 program. With my intended program selected, I also decided that the full course load for a bachelor's degree was not really what I was interested in. Instead, I just wanted to focus on the computer science related courses (with maybe some math and physics only if needed to understand any of the computer courses).

    So, looking at the requirements, I began to determine which classes I'd require. Once I had this, I could then begin to search the MIT OpenCourseWare site to ensure the classes are offered, or find suitable alternatives on Coursera or Udemy. As is typical, there are General Requirements and Departmental Requirements. So, beginning with the General Institute Requirements, let's start designing a computer science program with all the fat (non-computer science) cut out.

    General Requirements:

    I removed that which was not computer science related. As I mentioned, I was aware I may need to add some math/science. So, for the time being this left me with:

    Notice that it says

    one subject can be satisfied by 6.004 and 6.042[J] (if taken under joint number 18.062[J]) in the Department Program

    It was unclear to me what "if taken under joint number 18.062[J]" meant, and I could not find clarification, but as will be shown later, 6.004 and 6.042[J] are in the departmental requirements, so let's commit to taking those two, which leaves the requirement of one more REST course. After some Googling I found the list of REST courses here. So, if you're reading this to design your own program, please remember that later we will commit to 6.004 and 6.042[J], and go here to select a course.

    So, now on to the General Institute Requirements Laboratory Requirement. We only need to choose one of three:

    • 6.01: Introduction to EECS via Robot Sensing, Software and Control
    • 6.02: Introduction to EECS via Communications Networks
    • 6.03: Introduction to EECS via Medical Technology

    So, to summarize the general requirements we will take 4 courses:

    Major (Computer Science) Requirements:

    In keeping with the idea that we want to remove non-essential and non-CS courses, let's remove the speech class. So here we have a nice summary of what we discovered above in the General Requirements, along with details of the computer science major requirements:

    As stated, let's look at the list of Advanced Undergraduate Subjects and Independent Inquiry Subjects so that we may select one from each of them:

    Lastly, it's stated that we must

    Select one subject from the departmental list of EECS subjects

    A link is provided to do so; however, it brings you here, and I cannot find a list of courses. I believe that this link no longer takes you to the intended location. A Google search brought up a similar page, but with a list of courses, as can be seen here. So, I will pick one from that page.

    The next step was to find the associated courses on MIT OpenCourseWare.

    Sample List of Classes

    So, now you will be able to follow the links I provided above to select your classes. I was not always able to find courses that matched by exact name and/or course number. Sometimes I had to read the description and look through several courses which seemed similar. I will provide my own list in case you'd just like to use mine:


    So there you have it, please feel free to comment with any of your favorite resources.