
published by Eugenia on 2017-05-28 18:44:11 in the "Metaphysics" category
Eugenia Loli-Queru

I saw a UFO when I was 17 years old; I’ve written about that before on this blog and elsewhere.

What I have never written about publicly before is that in the fall of 2000 (it must have been mid-October), when I was still living in Guildford, UK, I had a weird nighttime incident that resembled an alien abduction story. I have no full memory of it apart from its initial stage; I never attempted hypnosis about it. It created a lot of fear in me, and I could not sleep the subsequent nights, until I finally talked to a monk about it a month later and he helped me overcome it.

Anyway, that’s when my curiosity about the whole UFO thing was really piqued. Through the years I only left trails about it on my blog, never openly making it a big deal, and only speaking about it in “what-ifs.” But I feel that now I can talk about it openly, because I have overcome the fear of the situation. I believe that I now see the whole alien agenda a bit more objectively, rather than through a shower of fear. FYI, apart from these two occasions, I’ve never had an alien-related conscious memory, not before and not after.

Below is my personal opinion as to what might be happening with these aliens and abductions, in the form of Q&A.

Grey alien

Q: So, you’re saying that “they’re here”?

A: Yes, they’re here, and they’ve been here for a long time already.

Think of it this way: if NONE of the other civilizations had reached us in all of these millions of years, it would mean that we would never reach any of them either. Simple statistical logic. That would make this universe of ours simply not worth living in, as it would be a lonely, sad place. A literal death trap with nowhere to go.

But I don’t believe that the Universe is such a thing. I believe that it’s teeming with life, and some of that life has managed to make the big jump. And while we might be a passing curiosity for many of these species, it seems that the ones that look the most like us (humanoids like the Greys, in our case) have probably stuck around for a while. We would do the same thing, after all.

Q: What are they doing here?

A: They’re preparing the replacement of homo sapiens with a hybrid species that looks human, but thinks like a Grey.

Q: Wait, whaaat? Why?

A: Because homo sapiens has run its course. We have managed to acquire great technology, but we can’t use it wisely. The fact that two atomic bombs were dropped on other human beings, and that nuclear testing went on for so many years, shows that homo sapiens is capable of creating, but not of wisely managing, its creations. The average human isn’t capable of thinking on a macro scale, and our politics and even our businesses aren’t designed to plan more than 2-3 years into the future (generally speaking).

The way we have destroyed our environment and other species on our planet is another cue. I mean, it’s rather telling that abductees have been told to “take care of the environment” as far back as the 1970s, right after all the invasive medical procedures. First they harvest your eggs or sperm, and then they tell you to “please don’t litter”? What’s up with that? Obviously, they want the planet intact. They’re not destroyers. They’re a cleanup crew, with respect for all life. “Fixing” us would ensure that other species on our planet could go on too, without the ever-present human danger.

Q: How are they going to replace us?

A: Using a hybrid program. Several levels of hybridization have been reported. The latest reports speak of hybrids that contain a large part of the Grey intellect (complete with sharp logic, control of emotions, and telepathy) in an externally human-looking body. These upgrades are encoded in the hybrid’s DNA, but not switched ON yet. Hence the so-called abduction phenomenon, and why they mostly abduct people from the same families and lineages — because these carry the DNA they are interested in working with.

Oh, don’t be so sad about it. I’m sure you didn’t bat an eye when you read about the demise of Neanderthal man. Life moved on. And I’m sure you’re OK with the hundreds of breeds of cats, dogs, cows, chickens, trees, vegetables, and everything in between, all re-made by man. So why should THEY feel guilty doing the same to us? Homo sapiens’ arrogance, double standards, and hypocrisy are without end on this issue.

Q: Wouldn’t it be simpler to just militarily invade us then, instead of this elaborate conspiratorial plan?

A: No. They’re not interested in our resources, and they’re not interested in killing us. In fact, they need us. Water and minerals can easily be extracted from dead planets or asteroids given the right technology — technology they already have. What’s in short supply instead is compatible consciousness itself.

To understand why they would prefer to upgrade us, instead of taking us out or invading us, you must view the whole thing from a higher perspective. A civilization that has managed to survive for a long time understands evolution and the process of life better than toddler civilizations like ours. With understanding comes respect. With respect, they allow evolution to take its course. And when there are hiccups (like our suicidal civilization), they offer help. That’s their brand of help.

Other civilizations might help differently. For example, the so-called Pleiadians advise us to sit on our asses, meditate all day, and go vegan. Find our own way via enlightenment. Obviously, the Greys are a bit more hands-on and practical than spewing hippie shit.

Q: Why don’t they just tell us what to do instead of replacing us?

A: Because they know our mental limitations. If your selfish child-self suddenly had to move in with a different family that you share no familial bond with, and they told you to make a perfect bed, clean your room daily, brush your teeth, walk the dog twice a day, and take on a number of household chores, what would the result be? Resentment, that’s what. You’d grow to hate them for not letting you “be free”: wasteful, lazy, thoughtless, etc. So the main new feature of the replacement species is meant to be responsibility, so that such “chores” are no longer seen as chores, but as necessary acts to keep the planet tidy.

Q: So you’re saying that they’re doing it out of the goodness of their hearts?

A: Yes and no. They definitely get something out of it too; let’s not be naive here. This is beneficial both ways, not just for us. My understanding is that this is how they propagate their species and civilization: by breeding with compatible, local alpha species, which gives them the ability to actually live in the various planetary environments. It’s, in fact, the smart way of “invading.”

Q: So, they ARE invading!

A: In a way, yes, they are. But in another way, they’re not. If they wanted to just plainly invade and take us over, they’d have done so thousands of years ago (apparently, according to reports, they found us a long time ago). Instead, they chose to act, and were ALLOWED to do so, only after the human species reached the end of its rope evolutionarily speaking and stupidly started using nuclear weapons. In fact, the program kickstarted in a serious manner exactly then. That timing is no coincidence, and it’s the biggest clue as to “why” and “why now.”

Q: Who allowed them to do so?

A: If we have a United Nations and a NATO, rest assured there’s something equivalent on a galactic scale. The Grey program seems to be at least one of the programs that are active on our planet. Some other alien factions have expressed support for the Grey program, and others dismay. Just like in our UN, nobody can agree on everything, but probably majority rules (or it might be a hierarchical system, who knows).

Q: Why aren’t other alien species revealing themselves?

A: Because no one wants to deal with children who just found the box of matches. There is a type of prime directive, but unlike Star Trek’s, it’s not about the invention of a warp drive. Instead, that prime directive is a requirement of undeniable proof that the young species isn’t suicidal. Because if they are suicidal, they could be equally hostile. So, they would never show up on our doorstep and introduce themselves; they have to protect their own. You don’t go knock on the door and introduce yourself & your kids to the registered sex offender next door, now do you? If a species successfully goes through that crucial phase of its evolutionary development (I’m guessing most civilizations do), and it irons out any destructive inklings, then they reveal themselves.

Since we FAILED to do so (with the nuclear bombings and wars), “help” was sanctioned and approved.

Q: Does our government know about this?

A: Of course they do, at least since the Roswell crash in 1947, or maybe a tiny bit earlier (the so-called “foo fighters,” an early name for flying saucers, were reported by both Allied and Axis pilots). Although there’s no clear proof of it, Eisenhower is said to be the first and only president who saw these beings in the flesh, in 1954, and to have inked a deal after the Washington, DC UFO incidents of 1952. At some level, the government cooperated at least up to the 1970s — then the information gets a bit murky, as the deal supposedly fell through after the government realized the extent of the abduction phenomenon.

Q: Why don’t they disclose it?

A: They can’t. Partly because the aliens don’t want them to, but also because it makes absolutely no sense to disclose it. And no, it’s NOT because, as many people have claimed, it would destroy our socio-economic system and our religious beliefs. That is hogwash; humans are ready for an alien life announcement from that point of view. The REAL danger is that they CAN’T adequately explain to citizens that homo sapiens will eventually be replaced with a hybrid version, and that people get abducted against their will to that end. That’s simply something that you CAN’T tell the average Joe, because their intellect would only go as far as: INVASION (for liberals), or DEMONS FROM HELL (for conservatives). And while it is a TYPE of invasion (from one point of view), it’s one that’s BENEFICIAL to, and NEEDED by, us. But humans are so RACIST that they would never see it like that. They’ll see it as WAR. And that’s a war that no Earth government can win. So there’s *absolutely nothing* to tell the public; just express denial and laugh it off.

I personally agree with the US government, and the aliens, that disclosure SHOULD NOT happen (nor do I believe it will). Homo sapiens is simply too stupid to see the bigger picture in an objective manner. That’s why it’s getting replaced in the first place! Duh!

Q: Are you a government misinformation shill?

A: No. I’m just an artist. One with lots of imagination. And rather good analytical skills, if I may say so myself.

Q: Why can’t our government fight them at all?

A: Let’s replace “government” with CIA/NSA + military. The “government” as you know it is inadequate to act on such issues, so it has been kept in the dark (correctly or not) since JFK’s time.

A number of previously top-secret documents about the CIA experimenting with LSD and remote viewing are now public. The public somehow thinks that they experimented with these in order to achieve “mind control.” That is a completely false theory, since no one on LSD can be “controlled” in any way. One of the reasons the CIA explored these was to understand and access parallel realities, and to access information vital to counter-intelligence (which ended in failure, since the aliens have technology that can block junior consciousnesses like ours from accessing their ships).

You see, most of the abductions don’t even happen in our physical realm. They usually happen in our neighboring “sheet of reality”/dimension (the “astral world”). Even in the event that they appear physically, they often abduct only the etheric body and not the physical one (despite erroneous reports by abductees who can’t tell the difference). Hence the little-to-no proof of abduction, the fact that we have no memory of it (due to the altered state), and the foggy, out-of-focus pictures of UFOs. They can operate in our reality, but I postulate that they mostly work in a neighboring reality, which humans can enter only in an altered state (hence the memory loss). Impregnations and implants can be performed on the etheric body and still influence the physical one.

I Want to Believe

Q: Wait, you lost me. Don’t mix New Age crap in this please. At least, keep it sci-fi.

A: Sorry to rain on your parade, but in order to understand how these guys got here in the first place (it’s easier to punch through realities than to break the speed of light), or how they operate, some of these “New Age” beliefs about dimensions, higher selves, and souls must be mixed into the whole story.

This is where I had my biggest blockage in accepting the whole story for years too. Whenever someone mixed New Age “spirituality” into all this, I hated it and shut it down, because I was an atheist (I still am, but of a different kind). I believed that aliens would simply be from another planet, and that’s about it.

Only when I started studying psychedelics did I realize that what we call “spiritual” is nothing but science we haven’t understood yet. There’s nothing spiritual about souls, “guardian angels,” or “negative entities.” These are just naive words we use because we don’t understand the nature of reality at that level yet. Eventually, we could understand how a soul is just a piece of consciousness that derives from a larger piece; how spirit guides are nothing but evolved entities that help junior entities on their own path of evolution; and how dark entities are just entities who evolved to not give a shit about anything, so we simply perceive them as dark, because our toddler brain has only evolved as far as interpreting danger at a rudimentary level, and it doesn’t always see things the way they truly are (meaning, what we might perceive as a “demon” might physically and emotionally look very different to a more advanced brain or entity).

Q: So you’re saying we’re not dead… after death?

A: Our ego/persona dies (because it was just a construct to keep us alive, and not fundamental), but our consciousness continues its evolution in another vessel/body. This is why the Greys themselves refer to physical bodies as “containers.” Evolution happens both at the consciousness level and at the physical level. Absolutely nothing stays the same (physical or immaterial), and as everything changes, it strives for novelty. Hence “evolution.” As for “God,” it’s nothing but the sum of everything manifested and unmanifested, trying to understand what it itself is. In absolute reality, all is one; there’s no separation. But these sparks of consciousness are needed in order for it to understand itself “from the inside” (since there’s nothing outside the Everything). “God” is not a guy, and has no persona. It’s simply pure Being. It doesn’t care whether you pray to it or not, either. You’re in it, and you’re it.

Q: OK… so the Greys know all that “religious” stuff, but they approach it scientifically instead?

A: Yes. And they’re, in fact, in contact with these higher entities (seen the size of their heads lately? these guys can reach them via meditation easily, I reckon). There have been reports of actual souls (which look like balls of light) entering abducted pregnant women, to give life to their babies. Greys have also shown future probabilistic scenes, past-life information, and other such things that we would naively call “spiritual.” It’s just crazy-advanced science, that’s all.

Very telling are the constant reports of abductees pleading to be left alone, and the Greys coldly replying: “we have every right, you have already agreed to this prior to coming to this life.” And they’re probably right.

Q: Are they demons? Do they steal souls?

A: Erm… no… these are religious stupidities that humans believe. They just want to propagate in the cosmos, just like all living species do. Merging with native species that easily survive without cloning in their natural environment (which the Greys aren’t adapted to) is their way of propagating and evolving at a fast pace. It’s rather brilliant, actually.

Q: So… are they good, or are they bad?

A: They’re rather neutral. They are doing us a favor, they’re doing themselves a favor, and at the same time they’re inflicting pain anyway (ask the mutilated cows if they agreed to that prior to coming to this life — har, har). At the end of the day, it’s a matter of point of view. If you focus on the positives, they’re good; if you focus on the negatives, they’re bad. The current schism in Ufology, with half the people claiming that they’re good and the other half claiming they’re bad, is kinda laughable actually. Both sides are short-sighted.

Q: What’s their end game?

A: Their plan is long-term, but they also prepare for the worst, in case they have to act quickly. Basically, I believe that in their best-case scenario, the switch will happen without anyone realizing that it did. Hybrid abductees from old lineages of hybrids would just blend in, make babies, and at some point gradually get “switched on” by the Greys to make use of their higher functions (e.g. previously blocked memories, extra scientific knowledge, telepathy, etc). By that time, there would be a lot of them, enough to take over the planet slowly but surely (some ufologists say that up to 5% of the population today has been abducted at some point in their life).

The worst-case scenario is if a world war with nuclear potential starts soon, or if the governments go on the offensive. In that case, these abductees/hybrids would have to be switched on early, to try to tame the situation and take the upper hand. In fact, abductees have been shown VR scenarios where they have to play a role in a chaotic situation. That was preparation, just in case.

Obviously, the Greys hope for the first case, where the transition happens smoothly (as has probably happened in our distant past too), without the population realizing it or viewing it as an invasion. But they also prepare for the alternative.

Q: What about crop circles?

A: I just see them as teasers. At least the ones that are authentic, and not made by prankster humans.

Q: Who are the Men in Black?

A: Could be anyone. Human agents, hybrids, or Greys “wearing” a human cloned body. Depends on the mission, I guess.

Q: What about the Reptilians, Nazis on the moon, secret US bases on Mars?

A: These conspiracies are the silliest ever. “The Queen of England and most politicians are Reptilians,” they say. Oh, really? If advanced alien Reptilians had taken over Trump’s body, then Trump would have ceased to be such a buffoon the day he stepped into the White House. That’s proof right there that these conspiracies are stupid beyond belief.

I don’t doubt that two souls can be housed in a single container, but I don’t think that this would be allowed without a pre-agreement. I’d consider it to be a rarity.

Q: Ok now… surely you don’t believe that Greys exist, right?

A: I’m certain they do. People have seen them not only consciously, or via hypnosis, but also via meditation and psychedelics.

Breakthrough psychedelic experiences are not hallucinations per se. Psychedelics turn off the brain’s filter (which helps us navigate and survive our reality), and so we experience reality in a rough, unfiltered way (hence the colors and fractals). When entering other concrete neighboring realities via psychedelics, we can see everything in crisp detail (and that’s how Greys have been seen). The “higher” we go in terms of dimensions, the more abstract things become (because they’re further from our set-in-stone reality). In these higher dimensions, our brain can’t comprehend the non-Euclidean math found there; a great article from a Stanford researcher on that subject can be read here.

Q: It must just be sleep paralysis. They can’t be real.

A: What’s “real”? Our reality is only real to us because our senses tell us so, while our brain filters out anything we don’t need to survive. In an altered state, by definition, you tune in to another reality, because you shift your senses elsewhere.

As for sleep paralysis, that has always been such a cop-out explanation that it’s not even funny anymore. What about all the people who were not sleeping when abducted, or when seeing UFOs, often in a mass sighting?

I’ll tell you this: of ALL the other supernatural conspiracies out there (e.g. Bigfoot, the Loch Ness monster, the chupacabra, ghosts, etc), the UFO conspiracy is the only one that has both survived the test of time AND is backed up by testimonies of credible people: from cops on the job, to ex-military personnel, to trained pilots, and of course astronauts who have gone public (like Buzz Aldrin), and a US president as a witness, Jimmy Carter. Google it.

Q: Can I do something about all that?

A: No, nothing. Whatever’s going to happen is going to happen. Who cares, really? I mean, even if we get reincarnated here, we’d still have bodies to inhabit, only upgraded ones. I find that a positive thing. If you take the racist approach of the “pure-blood human,” you will be disappointed: we’ve probably never been pure-blooded, and anyway, only your ego would take such sides of “us and them.” A higher self, which can reincarnate into anything, doesn’t give a shit if its next body is a human, a Grey, a human-Grey hybrid, or an ant.

So live your life in peace and happiness while you’re still human, and you have the gift of thought to appreciate creation.



published by noreply@blogger.com (Peter Hankiewicz) on 2017-05-27 22:00:00 in the "browsers" category

The last week was really interesting for me. I attended infoShare 2017, the biggest tech conference in central-eastern Europe. The agenda was impressive, but that's not everything. There was a startup competition going on, and really, I'm totally impressed.

infoShare in numbers:

  • 5500 attendees
  • 133 speakers
  • 250 startups
  • 122 hours of speeches
  • 12 side events
Let's go through each speech I attended.

Day 1

Why Fast Matters by Harry Roberts from csswizardry.com

Harry tried to convince us that performance is important.


Great speech, showing that it's an interesting problem, and not only from a financial point of view. You must see it; here's a link to his presentation: https://speakerdeck.com/csswizardry/why-fast-matters

Dirty Little Tricks From The Dark Corners of Front-End by Vitaly Friedman from smashingmagazine.com

It was magic! I work a lot with CSS, but this speech showed me some new ideas and reminded me that the simplest solution is usually not the best solution, and that we should reuse CSS between components as much as possible.

Keep it DRY!

One of these tricks is a quantity query CSS selector. It's a pretty complex selector that can apply your styles to elements based on the number of siblings. (http://quantityqueries.com/)

The Art of Debugging (browsers) by Remy Sharp

It was great to watch another developer and see his workflow during debugging. I usually work from home, and it's not easy to do that in my case.

Remy is a very experienced JavaScript developer and showed us his skills and tricks, including an especially interesting Chrome developer console integration.

I always thought that using the developer console for programming was not the best idea, but maybe I was wrong? It looked pretty neat.

Desktop Apps with JavaScript by Felix Rieseberg from Slack

Felix from Slack presented and showed the power of hybrid desktop apps. He used a framework called Electron. Using Electron you can build native, cross-platform desktop apps using HTML, JavaScript, and CSS. I don't think that it's the best approach for more complex applications, and it probably takes more system memory than fully native applications, but for simpler apps it can be the way to go!

GitHub uses it to build their desktop app, so maybe it's not so slow? :)

RxJava in existing projects by Tomasz Nurkiewicz from Allegro

Tomasz Nurkiewicz from Allegro showed us his impressive programming skills and provided some practical RxJava examples. RxJava is a library for composing asynchronous and event-based programs using observable sequences, for the Java VM.

Definitely something to read about.
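RxJava itself is Java, but the core idea behind observable sequences (values pushed to subscribers through a chain of operators) can be sketched in a few lines of plain JavaScript. This is only an illustration of the concept; the names below are made up and are not the RxJava API:

```javascript
// A toy "observable": a wrapper around a sequence of values that
// supports operator chaining (map/filter) and pushes results to a
// subscriber. Real RxJava adds async scheduling, backpressure, etc.
function observable(values) {
  return {
    map(fn) { return observable(values.map(fn)); },
    filter(fn) { return observable(values.filter(fn)); },
    subscribe(onNext) { values.forEach(onNext); },
  };
}

const results = [];
observable([1, 2, 3, 4])
  .filter(n => n % 2 === 0) // keep even numbers
  .map(n => n * 10)         // transform each value
  .subscribe(n => results.push(n));

console.log(results); // [ 20, 40 ]
```

In RxJava 2 the equivalent chain is spelled with the same operator names (`filter`, `map`, `subscribe`) on an `Observable`.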

Day 2

What does a production ready Kubernetes application look like? by Carter Morgan from Google

Carter Morgan from Google showed us practical uses of Kubernetes.

Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. It was originally designed by Google developers, and I think that they really want to popularize it. It looked like Kubernetes has a low learning curve, but the devops agents I spoke to after the presentation were sceptical, saying that if you know how to use Docker Swarm then you don't really need Kubernetes.
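For a flavor of what such configuration looks like, here is a minimal Kubernetes Deployment manifest. The names and image are illustrative only; a production-ready object would additionally declare resource limits, liveness/readiness probes, and so on:

```yaml
# Minimal Kubernetes Deployment: run 3 replicas of one container image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # Kubernetes keeps this many pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web         # must match the selector above
    spec:
      containers:
        - name: web
          image: nginx:latest   # any container image
          ports:
            - containerPort: 80
```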

Vue.js and Service Workers become Realtime by Blake Newman from Sainsbury's

Blake Newman is a JavaScript developer and a member of the core team of Vue.js (a trending, hot JavaScript framework). He explained how to use Vue.js with service workers.

Service workers are scripts that your browser runs in the background. It was nice to see how it all fits together, even though service workers are not yet supported by every popular browser.
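A service worker can't run outside the browser, but the cache-first strategy its fetch handler typically implements is easy to sketch in plain JavaScript. Here a `Map` and a stub function stand in for the browser's Cache API and a real network request (both are illustrative, not the real API):

```javascript
// Cache-first fetch strategy, as a service worker's fetch handler
// would implement it. `cache` stands in for the Cache API, and
// `fetchFromNetwork` for a real network request.
const cache = new Map();

function fetchFromNetwork(url) {
  return `response for ${url}`; // hypothetical network fetch
}

function handleFetch(url) {
  if (cache.has(url)) {
    return cache.get(url);      // serve from cache: fast, works offline
  }
  const response = fetchFromNetwork(url);
  cache.set(url, response);     // populate the cache for next time
  return response;
}

handleFetch('/app.js');               // first hit goes to the network
console.log(handleFetch('/app.js')); // second hit is served from cache
```

In a real service worker this logic lives in a `fetch` event listener, and the page opts in with `navigator.serviceWorker.register(...)`.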


Listen to your application and sleep by Gianluca Arbezzano from InfluxData

Gianluca showed us his modern and flexible monitoring stack. Great tips, mostly discussing and recommending InfluxDB and Telegraf; we use them a lot at End Point.

He was right that it's easy to configure, open source, and really useful. Great speech!

Summary

An amazing two days. All the presentations will be available on YouTube soon: https://www.youtube.com/user/infoSharePL/videos.

I can fully recommend this conference, see you next time!



published by noreply@blogger.com (Peter Hankiewicz) on 2017-05-26 22:00:00 in the "backend" category

At End Point, we have had the pleasure of being part of multiple Drupal 6, 7, and 8 projects. Most of our clients wanted to use the latest Drupal version, to have a stable, long-term-support platform.

A few years ago, I already had a lot of experience with PHP itself and various other PHP-based systems like WordPress, Joomla, and TYPO3. I was happy to use all of them, but then one of our clients asked us for a simple Drupal 6 task. That's how I started my Drupal journey, which continues to this day.

To be honest, I had a difficult start; it was different, new, and pretty inscrutable to me. After a few days of reading documentation and playing with the system, I was ready to do some simple work. Here, I want to share my thoughts about Drupal and tell you why I LOVE it!

Low learning curve

It took, of course, a few months before I was ready to build something more complex, but it really takes only a few days to be ready for simple development. And it's not only Drupal but also PHP itself that makes a project much cheaper to maintain and extend. Maybe that's not so important for smaller projects, but it's definitely important for massive code bases. Programmers can jump in and start being productive really quickly.

Great documentation

The Drupal documentation is well structured and constantly developed; you can usually find what you need within a few minutes. That's critical, a must-have for any framework, and unfortunately not so common.

Big community

The Drupal community is one of the biggest IT communities I have ever encountered. They extend, fix, and document the Drupal core regularly. Most of them have other jobs and work on this project just for fun and with passion.

It?s free

It's an open source project, and that's one of the biggest pros here. You can get it for free, you can get support for free, and you can join the community for free too (:)).

Modules

On the official Drupal website you can find tons of free plugins/modules. It's a time and money saver: you don't need to reinvent the wheel for every new widget on your website, and you can focus on the fireworks instead.

Usually you can just go there and find a proper component. E-commerce shop? Slideshow? Online classifieds website? No problem! It's all there.

PHP7 support

I often hear from other developers that PHP is slow. Well, it's not the Road Runner, but come on: unless you are Facebook (and I think that they, correct me if I'm wrong, still use PHP :)), it's just OK to use PHP.

Drupal fully supports PHP7.

With PHP7 it's much faster, better, and safer. To learn more: https://pages.zend.com/rs/zendtechnologies/images/PHP7-Performance%20Infographic.pdf.

In the infographic you can see that PHP7 is much faster than Ruby, Perl, and Python at rendering a Mandelbrot fractal. In general, you definitely can't say that PHP is slow, and the same goes for Drupal.

REST API support

Drupal has a built-in, ready-to-use API system. In a few moments you can spawn a new API endpoint for your application. You don't need to implement a whole API by yourself; I have done that a few times in multiple languages, and believe me, it's problematic.
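As a sketch of what consuming such an endpoint looks like from JavaScript: Drupal 8 serializes node fields as arrays of `{value}` objects, roughly like the hypothetical payload below (the field values and the helper name are made up for illustration):

```javascript
// Hypothetical JSON payload, mimicking how a Drupal 8 REST endpoint
// serializes a node: each field is an array of { value } objects.
const payload = {
  nid: [{ value: 42 }],
  title: [{ value: 'Hello from Drupal' }],
  body: [{ value: '<p>Article text</p>' }],
};

// Flatten Drupal's field arrays into a plain object for the frontend.
function simplifyNode(node) {
  const out = {};
  for (const [field, items] of Object.entries(node)) {
    out[field] = items[0] ? items[0].value : null;
  }
  return out;
}

const article = simplifyNode(payload);
console.log(article.title); // 'Hello from Drupal'
```

In a real app the payload would come from a `fetch()` call against the Drupal site's REST route instead of being hardcoded.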

Perfect for a backend system

Drupal is a perfect candidate for a backend system. Let's imagine that you want to build a beautiful mobile application. You want to let editors and other people edit content. You want to grab this content through the API. It's easy as pie with Drupal.

Drupal's web interface is stable and easy to use.

Power of taxonomies

Taxonomies are, basically, just dictionaries. The best thing about taxonomies is that you don't need to touch code to play with them.

Let's say that on your website you want to create a list of states in the USA. Using most frameworks, you would need to ask your developer/technical person to do so. With taxonomies you just need a few clicks, and that's it: you can put it on your website. That's sweet, not only for non-technical people, but for us developers as well. Again, you can focus on actually making the website attractive, rather than spending time on things that can be automated.
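Conceptually, the data model behind a taxonomy really is that simple. A rough JavaScript sketch (all names illustrative; Drupal manages the equivalent through its admin UI):

```javascript
// A taxonomy is essentially a named dictionary of terms...
const vocabularies = {
  us_states: ['Alabama', 'Alaska', 'Arizona'], // trimmed for brevity
  topics: ['backend', 'browsers', 'Malaysia'],
};

// ...which content then references by term.
const post = { title: 'Why I love Drupal', topics: ['backend'] };

// Listing all content tagged with a term is then a simple filter.
function articlesWithTerm(articles, vocabulary, term) {
  if (!vocabularies[vocabulary].includes(term)) return [];
  return articles.filter(a => (a[vocabulary] || []).includes(term));
}

console.log(articlesWithTerm([post], 'topics', 'backend').length); // 1
```

Editing the term list in the UI changes `vocabularies` without touching any of this logic, which is exactly the point.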

Summary

Of course, Drupal is not perfect, but it's undeniably a great tool. A mobile application, a single-page application, a corporate website: there are no limits for this content management system. In my opinion it is actually the best tool to manage your content, and that does not mean that you need to use Drupal to present it. You can create a mobile, ReactJS, AngularJS, or VueJS application and combine it with Drupal easily.

I hope that you've enjoyed the read, and I wish to hear back from you! Thanks.



published by noreply@blogger.com (Muhammad Najmi Ahmad Zabidi) on 2017-05-25 12:59:00 in the "Malaysia" category
The three-day Malaysia Open Source Conference (MOSC) ended last week. MOSC is an open source conference held annually, and this year it reached its 10-year anniversary. I managed to attend the conference with a selective focus on presentations related to system administration, computer security, and web application development.

The First Day

The first day's talks were occupied by keynotes from the conference sponsors and major IT brands. After the opening speech and a lightning talk from the community, Mr Julian Gordon delivered his speech regarding the Hyperledger project, a ledger based on blockchain technology. Later Mr Sanjay delivered his speech on open source implementations in the financial sector in Malaysia. Before the lunch break we then listened to Mr Jay Swaminathan from Microsoft, who presented his talk on an Azure-based service for blockchain technology.




For the afternoon of the first day I attended a talk by Mr Shak Hassan on Electron-based application development. You can read his slides here. I personally use an Electron-based application for Zulip, so as a non-web developer I already had a mental picture of what Electron is prior to the talk, but the speaker's session taught me more about what happens in the background of such an application. Finally, before heading back for the day, I attended a slot delivered by Intel Corp on the Yocto Project, with which we can automate the process of creating a bootable Linux image for any platform, whether Intel x86/x86_64 or ARM based.



The Second Day

The second day of the conference started with a talk from Malaysia Digital Hub. The speaker, Diana, presented the state of Malaysian startups currently being shaped and assisted by Malaysia Digital Hub, as well as those which have already matured and can stand on their own. Later, a presenter from Google, Mr Dambo Ren, gave a talk on Google cloud projects.



He also pointed out several major services available on the cloud, for example TensorFlow. After that I chose to attend the Scilab software slot. Dr Khatim, an academic, shared his experience using Scilab, an open source package similar to Matlab, in his research and with his students. Later I attended a talk titled "Electronic Document Management System with Open Source Tools".


Here two speakers from Cyber Security Malaysia (an agency within Malaysia's Ministry of Science and Technology) presented their study of two open source document management systems, OpenDocMan and LogicalDoc. The evaluation criteria were ease of access, cost, centralized repository, disaster recovery, and security features. From their observations, LogicalDoc scored higher than OpenDocMan.

After that I attended a talk by Mr Kamarul on his experience using the R language and RStudio for medical research at his university. After the lunch break it was my turn to deliver a workshop. My talk targeted entry-level system administration: I shared my experience using tmux/screen, git, AIDE (to monitor file changes on our machines), and Ansible (to automate common tasks as much as possible within the system administration context). I demonstrated the use of Ansible against multiple Linux distros, CentOS and Debian/Ubuntu, to show how Ansible handles a heterogeneous set of distributions when a command is executed. Most of the material was presented live during the workshop, but I also created slides to help the audience and the public grasp the basic ideas of the tools I presented. You can read them here [PDF].
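
As a sketch of that kind of cross-distro automation (the host group, package, and variable names here are hypothetical), a minimal playbook using Ansible's generic `package` module might look like:

```yaml
---
# Hypothetical playbook: the generic "package" module dispatches to
# yum on CentOS and apt on Debian/Ubuntu, so one play covers both.
- hosts: webservers
  become: yes
  tasks:
    - name: Install the NTP daemon
      package:
        name: ntp
        state: present

    - name: Show which distribution we are running on
      debug:
        msg: "Running on {{ ansible_distribution }}"
```

In practice, distro-specific differences such as diverging package or service names are usually handled with conditionals on facts like `ansible_os_family`.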


The Third Day (Finale)

On the third day I attended the workshop slot delivered by a speaker going by the pseudonym Wak Arianto (not his original name). He explained Suricata, a tool whose pattern-matching syntax is very similar to the well-known Snort IDS. Mr Wak explained OS fingerprinting concepts, flowbits, and later how to create rules with Suricata. It was an interesting talk, as I could see how to quarantine suspicious files captured from the network (say, possible malware) into a sandbox for further analysis. As far as I understood from the demo and from my extra reading, flowbits is a rule keyword used to record and check state across a session; it is called flowbits because it operates on TCP flows, which is what Suricata primarily works with for this kind of detection. You can read an article about flowbits here. Based on the writings here, rules can also match on the state of the TCP connection (for example, whether it is established).
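
To make the flowbits idea concrete, here is a hedged two-rule sketch (the content strings, bit name, and sid numbers are invented for illustration): the first rule records a bit on the flow without alerting, and the second alerts only when that bit was set earlier in the same flow:

```
# Record that this flow requested the login page; suppress the alert itself.
alert tcp any any -> any 80 (msg:"login page requested"; content:"/login"; flowbits:set,saw_login; flowbits:noalert; sid:1000001; rev:1;)

# Alert on a POST only if the same flow previously hit /login.
alert tcp any any -> any 80 (msg:"POST after login page"; content:"POST"; flowbits:isset,saw_login; sid:1000002; rev:1;)
```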

I also had the chance to listen to the FreeBSD developers' slot. We were lucky to have Mr Martin Wilke, who lives in Malaysia and actively advocates FreeBSD to the local community. Together with Mr Muhammad Moinur Rahman, another FreeBSD developer, he presented the FreeBSD development ecosystem and the current state of the operating system.



Possibly the best was saved for last: I attended a Wi-Fi security workshop presented by Mr Matnet and Mr Jep (both pseudonyms). The workshop began with the theoretical foundations of wireless technology and the development of encryption around it.



The outline of the talk is available here. The speakers introduced the 802.11 frame types, which include Control Frames, Data Frames, and Management Frames. Management Frames are unencrypted, so attack tools concentrate on this part of the protocol.



Management Frames are susceptible to the following attacks:
  • Deauthentication Attacks
  • Beacon Injection Attacks
  • Karma/MANA Wifi Attacks
  • EvilTwin AP Attacks

Matnet and Jep also showed a social engineering tool called "Wifiphisher", which (according to the developer's page on GitHub) is a "security tool that mounts automated victim-customized phishing attacks against WiFi clients in order to obtain credentials or infect the victims with malwares". It works together with the Evil Twin AP attack: after achieving a man-in-the-middle position, Wifiphisher redirects all HTTP requests to an attacker-controlled phishing page. Matnet told us the safest way to work in a WiFi environment is to use an 802.11w-capable device, which is not yet widely available, at least in Malaysia. I found some information that could help in understanding the 802.11w protocol here.

Conclusion

For me this is the most anticipated annual event, where I can meet professionals from different backgrounds and keep my knowledge up to date with the latest open source developments in the industry. The organizers surely did a good job with this event, and I hope to attend again next year! Thank you for giving me the opportunity to speak at this conference (and for the nice swag too!).

Apart from MOSC I also plan to attend the annual Python Conference (PyCon), which is going to be special this year as it will be organized at the Asia Pacific (APAC) level. You can read more about PyCon APAC 2017 here, in case you would like to attend.


Comments

published by noreply@blogger.com (Ben Witten) on 2017-05-22 19:11:00 in the "360" category
End Point Liquid Galaxy will be coming to San Antonio to participate in the GEOINT 2017 Symposium. We are excited to demonstrate our geospatial capabilities on an immersive and panoramic 7-screen Liquid Galaxy system. We will be exhibiting at booth #1012 from June 4-7.

    On the Liquid Galaxy, complex data sets can be explored and analyzed in a 3D immersive fly-through environment. Presentations can highlight specific data layers combined with video, 3D models, and browsers for maximum communications efficiency. The end result is a rich, highly immersive, and engaging way to experience your data.

Liquid Galaxy's extensive capabilities include ArcGIS, Cesium, Google Maps, Google Earth, LIDAR point clouds, realtime data integration, 360 panoramic video, and more. The system always draws huge crowds at conferences; people line up to try out the system for themselves.

    End Point has deployed Liquid Galaxy systems around the world. This includes many high profile clients, such as Google, NOAA, CBRE, National Air & Space Museum, Hyundai, and Barclays. Our clients utilize our content management system to create immersive and interactive presentations that tell engaging stories to their users.

GEOINT is hosted and produced by the United States Geospatial Intelligence Foundation (USGIF). It is the nation's largest gathering of industry, academia, and government, including the Defense, Intelligence, and Homeland Security communities, as well as commercial, Fed/Civil, State, and Local geospatial intelligence stakeholders.

    We look forward to meeting you at booth #1012 at GEOINT. In the meantime, if you have any questions please visit our website or email ask@endpoint.com.


    Comments

    published by noreply@blogger.com (Kiel) on 2017-05-22 15:59:00 in the "bash" category

You want your script to run a command only if the elapsed time for a given process is greater than X?

    Well, bash does not inherently understand a time comparison like:

    if [ 01:23:45 -gt 00:05:00 ]; then
        foo
    fi
    

    However, bash can compare timestamps of files using -ot and -nt for "older than" and "newer than", respectively. If the launch of our process includes creation of a PID file, then we are in luck! At the beginning of our loop, we can create a file with a specific age and use that for quick and simple comparison.

    For example, if we only want to take action when the process we care about was launched longer than 24 hours ago, try:

    touch -t $(date --date=yesterday +%Y%m%d%H%M.%S) $STAMPFILE
    

    Then, within your script loop, compare the PID file with the $STAMPFILE, like this:

    if [ $PIDFILE -ot $STAMPFILE ]; then
        foo
    fi
    

    And of course if you want to be sure you're working with the PID file of a process which is actually responding, you can try to send it signal 0 to check:

    if kill -0 `cat $PIDFILE`; then
        foo
    fi
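
Putting the pieces together, a complete sketch might look like this (the paths and the `foo` command are placeholders, and GNU date is assumed for `--date=yesterday`):

```shell
#!/bin/bash
# Hypothetical paths for illustration.
PIDFILE=/var/run/myproc.pid
STAMPFILE=/tmp/myproc.stamp

# Create a stamp file dated 24 hours ago (GNU date syntax).
touch -t "$(date --date=yesterday +%Y%m%d%H%M.%S)" "$STAMPFILE"

# Act only if the PID file predates the stamp (process started more
# than 24 hours ago) AND the process still responds to signal 0.
if [ "$PIDFILE" -ot "$STAMPFILE" ] && kill -0 "$(cat "$PIDFILE" 2>/dev/null)" 2>/dev/null; then
    foo
fi
```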
    

    Comments

    published by noreply@blogger.com (Jon Jensen) on 2017-05-10 04:57:00 in the "ecommerce" category

We do a lot of ecommerce development at End Point. You know the usual flow as a customer: select products, add them to the shopping cart, then check out. Checkout asks questions about the buyer, payment, and delivery, at least. Some online sales are for "soft goods", downloadable items that don't require a delivery address. But much of online selling is still of physical goods to be delivered to an address. For that, a postal code or ZIP code is usually required.

    No postal code?

I say usually because there are some countries that do not use postal codes at all. An ecommerce site that expects to ship products to buyers in one of those countries needs to allow for an empty postal code at checkout time. Otherwise, customers may leave thinking they aren't welcome there. The more creative among them will make up something to put in there, such as "00000" or "99999" or "NONE".

Someone has helpfully assembled and maintains a machine-readable (in Ruby, easily convertible to JSON or other formats) list of the countries that don't require a postal code. You may be surprised to see on the list such countries as Hong Kong, Ireland, Panama, Saudi Arabia, and South Africa. Some countries on the list actually do have postal codes but do not require them or commonly use them.

Do you really need the customer's address?

When selling both downloadable and shipped products, it would be nice not to bother asking the customer for an address at all. Unfortunately, even when there is no shipping address because there's nothing to ship, the billing address is still needed if payment is made by credit card through a normal credit card payment gateway, as opposed to PayPal, Amazon Pay, Venmo, Bitcoin, or other alternative payment methods.

The credit card Address Verification System (AVS) allows merchants to ask a credit card issuing bank whether the mailing address provided matches the address on file for that credit card. Normally only two parts are checked: (1) the numeric part of the street address, for example "123" if "123 Main St." was provided; (2) the ZIP or postal code, normally only the first 5 digits for US ZIP codes; AVS often doesn't work at all with non-US banks and postal codes.
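
As an illustration of the two fields involved (the sample values here are invented), extracting them is trivial:

```shell
# Sketch: the two values AVS typically compares, pulled from
# hypothetical sample input.
addr="123 Main St."
zip="10001-1234"

street_num=$(printf '%s\n' "$addr" | grep -oE '^[0-9]+')  # numeric street part
zip5=$(printf '%.5s' "$zip")                              # first 5 digits of ZIP

echo "AVS fields: street number $street_num, ZIP $zip5"
```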

Before sending the address to AVS, validating the format of postal codes is simple for many countries: 5 digits in the US (allowing an optional -nnnn for ZIP+4), and 4 or 5 digits in most other countries; see the Wikipedia List of postal codes in various countries for a high-level view. Canada is slightly more complicated: 6 characters total, alternating letters and numbers, formally with a space in the middle, like K1A 0B1, as explained in Wikipedia's components of a Canadian postal code.

So most countries' postal codes can be validated in software with simple regular expressions, to catch typos such as transpositions and missing or extra characters.
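
For example, simple format checks for US and Canadian codes can be sketched with grep (the patterns here are illustrative, not authoritative):

```shell
# US ZIP: 5 digits with an optional -nnnn for ZIP+4.
us_re='^[0-9]{5}(-[0-9]{4})?$'
echo "12345-6789" | grep -Eq "$us_re" && echo "US: ok"

# Canada: letter-digit alternation with an optional space in the middle.
# (A stricter check would exclude D, F, I, O, Q, U from the letter
# positions; this simplified version skips that refinement.)
ca_re='^[A-Za-z][0-9][A-Za-z] ?[0-9][A-Za-z][0-9]$'
echo "K1A 0B1" | grep -Eq "$ca_re" && echo "Canada: ok"
```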

    UK postcodes

The most complicated postal codes I have worked with are the United Kingdom's, because they can be from 5 to 7 characters, with an unpredictable mix of letters and numbers, normally formatted with a space in the middle. The benefit they bring is that they encode a lot of detail about the address, and it's possible to catch transposed-character errors that would be missed in a purely numeric postal code. The Wikipedia article Postcodes in the United Kingdom has the gory details.

    It is common to use a regular expression to validate UK postcodes in software, and many of these regexes are to some degree wrong. Most let through many invalid postcodes, and some disallow valid codes.

    We recently had a client get a customer report of a valid UK postcode being rejected during checkout on their ecommerce site. The validation code was using a regex that is widely copied in software in the wild:

    [A-PR-UWYZ0-9][A-HK-Y0-9][AEHMNPRTVXY0-9]?[ABEHMNPRVWXY0-9]?[0-9][ABD-HJLN-UW-Z]{2}

(This example removes support for the odd exception GIR 0AA for simplicity's sake.)

The customer's valid postcode that doesn't pass that test was W1F 0DP, in London, which the Royal Mail website confirms is valid. The problem is that the regex above doesn't allow for F in the third position, as that was not valid at the time the regex was written.
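
The failure is easy to reproduce. Anchoring the regex above (and dropping the space for simplicity), a known-good postcode passes while W1F 0DP is rejected:

```shell
# The widely copied UK postcode regex, anchored; note no F in the
# third-position character class.
uk_re='^[A-PR-UWYZ0-9][A-HK-Y0-9][AEHMNPRTVXY0-9]?[ABEHMNPRVWXY0-9]?[0-9][ABD-HJLN-UW-Z]{2}$'

echo "SW1A1AA" | grep -Eq "$uk_re" && echo "SW1A 1AA: accepted"
echo "W1F0DP"  | grep -Eq "$uk_re" || echo "W1F 0DP: rejected (but valid!)"
```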

This is one problem with being too strict in validations of this sort: The rules change over time, usually to allow things that once were not allowed. Reusable, maintained software libraries that specialize in UK postal codes can keep up, but there is always lag time between when updates are released and when they're incorporated into production software. And copied or customized regexes will likely stay the way they are until someone runs into a problem.

    The ecommerce site in question is running on the Interchange ecommerce platform, which is based on Perl, so the most natural place to look for an updated validation routine is on CPAN, the Perl network of open source library code. There we find the nice module Geo::UK::Postcode which has a more current validation routine and a nice interface. It also has a function to format a UK postcode in the canonical way, capitalized (easy) and with the space in the correct place (less easy).

It also presents us with a new decision: should we use the basic "valid" test, or the "strict" one? This is where it gets a little trickier. The "valid" check uses a regex validation approach that will still let through some invalid postcodes, because it doesn't know what all the current valid delivery destinations are. The "strict" check uses a comprehensive list of all the "outcode" data, which, as you can see if you look at that source code, is extensive.

The bulkiness of that list, and its short shelf life (the likelihood that it will become outdated and reject a future valid postcode) make strict validation checks like this of questionable value for basic ecommerce needs. Often it is better to let a few invalid postcodes through now so that future valid ones will also be allowed.

    The ecommerce site I mentioned also does in-browser validation via JavaScript before ever submitting the order to the server. Loading a huge list of valid outcodes would waste a lot of bandwidth and slow down checkout loading, especially on mobile devices. So a more lax regex check there is a good choice.

    When Christmas comes

There's no Christmas gift of a single UK postal code validation solution for all needs, but there are some fun trivia notes in the Wikipedia page covering Non-geographic postal codes:

    A fictional address is used by UK Royal Mail for letters to Santa Claus:

Santa's Grotto
    Reindeerland XM4 5HQ

    Previously, the postcode SAN TA1 was used.

    In Finland the special postal code 99999 is for Korvatunturi, the place where Santa Claus (Joulupukki in Finnish) is said to live, although mail is delivered to the Santa Claus Village in Rovaniemi.

    In Canada the amount of mail sent to Santa Claus increased every Christmas, up to the point that Canada Post decided to start an official Santa Claus letter-response program in 1983. Approximately one million letters come in to Santa Claus each Christmas, including from outside of Canada, and they are answered in the same languages in which they are written. Canada Post introduced a special address for mail to Santa Claus, complete with its own postal code:

    SANTA CLAUS
    NORTH POLE H0H 0H0

    In Belgium bpost sends a small present to children who have written a letter to Sinterklaas. They can use the non-geographic postal code 0612, which refers to the date Sinterklaas is celebrated (6 December), although a fictional town, street and house number are also used. In Dutch, the address is:

    Sinterklaas
    Spanjestraat 1
    0612 Hemel

    This translates as ?1 Spain Street, 0612 Heaven?. In French, the street is called ?Paradise Street?:

    Saint-Nicolas
    Rue du Paradis 1
    0612 Ciel

That UK postcode for Santa doesn't validate in some of the regexes, but the simpler Finnish, Canadian, and Belgian ones do, so if you want to order something online for Santa, you may want to choose one of those countries for delivery. :)


    Comments

    published by noreply@blogger.com (Matt Galvin) on 2017-05-04 13:00:00 in the "training" category

    This blog post is for people like me who are interested in improving their knowledge about computers, software and technology in general but are inundated with an abundance of resources and no clear path to follow. Many of the courses online tend to not have any real structure. While it's great that this knowledge is available to anyone with access to the internet, it often feels overwhelming and confusing. I always enjoy a little more structure to study, much like in a traditional college setting. So, to that end I began to look at MIT's OpenCourseWare and compare it to their actual curriculum.

I'd like to begin by acknowledging that some time ago Scott Young completed the MIT Challenge where he "attempted to learn MIT's 4-year computer science curriculum without taking classes". My friend Najmi here at End Point also shared a great website with me to "Teach Yourself Computer Science". So, this is not the first post to try to make sense of all the free resources available to you, it's just one which tries to help organize a coherent plan of study.

    Methodology

    I wanted to mimic MIT's real CS curriculum. I also wanted to limit my studies to Computer Science only, while stripping out anything not strictly related. It's not that I am not interested in things like speech classes or more advanced mathematics and physics, but I wanted to be pragmatic about the amount of time I have each week to put in to study outside of my normal (very busy) work week. I imagine anyone reading this would understand and very likely agree.

    I examined MIT's course catalog. They have 4 undergraduate programs in the Department of Electrical Engineering and Computer Science:

  • 6-1 program: Leads to the Bachelor of Science in Electrical Science and Engineering.
  • 6-2 program: Leads to the Bachelor of Science in Electrical Engineering and Computer Science, and is for those whose interests cross this traditional boundary.
  • 6-3 program: Leads to the Bachelor of Science in Computer Science and Engineering.
  • 6-7 program: Is for students specializing in computer science and molecular biology.

Because I wanted to stick to what I believed would be most practical for my work at End Point, I selected the 6-3 program. With my intended program selected, I also decided that the full course load for a bachelor's degree was not really what I was interested in. Instead, I just wanted to focus on the computer science related courses (with maybe some math and physics, only if needed to understand any of the computer courses).

So, looking at the requirements, I began to determine which classes I'd need. Once I had this, I could then begin to search the MIT OpenCourseWare site to ensure the classes are offered, or find suitable alternatives on Coursera or Udemy. As is typical, there are General Requirements and Departmental Requirements. So, beginning with the General Institute Requirements, let's start designing a computer science program with all the fat (non-computer science) cut out.


    General Requirements:



    I removed that which was not computer science related. As I mentioned, I was aware I may need to add some math/science. So, for the time being this left me with:


    Notice that it says

    one subject can be satisfied by 6.004 and 6.042[J] (if taken under joint number 18.062[J]) in the Department Program

    It was unclear to me what "if taken under joint number 18.062[J]" meant (nor could I find clarification) but as will be shown later, 6.004 and 6.042[J] are in the departmental requirements, so let's commit to taking those two which would leave the requirement of one more REST course. After some Googling I found the list of REST courses here. So, if you're reading this to design your own program, please remember that later we will commit to 6.004 and 6.042[J] and go here to select a course.

    So, now on to the General Institute Requirements Laboratory Requirement. We only need to choose one of three:

  • 6.01: Introduction to EECS via Robot Sensing, Software and Control
  • 6.02: Introduction to EECS via Communications Networks
  • 6.03: Introduction to EECS via Medical Technology


    So, to summarize the general requirements we will take 4 courses:

    Major (Computer Science) Requirements:


    In keeping with the idea that we want to remove non-essential, and non-CS courses, let's remove the speech class. So here we have a nice summary of what we discovered above in the General Requirements, along with details of the computer science major requirements:


    As stated, let's look at the list of Advanced Undergraduate Subjects and Independent Inquiry Subjects so that we may select one from each of them:



    Lastly, it's stated that we must

    Select one subject from the departmental list of EECS subjects

A link is provided to do so; however, it brings you here, and I cannot find a list of courses there. I believe this link no longer takes you to the intended location. A Google search brought up a similar page, but with a list of courses, as can be seen here. So, I will pick one from that page.

The next step was to find the associated courses on MIT OpenCourseWare.

    Sample List of Classes

So, now you will be able to follow the links I provided above to select your classes. I was not always able to find courses that matched by exact name and/or course number. Sometimes I had to read the description and look through several courses which seemed similar. I will provide my own list in case you'd just like to use mine:

    Conclusion

    So there you have it, please feel free to comment with any of your favorite resources.


    Comments

    published by noreply@blogger.com (Dave Jenkins) on 2017-04-21 17:21:00 in the "browsers" category


    As many of you may have seen, earlier this week Google released a major upgrade to the Google Earth app. Overall, it's much improved, sharper, and a deeper experience for viewers. We will be upgrading/incorporating our managed fleet of Liquid Galaxies over the next two months after we've had a chance to fully test its functionality and polish the integration points, but here are some observations for how we see this updated app impacting the overall Liquid Galaxy experience.

    • Hooray! The new Earth is here! The New Earth is here! Certainly, this is exciting for us. The Google Earth app plays a key central role in the Liquid Galaxy viewing experience, so a major upgrade like this is a most welcome development. So far, the reception has been positive. We anticipate it will continue to get even better as people really sink their hands into the capabilities and mashup opportunities this browser-based Earth presents.

• We tested some pre-release versions of this application and successfully integrated them with the Liquid Galaxy, and we are very happy with how we are able to view-synchronize unique instances of the new Google Earth across displays with appropriately configured geometric offsets.

    • What to look for in this new application:
  • Stability: The new Google Earth runs as a Native Client (NaCl) application in the Chrome browser. This is an enormous advance for Google Earth. As an application in Chrome, it is instantly accessible to billions of users with their established expectations. Because the new Google Earth runs in Chrome, its developers no longer need to engage in the minutiae of supporting multiple desktop operating systems, and can instead concentrate on the core functionality of Google Earth and leverage the enormous amount of work the Chrome developers do to make Chrome a cross-platform application.
      • Smoother 3D: The (older) Google Earth sometimes has a sort of "melted ice cream" look to the 3D buildings in many situations. Often, buildings fail to fully load from certain viewpoints. From what we're seeing so far, the 3D renderings in the New Earth appear to be a lot sharper and cleaner.
      • Browser-based possibilities: As focus turns more and more to browser-based apps, and as JavaScript libraries continue to mature, the opportunities and possibilities for how to display various data types, data visualizations, and interactions really start to multiply. We can already see this with the sort of deeper stories and knowledge cards that Google is including in the Google Earth interface. We hope to take the ball and run with it, as the Liquid Galaxy can already handle a host of different media types. We might exploit layers, smart use controls, realtime content integration from other available databases, and... okay, I'm getting ahead of myself.

• The New Google Earth makes a major point of featuring stories and deeper contextual information, rather than just ogling the terrain: as pretty as the Grand Canyon is to look at, knowing a little about the explorers, trails, and history makes it a much nicer experience to view. We've gone through the same evolution with the Liquid Galaxy: it used to be just a big Google Earth viewer, but we quickly realized the need for more context and usable information for a richer interaction with viewers, by combining Earth with Street View, panoramic video, 3D objects, etc. It's why we built a content management system to create presentations with scenes. We anticipate that the knowledge cards and deeper information that Google is integrating here will only strengthen that interaction.

We are looking to roll out the new Google Earth to the fleet in the next couple of months. We need to do a lot of testing and then update the Liquid Galaxies with minimal (or no) disturbance to our clients, many of whom rely on the platform as a daily sales and showcasing tool for their businesses. As always, if you have any questions, please reach us directly via email or phone.
    Comments

    published by noreply@blogger.com (Jon Jensen) on 2017-04-20 23:50:00 in the "company" category

    We are looking for another talented software developer to consult with our clients and develop web applications for them in Ruby on Rails, Django, AngularJS, Java, .NET, Node.js, and other technologies. If you like to solve business problems and can take responsibility for getting a job done well without intensive oversight, please read on!

    End Point is a 20-year-old web consulting company based in New York City, with 45 full-time employees working mostly remotely from home offices. We are experts in web development, databases, and DevOps, collaborating using SSH, Screen/tmux, chat, Hangouts, Skype, and good old phones.

    We serve over 200 clients ranging from small family businesses to large corporations. We use open source frameworks in a variety of languages including JavaScript, Ruby, Java, Scala, Kotlin, C#, Python, Perl, and PHP, tracked by Git, running mostly on Linux and sometimes on Windows.

    What is in it for you?

    • Flexible full-time work hours
    • Paid holidays and vacation
    • For U.S. employees: health insurance subsidy and 401(k) retirement savings plan
    • Annual bonus opportunity
    • Ability to move without being tied to your job location

    What you will be doing:

    • Work from your home office, or from our offices in New York City and the Tennessee Tri-Cities area
    • Consult with clients to determine their web application needs
    • Build, test, release, and maintain web applications for our clients
    • Work with open source tools and contribute back as opportunity arises
    • Use your desktop platform of choice: Linux, macOS, Windows
    • Learn and put to use new technologies
    • Direct much of your own work

    What you will need:

    • Professional experience building reliable server-side apps
    • Good front-end web skills with responsive design using HTML, CSS, and JavaScript, including jQuery, Angular, Backbone.js, Ember.js, etc.
    • Experience with databases such as PostgreSQL, MySQL, SQL Server, MongoDB, CouchDB, Redis, Elasticsearch, etc.
    • A focus on needs of our clients and their users
    • Strong verbal and written communication skills

    We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of gender, race, religion, color, national origin, sexual orientation, age, marital status, veteran status, or disability status.

    Please email us an introduction to jobs@endpoint.com to apply. Include a resume, your GitHub or LinkedIn URLs, or whatever else that would help us get to know you. We look forward to hearing from you! Full-time employment seekers only, please -- this role is not for agencies or subcontractors.


    Comments

    published by Eugenia on 2017-04-20 00:37:37 in the "Metaphysics" category
    Eugenia Loli-Queru

During my lucid dream today, my spirit guide Esther semi-jokingly said that the only way to avoid an abduction is to induce a coma for at least a year, until they lose interest. Hardly a great way to get out of that, ain’t it? But she says there’s no other way; you just need to make your body and spirit unavailable to them.

That reply really surprised me, because it was definitely not my own subconscious generating it. I never thought about getting into a… coma as a solution to the reported phenomena. In fact, it surprised me so much during my dream that I kept repeating it until I woke up (“remember it, remember it…”). I often have long discussions with my spirit guide about stuff, but I rarely remember them; this time, I needed to remember it.

I also asked her why they abduct people, and she told me “you know why” (and I do; that’s a post for another time).

    I then asked her if the Greys are good or bad, and she said that they’re not necessarily great, they are just doing what they were “hired” to do. Then I asked who ordered the project, and she says: “I can’t tell you that” (she wouldn’t reveal it).

    So, yeah, that happened today.



    published by Eugenia on 2017-04-17 00:53:44 in the "General" category
    Eugenia Loli-Queru

    After many years of research on the subject, I have found these to be the six most important points for one’s health. They are in no particular order, though sunlight is probably the most important of them all.

    – Exposure to Sunlight

    Two hours of early morning sunlight, at a minimum. Without sunlight, our mitochondria don’t work.

    – Exposure to Clean Air

    Extra oxygenation via walking, breathing exercises, yoga, tai chi, and meditation. Vigorous exercise is not needed, and if you’re already sick it must not be pursued. Sitting too much, or not knowing how to breathe deeply, creates lactic acidosis in the body, which is the beginning of the end for health. This is what the Chinese also call “liver Qi stagnation”.

    – Exposure to Clean Water

    Spring water, non-fluoridated, alkaline if possible. And LOTS of it! The water, along with some salt and DHA, acts as the electricity in your body, carrying out the needed functions of what some people call “detoxification” (although that’s not the right word for what’s going on).

    – Exposure to the Right Diet

    Plant-based Paleo, also known as Pegan (some offal, some wild fish and eggs, but mostly plants and fruits). By removing grains and sugars from the diet, we ensure that the liver has enough B vitamins to do its job: releasing or converting the lactic acid. Otherwise, you end up with non-alcoholic fatty liver, and everything starts breaking down in the body. More explanation of the Pegan diet here.

    – Exposure to the Right Sleep

    No sleep, no bueno. The circadian rhythm is our clock, and without that clock, things fall apart. Sleep when the sun goes down, or at the very least use blue-blocker glasses at night.

    – Exposure to the Right Frequencies

    This might be seen as quackery, but it’s not. Non-native EMF signals are detrimental to our health. Avoid WiFi, cellphones, and anything of the like as much as you can. Walk barefoot on the bare Earth to get the right frequency to heal your body.



    published by noreply@blogger.com (Greg Sabino Mullane) on 2017-04-13 21:11:00 in the "cryptography" category

    SSH (Secure Shell) is one of the programs I use every single day at work, primarily to connect to our clients' servers. Usually it is a rock-solid program that simply works as expected, but recently I discovered it behaving quite strangely - a server I had visited many times before was now refusing my attempts to login. The underlying problem turned out to be a misguided decision by the developers of OpenSSH to deprecate DSA keys. How I discovered this problem is described below (as well as two solutions).

    The use of the ssh program is not simply limited to logging in and connecting to remote servers. It also supports many powerful features, one of the most important being the ability to chain multiple connections with the ProxyCommand option. By using this, you can "login" to servers that you cannot reach directly, by linking together two or more servers behind the scenes.

    As an example, let's consider a client named "Acme Anvils" that strictly controls access to its production servers. They make all SSH traffic come in through a single server, named dmz.acme-anvils.com, and only on port 2222. They also only allow certain public IPs to connect to this server, via whitelisting. On our side, End Point has a server, named portal.endpoint.com, that I can use as a jumping-off point, which has a fixed IP that we can give to our clients to whitelist. Rather than logging in to "portal", getting a prompt, and then logging in to "dmz", I can simply add an entry in my ~/.ssh/config file to automatically create a tunnel between the servers - at which point I can reach the client's server by typing "ssh acmedmz":

    ##
    ## Client: ACME ANVILS
    ##
    
    ## Acme Anvil's DMZ server (dmz.acme-anvils.com)
    Host acmedmz
    User endpoint
    HostName 555.123.45.67
    Port 2222
    ProxyCommand ssh -q greg@portal.endpoint.com nc -w 180s %h %p
    

    Notice that the "Host" name may be set to anything you want. The connection to the client's server uses a non-standard port, and the username changes from "greg" to "endpoint", but all of that is hidden away from me as now the login is simply:

    [greg@localhost]$ ssh acmedmz
    [endpoint@dmz]$
    

    It's unusual that I'll actually need to do any work on the dmz server, of course, so the tunnel gets extended another hop to the db1.acme-anvils.com server:

    ##
    ## Client: ACME ANVILS
    ##
    
    ## Acme Anvil's DMZ server (dmz.acme-anvils.com)
    Host acmedmz
    User endpoint
    HostName 555.123.45.67
    Port 2222
    ProxyCommand ssh -q greg@portal.endpoint.com nc -w 180s %h %p
    
    ## Acme Anvil's main database (db1.acme-anvils.com)
    Host acmedb1
    User postgres
    HostName db1
    ProxyCommand ssh -q acmedmz nc -w 180s %h %p
    
    

    Notice how the second ProxyCommand references the "Host" of the section above it. Neat stuff. When I type "ssh acmedb1", I'm actually connecting to the portal.endpoint.com server, then immediately running the netcat (nc) command in the background, then going through netcat to dmz.acme-anvils.com and running a second netcat command on *that* server, and finally going through both netcats to login to the db1.acme-anvils.com server. It sounds a little complicated, but quickly becomes part of your standard tool set once you wrap your head around it. After you update your .ssh/config file, you soon forget about all the tunneling and feel as though you are connecting directly to all your servers. That is, until something breaks, as it did recently for me.
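A handy sanity check for chained stanzas like these: newer OpenSSH releases (6.8 and up) can print the options they would use for a given host alias without connecting at all, via the -G flag. A small sketch, using a throwaway copy of the first stanza above (the host names and IP are the made-up ones from this example):

```shell
#!/bin/sh
# Write the example stanza to a scratch config so the real ~/.ssh/config
# is left alone.
cat > /tmp/acme_ssh_config <<'EOF'
Host acmedmz
    User endpoint
    HostName 555.123.45.67
    Port 2222
    ProxyCommand ssh -q greg@portal.endpoint.com nc -w 180s %h %p
EOF

# -F selects the scratch config; -G resolves the alias and prints the
# resulting options (with %h and %p expanded), then exits without connecting.
ssh -G -F /tmp/acme_ssh_config acmedmz | grep -iE '^(user|hostname|port|proxycommand) '
```

This makes it easy to confirm which ProxyCommand will fire before you actually attempt the login.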

    The actual client this happened with was not "Acme Anvils", of course, and it was a connection that went through four servers and three ProxyCommands, but for demonstration purposes let's pretend it happened on a simple connection to the dmz.acme-anvils.com server. I had not connected to the server in question for a long time, but I needed to make some adjustments to a tail_n_mail configuration file. The first login attempt failed completely:

    [greg@localhost]$ ssh acmedmz
    endpoint@dmz.acme-anvils.com's password: 
    

    Although the connection to portal.endpoint.com worked fine, the connection to the client server failed. This is not an unusual problem: it usually signifies that either ssh-agent is not running, or that I forgot to feed it the correct key via the ssh-add program. However, I quickly discovered that ssh-agent was working and contained all my usual keys. Moreover, I was able to connect to other sites with no problem! On a hunch, I tried breaking down the connections into manual steps. First, I tried logging in to the "portal" server. It logged me in with no problem. Then I tried to login from there to dmz.acme-anvils.com - which also logged me in with no problem! But trying to get there via ProxyCommand still failed. What was going on?

    When in doubt, crank up the debugging. For the ssh program, using the -v option turns on some minimal debugging. Running the original command from my computer with this option enabled quickly revealed the problem:

    [greg@localhost]$ ssh -v acmedmz
    OpenSSH_7.4p1, OpenSSL 1.0.2k-fips  26 Jan 2017
    debug1: Reading configuration data /home/greg/.ssh/config
    debug1: /home/greg/.ssh/config line 1227: Applying options for acmedmz
    debug1: Reading configuration data /etc/ssh/ssh_config
    ...
    debug1: Executing proxy command: exec ssh -q greg@portal.endpoint.com nc -w 180s 555.123.45.67 2222
    ...
    debug1: Authenticating to dmz.acme-anvils.com:2222 as 'endpoint'
    ...
    debug1: Host 'dmz.acme-anvils.com' is known and matches the ECDSA host key.
    ...
    debug1: Skipping ssh-dss key /home/greg/.ssh/greg2048dsa.key - not in PubkeyAcceptedKeyTypes
    debug1: SSH2_MSG_SERVICE_ACCEPT received
    debug1: Authentications that can continue: publickey,password
    debug1: Next authentication method: publickey
    debug1: Offering RSA public key: /home/greg/.ssh/greg4096rsa.key
    debug1: Next authentication method: password
    endpoint@dmz.acme-anvils.com's password: 
    

    As highlighted above, the problem is that my DSA key (the "ssh-dss key") was rejected by my ssh program. As we will see below, DSA keys are rejected by default in recent versions of the OpenSSH program. But why was I still able to login when not hopping through the middle server? The answer lies in the fact that when I use the ProxyCommand, *my* ssh program is negotiating with the final server, and is refusing to use my DSA key. However, when I ssh to the portal.endpoint.com server, and then on to the next one, the second server has no problem using my (forwarded) DSA key! Using the -v option on the connection from portal.endpoint.com to dmz.acme-anvils.com reveals another clue:

    [greg@portal]$ ssh -v -p 2222 endpoint@dmz.acme-anvils.com
    ...
    debug1: Connecting to dmz [1234:5678:90ab:cd::e] port 2222.
    ...
    debug1: Next authentication method: publickey
    debug1: Offering RSA public key: /home/greg/.ssh/endpoint2.ssh
    debug1: Authentications that can continue: publickey,password
    debug1: Offering DSA public key: /home/greg/.ssh/endpoint.ssh
    debug1: Server accepts key: pkalg ssh-dss blen 819
    debug1: Authentication succeeded (publickey).
    Authenticated to dmz ([1234:5678:90ab:cd::e]:2222).
    ...
    debug1: Entering interactive session.
    [endpoint@dmz]$
    

    If you look closely at the above, you will see that we first offered an RSA key, which was rejected, and then we successfully offered a DSA key. This means that the endpoint@dmz account has a DSA, but not an RSA, public key inside of its ~/.ssh/authorized_keys file. Since I was able to connect to portal.endpoint.com, its ~/.ssh/authorized_keys file must have my RSA key.

    For the failing connection, ssh was able to use my RSA key to connect to portal.endpoint.com, run the netcat command, and then continue on to the dmz.acme-anvils.com server. However, this connection failed as the only key my local ssh program would provide was the RSA one, which the dmz server did not have.

    For the working connection, ssh was able to connect to portal.endpoint.com as before, and then into an interactive prompt. However, when I then connected via ssh to dmz.acme-anvils.com, it was the ssh program on portal, not my local computer, which negotiated with the dmz server. It had no problem using a DSA key, so I was able to login. Note that both keys were happily forwarded to portal.endpoint.com, even though my ssh program refused to use them!

    The quick solution to the problem, of course, was to upload my RSA key to the dmz.acme-anvils.com server. Once this was done, my local ssh program was more than happy to login by sending the RSA key along the tunnel.

    Another solution to this problem is to instruct your SSH programs to recognize DSA keys again. To do this, add this line to your local SSH config file ($HOME/.ssh/config), or to the global SSH config file (/etc/ssh/ssh_config):

    PubkeyAcceptedKeyTypes +ssh-dss
    

    As mentioned earlier, this whole mess was caused by the OpenSSH program deciding to deprecate DSA keys. Their rationale for targeting all DSA keys seems a little weak at best: certainly I don't feel that my 2048-bit DSA key is in any way a weak link. But the writing is on the wall now for DSA, so you may as well replace your DSA keys with RSA ones (and an ed25519 key as well, in anticipation of when ssh-agent is able to support them!). More information about the decision to force out DSA keys can be found in this great analysis of the OpenSSH source code.
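If you do retire a DSA key, generating the replacements only takes a moment. A minimal sketch - the /tmp file names are placeholders, and a real key should live under ~/.ssh with a proper passphrase rather than -N '':

```shell
#!/bin/sh
# Remove any leftovers so ssh-keygen does not prompt about overwriting.
rm -f /tmp/demo_id_rsa /tmp/demo_id_rsa.pub /tmp/demo_id_ed25519 /tmp/demo_id_ed25519.pub

# Generate a 4096-bit RSA key and an ed25519 key; -q silences output and
# -N '' sets an empty passphrase (demonstration only).
ssh-keygen -q -t rsa -b 4096 -N '' -f /tmp/demo_id_rsa
ssh-keygen -q -t ed25519 -N '' -f /tmp/demo_id_ed25519

# Show the size, fingerprint, and type of each new key.
ssh-keygen -l -f /tmp/demo_id_rsa.pub
ssh-keygen -l -f /tmp/demo_id_ed25519.pub
```

From there, ssh-copy-id (or manually appending the .pub file to ~/.ssh/authorized_keys on the remote side) puts the new key in place, exactly as was done for the dmz server above.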



    published by noreply@blogger.com (Emanuele 'Lele' Calo') on 2017-04-10 17:40:00
    Not long ago, one of our customers had their website compromised because of a badly maintained, out-of-date WordPress installation. At End Point we love WordPress, but it really needs to be configured and hardened the right way, otherwise it's easy to end up in a real nightmare.

    This situation is worsened even more if there's no additional security enforcement system protecting the environment on which the compromised site lives. One of the basic ways to protect your Linux server, especially RHEL/CentOS based ones, is using SELinux.

    Sadly, most of the interaction people have with SELinux happens while disabling it, first on the running system:

    setenforce Permissive
    # or
    setenforce 0
    

    and then permanently by manually editing the file /etc/sysconfig/selinux to change the variable SELINUX=enforcing to SELINUX=disabled.

    Is that actually a good idea though? While SELinux can be a bit of a headache to tune appropriately and can easily be misconfigured, here's something that could really convince you to think twice before disabling SELinux once and for all.

    Back to our customer's compromised site. While going through the customer's system for some post-crisis cleaning, I found this hilarious piece of bash_history:

    ls
    cp /tmp/wacky.php .
    ls -lFa
    vim wacky.php
    set
    ls -lFa
    php wacky.php 2>&1 | less
    vim wacky.php
    php wacky.php 2>&1 | less
    vim wacky.php
    php wacky.php 2>&1 | less
    vim wacky.php
    php wacky.php 2>&1 | less
    vim wacky.php
    php wacky.php 2>&1 | less
    fg
    ls -lFa
    vim wacky.php
    php wacky.php 2>&1 | less
    vim wacky.php
    php wacky.php 2>&1 | less
    vim wacky.php
    php wacky.php 2>&1 | less
    vim wacky.php
    php wacky.php 2>&1 | less
    vim wacky.php
    php wacky.php 2>&1 | less
    vim wacky.php
    php wacky.php 2>&1 | less
    vim wacky.php
    php wacky.php 2>&1 | less
    php wacky.php > THE-EVIL 2>&1
    vim THE-EVIL
    ls -lFA
    less wacky.php
    ls
    less THE-EVIL
    less wacky.php
    cat /selinux/enforce
    ls
    less THE-EVIL
    exit
    

    As you can see, what happened is that the attacker managed to get a shell connection as the customer's user, and started using a PHP file injected into /tmp as a possible further vector of attack.

    Sadly for the attacker, at least, SELinux was set up in enforcing mode with some strict rules and prevented any kind of execution of that specific script, so after a few frantic attempts the attacker surrendered.

    Looking into the /var/log/audit/audit.log file, I found all the type=AVC denied errors that SELinux was shouting while forbidding the attacker from pursuing his nefarious plan.
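To give a flavor of what those records look like, here is a fabricated AVC denial in the shape auditd writes them, plus one way to boil the log down to just the denials. On a live system you would point this at /var/log/audit/audit.log, or use `ausearch -m avc` if the audit tools are installed:

```shell
#!/bin/sh
# A made-up AVC denial in the format auditd writes; real entries carry the
# actual SELinux source/target contexts and the denied permission.
cat > /tmp/audit_sample.log <<'EOF'
type=AVC msg=audit(1491842000.123:456): avc:  denied  { execute } for  pid=12345 comm="php" name="wacky.php" scontext=unconfined_u:unconfined_r:unconfined_t:s0 tcontext=system_u:object_r:user_tmp_t:s0 tclass=file
type=SYSCALL msg=audit(1491842000.123:456): arch=c000003e syscall=59 success=no
EOF

# Keep only the AVC records and summarize what was denied on which file.
grep 'type=AVC' /tmp/audit_sample.log |
    sed -n 's/.*denied  { \([a-z_]*\) }.*name="\([^"]*\)".*/denied: \1 on \2/p'
```

Each denial names the denied permission, the process, and the SELinux source and target contexts, which is exactly what you need to write a targeted policy module instead of switching SELinux off.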

    Hilarious, and good props to SELinux for saving the day.

    less THE-EVIL, more SELinux!

    published by noreply@blogger.com (Marco Matarazzo) on 2017-04-07 18:06:00 in the "CentOS" category

    During a recent CentOS 7 update, among other packages, we updated our Percona 5.7 installation to version 5.7.17-13.

    Soon after that, we discovered that mysqldump had stopped working, breaking our local MySQL backup script (which complained loudly).

    What happened?


    The error we received was:

    mysqldump: Couldn't execute 'SELECT COUNT(*) FROM INFORMATION_SCHEMA.SESSION_VARIABLES WHERE VARIABLE_NAME LIKE 'rocksdb_skip_fill_cache'': The 'INFORMATION_SCHEMA.SESSION_VARIABLES' feature is disabled; see the documentation for 'show_compatibility_56' (3167)

    After a bit of investigation, we discovered this was caused by this regression bug, apparently already fixed but not yet available on CentOS:

    Everything revolves around the INFORMATION_SCHEMA variable tables being deprecated in version 5.7.6, when Performance Schema tables were added as a replacement.

    Basically, a regression caused mysqldump to try to use the deprecated INFORMATION_SCHEMA tables instead of the new Performance Schema ones.

    How to fix it?


    The immediate workaround is to add this line to /etc/my.cnf or (more likely) /etc/percona-server.conf.d/mysqld.cnf, depending on how your configuration files are organized:

    show_compatibility_56=1

    This flag was both introduced and deprecated in 5.7.6. It will be there for some time to help with the transition.
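Since show_compatibility_56 is a dynamic variable, the same workaround can also be applied at runtime from a client session with sufficient privileges, without restarting mysqld. Note that a runtime change does not survive a restart, so the my.cnf line is still needed:

```sql
-- Takes effect immediately; lost on restart unless also set in my.cnf.
SET GLOBAL show_compatibility_56 = ON;
```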

    It seems safe, and probably good to keep, if you still have anything actively using the old INFORMATION_SCHEMA tables, which would otherwise break if not updated to the new Performance Schema introduced in 5.7.6.

    With this flag, it is possible to preserve the old behavior and keep your old code working while you upgrade it. Also, according to the documentation, it should not impact or turn off the new Performance Schema behavior.

    More information on how to migrate to the new Performance Schema can be found here.

