
published by Eugenia on 2017-01-24 03:36:52 in the "Software" category
Eugenia Loli-Queru

Some would be quick to say that the time to innovate in the social media industry has passed. It's true that the door to operating system innovation closed in 1995 when Microsoft released Windows 95, but I really believe that there is just one more step forward before we can say the same about social media.

Here's how I imagine a post-Instagram app. These features would need to be implemented from the get-go, btw.

– Every text post also contains a picture (up to a 2:3 aspect ratio) or a video up to 1 minute long, as on Instagram.
– Every post is repostable, like on Twitter or Tumblr — unless it's marked as private. Very important for artists, so friends of friends can then follow the original poster.
– All posts can be viewed under the Recent tab, or categorized, like on Pinterest.
– 1-1 chat, live video, and group chat abilities like on FB/Hangouts/Snapchat.
– Follow people, like posts, follow tags, view automatically curated tags.
– Some of the tags must be completed automatically via AI.
– To bring more people in, make it also a game: let people mark places based on GPS with provided graphics. The more places are marked, the more points a user gets, and the more their posts are exposed in curated lists, which gets them more followers.
– Individual posts can be marked private, shared with specific people or lists, and can be set as “artistic nudity” or not. Don’t flag a whole account as mature or not, but specific posts only. These posts should still show up, just blurred until clicked. Fully mature posts would still need to be removed.
– Tools to manage traffic, which guarantee more celebrity support (very important to get users).
– Each post can be a sellable product. 10% commission if payment is done via the app’s system, or a $3 flat fee if the shopping page is an external page. Like on fancy.com.
– To get users immediately, various OpenID providers can be used in addition to email credentials. Login requires a cellphone number for extra security.
– Primarily a fast, sexy app, but also a web front end.

In truth, these aren't so difficult that Instagram itself couldn't implement them, but the fact that they don't already support re-posting is troubling.


Comments

published by noreply@blogger.com (Matt Galvin) on 2017-01-22 17:35:00

I'd like to begin by acknowledging that some time ago Scott Young completed the MIT Challenge where he "attempted to learn MIT's 4-year computer science curriculum without taking classes".

I examined MIT's course catalog. They have 4 undergraduate programs in the Department of Electrical Engineering and Computer Science:

  • 6-1 program: Leads to the Bachelor of Science in Electrical Science and Engineering.
  • 6-2 program: Leads to the Bachelor of Science in Electrical Engineering and Computer Science and is for those whose interests cross this traditional boundary.
  • 6-3 program: Leads to the Bachelor of Science in Computer Science and Engineering.
  • 6-7 program: Is for students specializing in computer science and molecular biology.
Because I wanted to stick to what I believed would be most practical for my work at End Point, I selected the 6-3 program. With my intended program selected, I also decided that the full course load for a bachelor's degree was not really what I was interested in. Instead, I just wanted to focus on the computer science related courses (with maybe some math and physics, but only if needed to understand the computer courses).

So, looking at the requirements, I began to determine which classes I'd need. Once I had this, I could then begin to search the MIT OpenCourseWare site to ensure the classes are offered, or find suitable alternatives on Coursera or Udemy. As is typical, there are General Requirements and Departmental Requirements. So, beginning with the General Institute Requirements, let's start designing a computer science program with all the fat (non-computer science) cut out.


General Requirements:



I removed that which was not computer science related. As I mentioned, I was aware I may need to add some math/science. So, for the time being this left me with:


Notice that it says

one subject can be satisfied by 6.004 and 6.042[J] (if taken under joint number 18.062[J]) in the Department Program

It was unclear to me what "if taken under joint number 18.062[J]" meant (nor could I find clarification), but as will be shown later, 6.004 and 6.042[J] are in the departmental requirements, so let's commit to taking those two, which leaves the requirement of one more REST course. After some Googling I found the list of REST courses here. So, if you're reading this to design your own program, please remember that later we will commit to 6.004 and 6.042[J], and go here to select a course.

So, now on to the General Institute Requirements Laboratory Requirement. We only need to choose one of three:

  • 6.01: Introduction to EECS via Robot Sensing, Software and Control
  • 6.02: Introduction to EECS via Communications Networks
  • 6.03: Introduction to EECS via Medical Technology


So, to summarize the general requirements we will take 4 courses:

Major (Computer Science) Requirements:


In keeping with the idea that we want to remove non-essential and non-CS courses, let's remove the speech class. So here we have a nice summary of what we discovered above in the General Requirements, along with details of the computer science major requirements:


As stated, let's look at the list of Advanced Undergraduate Subjects and Independent Inquiry Subjects so that we may select one from each of them:



Lastly, it's stated that we must

Select one subject from the departmental list of EECS subjects

A link is provided to do so; however, it brings you here, and I cannot find a list of courses. I believe that this link no longer takes you to the intended location. A Google search brought up a similar page, but with a list of courses, as can be seen here. So, I will pick one from that page.

Sample List of Classes

So, now you will be able to follow the links I provided above to select your classes. I will provide my own list in case you'd just like to use mine:

The next step was to find the associated courses on MIT OpenCourseWare.
Comments

published by noreply@blogger.com (Ben Witten) on 2017-01-20 16:21:00 in the "Business" category


Recently Chase unveiled a digital campaign for Chase for Business, asking small businesses to submit videos of themselves ringing their own morning bells every day when they open for business. Chase would select one video every day to post on their website and to play on their big screen in Times Square.

A few months back, Chase chose to feature End Point for their competition! They sent a full production team to our office to film us and how we ring the morning bell.

In preparation for Chase's visit, we built a Liquid Galaxy presentation for them on our content management system. The presentation consisted of two scenes. In scene 1, we had "Welcome to Liquid Galaxy" written out across the outside four screens. We displayed the End Point Liquid Galaxy logo on the center screen, and set the system to orbit around the globe. In scene 2, the Liquid Galaxy flies to Chase's headquarters in New York City, and orbits around their office. Two bells ring, each shown across two screens. The bell videos used were courtesy of Rayden Mizzi and St Gabriel's Church. Our logo continues to display on the center screen, and the Chase for Business website is shown on a screen as well.

The video that Chase created (shown above) features our CEO Rick giving an introduction of our company and then clicking on the Liquid Galaxy's touchscreen to launch into the presentation.

We had a great time working with Chase, and were thrilled that they chose to showcase our company as part of their work to promote small businesses! To learn more about the Liquid Galaxy, you can visit our Liquid Galaxy website or contact us here.

Comments

published by Eugenia on 2017-01-18 17:46:27 in the "General" category
Eugenia Loli-Queru

This is my own theory, and it only works IF we accept that the Great Pyramid of Khufu was not built by humans, but by aliens. Yes, that’s a big stretch, because the pyramid was (most likely) built by humans, but in the case that all these crazy conspiracy theorists are correct, then I could think of a different theory of why it was built.

I base my theory on the simplest answer of what the pyramid is. The simplest answer is usually the correct one. So, if you ask a child “what’s the Great Pyramid”, their answer would be: “it’s a big, big building”.

The only reason an alien race would build a humongous building on Earth at a time when only huts existed, in my opinion, is so that it can be seen from space. There are not many spacious rooms inside the pyramid (so it wasn't grain storage), it wasn't a temple, and we already know that they weren't tombs. What it is, though, is just that: a huge building, visible from space, with basic equipment.

Picture this: humans transition from hunter-gatherers to organized societies around that same time. When you have a young race evolving to become something more than animals, that could raise some alien eyebrows. And so they erect… a sign to all other alien races: “KEEP OFF”. Passersby are much more likely to enter a wild field and claim it as their own, or just mess with it, than one that has a sign telling them to keep off.

Building a large building visible from space is a much smarter way to accomplish this than simply putting a satellite in orbit. A satellite would need servicing, could go bad at any time, and would transmit at a frequency or in a digital format that another alien race might not understand. These problems don't exist if you just erect a big-a$$ building, though, one that shows advanced mathematics in its various elements/ratios/location etc. Math is a universal language, and one that would be respected by another alien race that has already mastered interstellar travel.

On top of that, the pyramid's shape and construction are earthquake-proof, so it can stay erected for thousands of years, as it has. I don't know if the Bauval/Hancock theory that the pyramids are older is correct, but it's of little consequence if my theory is correct.

Now, as to whether the pyramid's placement points to Sirius or not, I don't know. But it is possible to fathom that the builder alien race did leave a clue along the lines of “please inquire at the XYZ star system for access”, just like one would potentially put a telephone number on a Keep Off sign. But I don't think that's necessary.

As to why any alien race would care to “protect” this new human race by leaving them alone to develop in peace and erecting big “keep off” signs, I think that protective (or even possibly ownership) tendencies exist in all biological creatures. I don’t think that these aliens would be much different than us in basic behaviors if they have a biological base in this universe like we do. As above, so below.

Of course, as I mentioned in the beginning, this theory makes sense only if aliens built the pyramids. Which probably they didn’t. But it’s nice to spend the afternoon theorizing, if they did.


Comments

published by noreply@blogger.com (Kamil Ciemniewski) on 2017-01-18 13:35:00 in the "awk" category

Recently we've seen a spate of re-implementations of many popular Unix tools. With the expansion of communities built around new languages or platforms, it seems that apart from the novelties in technologies, the ideas on how to use them stay the same. There are more and more solutions to the same kinds of problems:

  • text editors
  • CSS pre-processors
  • find-in-files tools
  • screen scraping tools
  • ... many more ...

In this blog post I'd like to tackle the problem from yet another perspective. Instead of resorting to "new and cool" libraries and languages (grep implemented in X language), I'd like to use what's already out there in terms of tooling to build a nice search-in-files tool for myself.

Search in files tools

It seems that for many people it's very important to have a "search in files" tool that they really like. Some of the nice work we've seen so far include:

These are certainly very nice. But as the goal of this post is to build something out of the tooling found in any minimal Unix-like installation, they won't work here. They either need to be compiled or require Perl to be installed, which isn't available everywhere (e.g. not on a default FreeBSD install, though it's obviously available via the ports).

What I really need from the tool

I do understand that for some developers, waiting 100 ms longer for the search results might be too long. I'm not like that though. Personally, all I care about when searching is how the results are being presented. I also like the consistency of using the same approach between the many machines I work on. We're often working on remote machines at End Point. The need to install e.g. the Rust compiler just to get the ripgrep tool is too time-consuming and hence doesn't contribute to getting things done faster. The same goes for e.g. the_silver_searcher, which needs to be compiled too. What options do I have then?

Using good old Unix tools

The "find in files" functionality is covered fully by the Unix grep tool. It allows searching for a given substring, but also for "Regex" matches. The output can contain not only the lines with matches, but also the lines before and after, to give some context. The tool can provide line numbers and also search recursively within directories.
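
For reference, here is roughly how those capabilities map onto grep's standard switches (my own quick cheat sheet, not part of the original post):

# fixed-string search, recursive, with line numbers:
grep -FnR 'Option' src

# extended regular expression ("Regex") matches:
grep -EnR 'Option|Result' src

# context lines before (-B), after (-A), or both (-C) around each match:
grep -nR -C 2 'Option' src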

While I'm not into speeding it up, I'd certainly love to play with its output, because I do care about my brain's ability to parse text and hence be more productive.

The usual output of grep:

$ # searching inside of the ripgrep repo sources:
$ egrep -nR Option src
(...)
src/search_stream.rs:46:    fn cause(&self) -> Option<&StdError> {
src/search_stream.rs:64:    opts: Options,
src/search_stream.rs:71:    line_count: Option<u64>,
src/search_stream.rs:78:/// Options for configuring search.
src/search_stream.rs:80:pub struct Options {
src/search_stream.rs:89:    pub max_count: Option<u64>,
src/search_stream.rs:94:impl Default for Options {
src/search_stream.rs:95:    fn default() -> Options {
src/search_stream.rs:96:        Options {
src/search_stream.rs:113:impl Options {
src/search_stream.rs:160:            opts: Options::default(),
src/search_stream.rs:236:    pub fn max_count(mut self, count: Option<u64>) -> Self {
src/search_stream.rs:674:    pub fn next(&mut self, buf: &[u8]) -> Option<(usize, usize)> {
src/worker.rs:24:    opts: Options,
src/worker.rs:28:struct Options {
src/worker.rs:38:    max_count: Option<u64>,
src/worker.rs:44:impl Default for Options {
src/worker.rs:45:    fn default() -> Options {
src/worker.rs:46:        Options {
src/worker.rs:72:            opts: Options::default(),
src/worker.rs:148:    pub fn max_count(mut self, count: Option<u64>) -> Self {
src/worker.rs:186:    opts: Options,
(...)

What my eyes would like to see is more like the following:

$ mygrep Option src
(...)
src/search_stream.rs:
 46        fn cause(&self) -> Option<&StdError> {
 ⋮
 64        opts: Options,
 ⋮
 71        line_count: Option<u64>,
 ⋮
 78    /// Options for configuring search.
 ⋮
 80    pub struct Options {
 ⋮
 89        pub max_count: Option<u64>,
 ⋮
 94    impl Default for Options {
 95        fn default() -> Options {
 96            Options {
 ⋮
 113   impl Options {
 ⋮
 160               opts: Options::default(),
 ⋮
 236       pub fn max_count(mut self, count: Option<u64>) -> Self {
 ⋮
 674       pub fn next(&mut self, buf: &[u8]) -> Option<(usize, usize)> {

src/worker.rs:
 24        opts: Options,
 ⋮
 28    struct Options {
 ⋮
 38        max_count: Option<u64>,
 ⋮
 44    impl Default for Options {
 45        fn default() -> Options {
 46            Options {
 ⋮
 72                opts: Options::default(),
 ⋮
 148       pub fn max_count(mut self, count: Option<u64>) -> Self {
 ⋮
 186       opts: Options,
(...)

Fortunately, even the tiniest of Unix like system installation already has all we need to make it happen without the need to install anything else. Let's take a look at how we can modify the output of grep with awk to achieve what we need.

Piping into awk

Awk has been in Unix systems for many years; it's older than me! It is a programming language interpreter designed specifically to work with text. In Unix, we can use pipes to direct output of one program to be the standard input of another in the following way:

$ oneapp | secondapp

The idea with our searching tool is to use what we already have and pipe it between the programs to format the output as we'd like:

$ egrep -nR Option src | awk -f script.awk

Notice that we used egrep, when in this simple case we didn't need to; fgrep or plain grep would have been sufficient.
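
To illustrate the point (my own example, not from the original post), all three invocations behave identically for a plain-word pattern like this one:

egrep -nR Option src   # extended regexes; overkill for a literal word
grep  -nR Option src   # basic regexes; equivalent here
fgrep -nR Option src   # fixed strings only; also equivalent here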

Very quick introduction to coding with Awk

Awk is one of the forefathers of languages like Perl and Ruby. In fact some of the ideas I'll show you here exist in them as well.

The structure of awk programs can be summarized as follows:

BEGIN {
  # init code goes here
}

# "body" of the script follows:

/pattern-1/ {
  # what to do with the line matching the pattern?
}

/pattern-n/ {
  # ...
}

END {
  # finalizing
}

The interpreter provides default versions for all three parts: a "no-op" for BEGIN and END and "print each line unmodified" for the "body" of the script.

Each line is exploded into columns based on the "separator", which by default is any number of consecutive whitespace characters. One can change it via the -F switch or by assigning the FS variable inside the BEGIN block. We'll do just that in our example.

The "columns" that lines are being exploded into can be accessed via the special variables:

$0 # the whole line
$1 # first column
$2 # second column
# etc

The FS variable can contain a pattern too. So, for example, if we had a file with the following contents:

One | Two | Three | Four
Eins | Zwei | Drei | Vier
One | Zwei | Three | Vier

The following assignment would make Awk explode lines into proper columns:

BEGIN {
  FS="|"
}

# the ~ operator gives true if left side matches
# the regex denoted by the right side:
$1 ~ "One" {
  print $2
}

Running the above script against that file would result in:

$ cat file.txt | awk -f script.awk
Two
Zwei

Simple Awk coding to format the search results

Armed with this simple knowledge, we can tackle the problem we stated in the earlier part of this post:

BEGIN {
  # the output of grep in the simple case
  # contains:
  # <file-name>:<line-number>:<file-fragment>
  # let's capture these parts into columns:
  FS=":"
  
  # we are going to need to "remember" if the <file-name>
  # changes to print its name and to do that only
  # once per file:
  file=""
  
  # we'll be printing line numbers too; the non-consecutive
  # ones will be marked with the special line with vertical
  # dots; let's have a variable to keep track of the last
  # line number:
  ln=0
  
  # we also need to know we've just encountered a new file
  # not to print these vertical dots in such case:
  filestarted=0
}

# let's process every line except the '--' group separators and the
# lines grep prints to say if some binary file matched the predicate:
!/(--|Binary)/ {

  # remember: $1 is the first column which in our case is
  # the <file-name> part; The file variable is used to
  # store the file name recently processed; if the ones 
  # don't match up - then we know we encountered a new
  # file name:
  if($1 != file && $1 != "")
  {
    file=$1
    print "\n" $1 ":"
    ln = $2
    filestarted=0
  }

  # if the line number isn't greater than the last one by
  # one then we're dealing with the result from non-consecutive
  # line; let's mark it with vertical dots:
  if($2 > ln + 1 && filestarted != 0)
  {
    print "⋮"
  }

  # the substr function returns a substring of a given one
  # starting at a given index; we need to print out the
  # search result found in a file; here's a gotcha: the results
  # may contain the ':' character as well! simply printing
  # $3 could potentially leave out some portions of it;
  # this is why we're using the whole line, cutting off the
  # part we know for sure we don't need:
  out=substr($0, length($1 ":" $2 ": "))

  # let's deal with only the lines that make sense:
  if($2 >= ln && $2 != "")
  {
    # sprintf function matches the one found in C lang;
    # here we're making sure the line numbers are properly
    # spaced:
    linum=sprintf("%-4s", $2)
    
    # print <line-number> <found-string>
    print linum " " out
    
    # assign last line number for later use
    ln=$2
    
    # ensure that we know that we "started" current file:
    filestarted=1
  }
}

Notice that the "middle" part of the script (the one with the patterns and actions) gets run in an implicit loop, once for each input line.

To use the above awk script you could wrap it up with the following shell script:

#!/bin/bash

egrep -nR "$@" | awk -f script.awk

Here we're very trivially (and somewhat naively) passing all the arguments given to the script on to egrep with the use of "$@".

This of course is a simple solution. Some care needs to be applied to make it work with the -A, -B and -C context switches, though that isn't difficult either. All it takes is to e.g. pipe the output through sed (another great Unix tool, the "stream editor") to replace the initial '-' characters in the [filename]-[line-number] parts, to match our assumption of having ":" as the separator in the awk script.
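
As a rough, untested sketch of that idea (assuming GNU grep, which separates context lines with '-' instead of ':' and prints '--' between groups of matches), the wrapper could become:

#!/bin/bash

# Naive sketch: rewrite the "file-NN-" prefix on context lines to "file:NN:"
# so that the awk script's ":" separator assumption still holds. Filenames
# that themselves contain "-<digits>-" would confuse this simple pattern.
egrep -nR -C 2 "$@" \
  | sed -E 's/^([^:]+)-([0-9]+)-/\1:\2:/' \
  | awk -f script.awk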

In praise of "what-already-works"

A simple script like the one shown above could easily be placed in your GitHub, Bitbucket or GitLab account and fetched with curl on whichever machine you're working on. With one call to curl, and maybe another to put the scripts somewhere in the local PATH, you'd gain a productivity-enhancing tool that doesn't require anything to work beyond what you already have.
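
For example (the URL below is only a placeholder for wherever you keep the scripts, and you'd adjust the wrapper to reference the awk script by its full path):

curl -fsSL -o ~/bin/script.awk https://example.com/your-repo/raw/master/script.awk
curl -fsSL -o ~/bin/mygrep     https://example.com/your-repo/raw/master/mygrep.sh
chmod +x ~/bin/mygrep    # assuming ~/bin is already in your PATH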

I'll keep exploring "what we already have" so as not to fall into "what's hot and new" unnecessarily.


Comments

published by noreply@blogger.com (Ben Witten) on 2017-01-10 18:38:00 in the "event" category

This past week, End Point attended and exhibited at CES, a global consumer electronics and consumer technology tradeshow that takes place every January in Las Vegas, Nevada. End Point's Liquid Galaxy was set up in the Gigabyte exhibit at Caesar's Palace.

Gigabyte invited us to set up a Liquid Galaxy in their exhibit space because they believe the Liquid Galaxy is the best showpiece for their Brix hardware. The Brix, or "Brix GTX Pro" in this case, offers an Intel 6th-gen i7 processor and NVIDIA GTX 950 graphics (for high performance applications, such as gaming) in a small and sleek package (12 in. length, 9 in. width, 1 in. height). Since each Brix GTX Pro offers 4 display outputs, we only needed two Brix to run all 7 screens and the touchscreen, and one Brix to power the headnode!

This was the first time we have powered a Liquid Galaxy with Gigabyte Brix units, and the hardware proved to be extremely effective. It is a significantly sleeker solution than hardware situated in a server rack. It is also very cost-effective.

We created custom content for Gigabyte on our Content Management System. A video of one of our custom presentations can be viewed below. We built the presentation so that the GTX Pro product webpage was on the left-most screen, and the GTX Pro landing webpage was on the right-most screen. A custom Gigabyte video built for CES covered the center three screens. The Gigabyte logo was put on the 2nd screen to the left. In the background, the system was set to orbit on Google Earth. This presentation built for Gigabyte, which includes graphics, webpages, videos, and KML, demonstrates many of the capabilities of End Point's Content Management System and the Liquid Galaxy.

In addition to being a visually dazzling display tool for Gigabyte to show off to its network of customers and partners, Liquid Galaxy was a fantastic way for Gigabyte to showcase the power of their Brix hardware. The opportunity to collaborate with Gigabyte on the Liquid Galaxy was a welcome one, and we look forward to further collaboration.

To learn more about the Liquid Galaxy, you can visit our Liquid Galaxy website or contact us here.


Comments

published by Eugenia on 2017-01-01 20:54:49 in the "Filmmaking" category
Eugenia Loli-Queru

Contrary to popular belief, “Midnight Special” is a spiritual movie, turned sci-fi, turned spiritual again.

Some Q&A, and ***SPOILERS***:

1. Where did Alton come from?

– Alton was conceived and born normally by his parents, as any human is. The fact that he could reach other dimensions was a product of evolution. His father exhibits the same abilities (as shown at the very end when his eyes shine), but to a much smaller degree. The evolution of humankind to another state of being is teased.

2. Where did the alien structures come from?

– They were not “alien” in the traditional meaning. They were on a parallel Earth (or another dimension), an Earth that had a different evolutionary path than ours. Parallel dimensions are hidden from our awareness under normal circumstances, but the biological evolutionary step mentioned above made it possible for Alton.

3. What were these beings?

– These were light beings. That’s where the spiritual part comes in: “light beings” are considered in spiritual circles to be very advanced entities. It’s been teased by the movie that that’s where humanity’s future lies too. Also telling is that Alton is reborn by the sun (the light that gives life to everything in our planet).

4. What was the point of the movie?

– Alton is a messianic figure, just not in the traditional terms. The cult thought that he was literally a religious figure, while Alton is messianic in a more subtle way: he reluctantly gives a preview to humanity of what lies ahead for them. The movie is about humanity’s “first glimpse” of how expansive the Cosmos is. Not just in terms of aliens travelling from planet A to planet B (in the same universe) as all traditional sci-fi movies have been for so long, but also in parallel, and also up and down and inner and outer (in other words, the Cosmos is a web in all directions of different universes and dimensional existences). That’s next-level sci-fi. That’s the border between sci-fi and New Age spirituality (without the negative baggage that usually accompanies it in the minds of most people).

5. Ugh, so New Age hogwash was the point of the movie?

– No. What people today call "spirituality" is really science that hasn't been understood yet. And since science can't explain it yet, some "faith" might be required in the meantime for those who had direct experience with it. This is why it was so important for Lucas to say "I believe", because after he had his direct experiences with Alton, he made the leap to faith. But belief is to be transcended by hard data, otherwise it becomes dogma, which keeps humanity down. This is what the movie is going for too: the leap from unbelief, to belief, to hard data, and not to dogma (which the organized religion/cult had fallen victim to).

The most telling scene for this interpretation is at the very end, when Dunst's character is cutting her hair. You can interpret that scene as her simply trying to get away from the FBI, so she needs to change her appearance. A deeper explanation, though, would be that Dunst's character is now free from religion and dogma. You see, her braids were the same as those of the women in the cult. Even though she had left the cult, she was still bound by its beliefs for years after. By cutting off the braids, she's now free from such beliefs and dogma; she understands that the cosmos is more expansive, and that said expansiveness is not necessarily "religious" in nature, but rather "just is".

This was, for me, the best movie of 2016. The most forward-looking, and the most "edgy" sci-fi movie of them all, by moving the needle of sci-fi from caricature super-heroes, monsters, and A-to-B aliens, to a more expansive terrain that's richer in potential. As an ex-filmmaker myself, that's the kind of sci-fi I always wanted to make too (I'm a meta-psychedelic visual artist now).

6. So why didn’t so many people get it?

– It's because most people aren't versed in such cosmological ideas. Even though Alton did explain it at some point, about a "world on top of ours", that still didn't register with most people. Most viewers needed a far more spoon-fed explanation to get it (and maybe they should have received one; it's a failure of the production companies involved not to insist that the director provide it).

Additionally, the press’ comparison of this movie to the ’80s Spielberg movies didn’t help at all, because this movie had absolutely nothing to do with these older movies (people went to the cinema expecting something specific and recognizable, and they got something completely different instead). So they found the movie a boring dud, as if without significance, and with a WTF ending. But there is significance in the movie, it tells of a larger world that we will eventually reach one way or another, but that we must have faith until that day comes, when that faith transforms from belief to hard scientific data.

This is no different than if the year were 1870 and Jules Verne were trying to convince people that one day we would have the technology to reach the stars, or the depths of the sea, only to have people think he was crazy, or that it was just a "fantasy story without significance". All it needed was some faith in the natural process of technological and/or biological evolution. That's what the filmmaker is asking of you today too.


Comments

published by noreply@blogger.com (Ben Witten) on 2016-12-29 21:56:00 in the "Liquid Galaxy" category

An article was posted on The Tech Broadcast last week that featured the UNC Chapel Hill Center for Faculty Excellence's Faculty Showcase. The faculty showcase included a fantastic presentation featuring the many ways students and faculty use their Liquid Galaxy, and discussed other opportunities for using the system in the future.

Exciting examples cited of great classroom successes making use of the Liquid Galaxy include:

  1. A course offered at UNC, Geography 121 People and Places, requires its students to sift through data sets and spend time in the GIS lab/research hub making maps using the data they've collected. The goal of this assignment is to demonstrate understanding of diversity within particular geographic entities. The students use the Liquid Galaxy to present their findings. Examples of studies done for this project include studies of fertility, infant mortality, income inequality, poverty, population density, and primary education.

  2. A group of students working in lab found that the household income of a particular municipality was many times greater than all surrounding municipalities. By looking around on the Liquid Galaxy, they discovered an enormous plantation in a very rural area. They were then able to understand how that plantation skewed the data from the entire municipality.

  3. While studying a web map, students found that average life expectancy dropped by a decade within a very short distance. They decided to look at the Liquid Galaxy to see whether they could make any conclusions by viewing the area. By using the Liquid Galaxy, the students were able to think about what the data looks like, not just statistically but on Earth.

  4. A Geography teacher gave a lecture about the geography of Vietnam. The teacher used the Liquid Galaxy to give the class a tour of Vietnam and show how the different areas factored into the course. The teacher asked the class where within Vietnam they'd like to go, and was able to take the students to the different geographical areas on the Liquid Galaxy and tell them in detail about those areas while they had the visual support of the system.

  5. A geography class called The Geography of Latin America focuses on extractive industries. The class discusses things like agriculture in South America, and the percentage of land in Brazil that is used for soy production. The faculty reports that seeing this information in an immersive environment goes a long way in teaching the students.

  6. Urban planning students use the Liquid Galaxy when looking into urban revitalization. Uses for these students include using the system to visit the downtown areas and see firsthand what the areas look like to better understand the challenges that the communities are facing.

  7. Students and faculty have come to the Liquid Galaxy to look at places that they are about to travel to abroad, or are thinking about traveling to, in order to prepare for their travels. An example given was a Master of Fine Arts student, a sculptor, who was very interested in areas with great quantities of rock and ice. She traveled around on the Liquid Galaxy and looked around Iceland. Researching the region on the Liquid Galaxy helped to pique her interest and ultimately led to her going to Iceland to travel and study.

During the faculty showcase, faculty members listed off some of the great benefits of having the Liquid Galaxy as a tool that was available to them.

  1. The Liquid Galaxy brought everyone together and fostered a class community. Teachers would often arrive at classes that utilize the Liquid Galaxy and find that half the students were already there early. Students would find places (their homes, where they studied abroad, and more), and friendships between students would develop as a result of the Liquid Galaxy.

  2. Liquid Galaxy helps students with geographic literacy. They are able to think about concepts covered in class, and fly to and observe the locations discussed.

  3. Students often bring parents and family to see the Liquid Galaxy, which is widely accessible to students on campus. Students are always excited to share what they're doing with the system, with family and with faculty.

  4. Faculty members have commented that students who don't ask questions in class have been very involved in the Liquid Galaxy lessons, which could be in part because some students are more visual learners. These visual learners find great benefit in seeing the information displayed in front of them in an interactive setting.

  5. From a faculty standpoint, a lot of time was spent planning and trying to work out the class structure, which has developed a lot. Dedicating class-time for the Liquid Galaxy was beneficial, and resulted in teaching less but in more depth and in different ways. The teacher thinks there was more benefit to that, and it was a great learning experience for all parties involved.

Faculty members expressed interest and excitement when learning more about the Liquid Galaxy and the ways it is used. There was a lot of interest in using the Liquid Galaxy for interdisciplinary studies between different departments to study how different communities and cultures work. There was also interest in further utilization of the system's visualization capabilities. A professor from the School of Dentistry spoke of how he could picture using the Liquid Galaxy to teach someone about an exam of the oral cavity. Putting up 3D models of the oral cavity using our new Sketchfab capabilities would be a perfect way to achieve this!

We at End Point were very excited to learn more about the many ways that Liquid Galaxy is being successfully used at UNC as a tool for research, for fun, and to bring together students and faculty alike. We look forward to seeing how UNC, among the many other research libraries that use Liquid Galaxy, will implement the system in courses and on campus in the future.


Comments

published by noreply@blogger.com (Elizabeth Garrett Christensen) on 2016-12-23 16:30:00 in the "AngularJS" category

Carjojo's site makes use of some of the best tools on the market today for accessing and displaying data. Carjojo is a car buying application that takes data about car pricing, dealer incentives, and rebate programs and aggregates it into a location-specific vehicle pricing search tool. The Carjojo work presented a great opportunity for End Point to utilize our technical skills to build a state-of-the-art application everyone is very proud of. End Point worked on the Carjojo development project from October of 2014 through early 2016, and the final Carjojo application launched in the summer of 2016. This case study shows that End Point can be a technology partner for a startup, enabling the client to maintain their own business once our part of the project is over.

Why End Point?

Reputation in full stack development

End Point has deep experience with full stack development so for a startup getting advice from our team can prove really helpful when deciding what technologies to implement and what timelines are realistic. Even though the bulk of the Carjojo work focused on specific development pieces, having developers available to help advise on the entire stack allows a small startup to leverage a much broader set of skills.

Startup Budget and Timelines

End Point has worked with a number of startups throughout our time in the business. Startups require particular focused attention on budget and timelines to ensure that the minimum viable product can be ready on time and that the project stays on budget. Our consultants focus on communication with the client and advise them on how to steer the development to meet their needs, even if those shift as the project unfolds.

Client Side Development Team

One of the best things about a lot of our clients is their technological knowledge and the team they bring to the table. In the case of Carjojo, End Point developers fit inside of their Carjojo team to build parts that they were unfamiliar with. End Point developers are easy to work with and already work in a remote development environment, so working in a remote team is a natural fit.

Client Side Project Management

End Point works on projects where either the project management is done in-house or by the client. In the case of a project like Carjojo where the client has technical project management resources, our engineers work within that team. This allows a startup like Carjojo insight into the project on a daily basis.

Project Overview

The main goal of the Carjojo project was to aggregate several data sources on car pricing, use data analytics to produce useful shopper information, and display that for their clients.
Carjojo's staff had experience in the car industry and leveraged that to build a sizeable database of information. Analytics work on the database provided another layer of information, creating a time- and location-specific market value for a vehicle.

Carjojo kept the bulk of the database collection and admin work in house, and provided an in-house designer who worked closely with the team on the vision for the project. End Point partnered with them to do the API architecture work as well as the front end development.

A major component of this project was using a custom API to pull information from the database and display it quickly with high-end, helpful infographics. Carjojo opted to use APIs so that the coding work would integrate seamlessly with future plans for a mobile application, which would normally require a substantial amount of recoding.

Creating a custom API also allows Carjojo to work with future partners and leverage their data and analytics in new ways as their business grows.

Team

Patrick Lewis: End Point project manager and front end developer. Patrick led development of the AngularJS front end application which serves as the main customer car shopping experience on the Carjojo site. He also created data stories using combinations of integrated Google Maps, D3/DimpleJS charts, and data tables to aid buyers with car searches and comparisons.



Matt Galvin: Front end developer. Matt led the efforts for data-visualization with D3 and DimpleJS. He created Angular services that were used to communicate with the backend, used D3 and DimpleJS to illustrate information graphically about cars, car dealers, incentives, etc., sometimes neatly packaging them into directives for easy re-use when the case fit. He also created a wealth of customizations and extensions of DimpleJS which allowed for rapid development without sacrificing visualization quality.



Josh Williams: Python API development. Josh led the efforts in connecting the database into Django and Python to process and aggregate the data as needed. He also used TastyPie to format the API response and created authentication structures for the API.

 




Project Specifics

API Tools

Carjojo's project makes use of some of the best tools on the market today for accessing and displaying data. Django and Tastypie were chosen to allow for rapid API development and to keep the response time down on the website. In most cases the Django ORM was sufficient for generating queries from the data, though in some cases custom queries were written to better aggregate and filter the data directly within Postgres.

To use the location information in the database, some GIS location smarts were tied into Tastypie. Location searches tied into GeoDjango and generated PostGIS queries in the database.

Front End Tools

D3 is the standard in data visualization and is great for doing both simple and complicated graphics. Many of Carjojo's graphs were bar graphs and pie charts, and didn't really require writing out D3 by hand. We also wanted to make many of them reusable and dynamic (often based on search terms or inputs) with the use of Angular directives and services. This could have been done with pure D3, but Dimple makes creating simple D3 graphs easy and fast.

DimpleJS was used a lot in this project. Since Carjojo is data-driven, they wanted to display their information in an aesthetically pleasing manner, and DimpleJS allowed us to quickly spin up visualizations against some of the project's tightest deadlines.

The approach worked well for most cases. However, sometimes Carjojo wanted something slightly different than what DimpleJS does out of the box. One example of DimpleJS customization work can be found here on our blog.

Another thing to note about the data visualizations was that sometimes when the data was plotted and graphed, it brought to light some discrepancies in the back-end calculations and analytics, requiring some back-and-forth between the Carjojo DBA and End Point.

Results

Carjojo had a successful launch of their service in the summer of 2016. Their system has robust user capabilities, a modern clean design, and a solid platform to grow from. The best news for Carjojo is that now the project has been turned back over to them for development. End Point believes in empowering our clients to move forward with their business and goals without us. Carjojo knows that we?ll be here for support if they need it.







Comments

published by noreply@blogger.com (Ben Witten) on 2016-12-21 18:39:00 in the "office" category
Our office-mates are leaving, and we are looking to fill their desk space. There are 8 open desks available, including one desk in a private office.

Amenities include free wifi, furniture, conference room access, kitchen access, regular office cleaning, and close proximity (one block) to Madison Square Park.

Our company, End Point, is a tech company that builds ecommerce sites, and also develops the Liquid Galaxy. There are typically 4 or 5 of us in the office on a given day. We are quiet, friendly, and respectful.

Please contact us at ask@endpoint.com for more information.


Comments

published by noreply@blogger.com (Patrick Lewis) on 2016-12-21 16:19:00 in the "database" category

Rails seed files are a useful way of populating a database with the initial data needed for a Rails project. The Rails db/seeds.rb file contains plain Ruby code and can be run with the Rails-default rails db:seed task. Though convenient, this "one big seed file" approach can quickly become unwieldy once you start pre-populating data for multiple models or needing more advanced mechanisms for retrieving data from a CSV file or other data store.

The Seedbank gem aims to solve this scalability problem by providing a drop-in replacement for Rails seed files that allows developers to distribute seed data across multiple files and provides support for environment-specific files.

Organizing seed files in a specific structure within a project's db/seeds/ directory enables Seedbank to either run all of the seed files for the current environment using the same rails db:seed task as vanilla Rails or to run a specific subset of tasks by specifying a seed file or environment name when running the task. It's also possible to fall back to the original "single seeds.rb file" approach by running rails db:seed:original.

Given a file structure like:

db/seeds/
  courses.seeds.rb
  development/
    users.seeds.rb
  students.seeds.rb

Seedbank will generate tasks including:

rails db:seed                   # load data from db/seeds.rb, db/seeds/*.seeds.rb, and db/seeds/[ENVIRONMENT]/*.seeds.rb
rails db:seed:courses           # load data from db/seeds/courses.seeds.rb
rails db:seed:common            # load data from db/seeds.rb, db/seeds/*.seeds.rb
rails db:seed:development       # load data from db/seeds.rb, db/seeds/*.seeds.rb, and db/seeds/development/*.seeds.rb
rails db:seed:development:users # load data from db/seeds/development/users.seeds.rb
rails db:seed:original          # load data from db/seeds.rb

I've found the ability to define development-specific seed files helpful in recent projects for populating 'test user' accounts for sites running in development mode. We've been able to maintain a consistent set of test user accounts across multiple development sites without having to worry about accidentally creating those same test accounts once the site is running in a publicly accessible production environment.

Splitting seed data from one file into multiple files does introduce a potential issue when the data created in one seed file is dependent on data from a different seed file. Seedbank addresses this problem by allowing for dependencies to be defined within the seed files, enabling the developer to control the order in which the seed files will be run.

Seedbank runs seed files in alphabetical order by default but simply wrapping the code in a block allows the developer to manually enforce the order in which tasks should be run. Given a case where Students are dependent on Course records having already been created, the file can be set up like this:

# db/seeds/students.seeds.rb
after :courses do
  course = Course.find_by_name('Calculus')
  course.students.create(first_name: 'Patrick', last_name: 'Lewis')
end

The added dependency block will ensure that the db/seeds/courses.seeds.rb file is executed before the db/seeds/students.seeds.rb file, even when the students file is run via a specific rails db:seed:students task.

Seedbank provides additional support for adding shared methods that can be reused within multiple seed files and I encourage anyone interested in the gem to check out the Seedbank README for more details. Though the current 0.4 version of Seedbank doesn't officially have support for Rails 5, I've been using it without issue on Rails 5 projects for over six months now and consider it a great addition to any Rails project that needs to pre-populate a database with a non-trivial amount of data.


Comments

published by noreply@blogger.com (Jon Jensen) on 2016-12-13 22:31:00 in the "company" category

Update: This position has been filled! Thanks to everyone who expressed interest.

This role is based in our Bluff City, Tennessee office, and is responsible for everything about fulfillment of our Liquid Galaxy and other custom-made hardware products, from birth to installation. See liquidgalaxy.endpoint.com to learn more about Liquid Galaxy.

What is in it for you?

  • Interesting and exciting startup-like atmosphere at an established company
  • Opportunity for advancement
  • Benefits including health insurance and self-funded 401(k) retirement savings plan
  • Annual bonus opportunity

What you will be doing:

  • Manage receiving, warehouse, and inventory efficiently
  • Oversee computer system building
  • Product testing and quality assurance
  • Packing
  • Shipment pick-up
  • Communicate with and create documents for customs for international shipping
  • Be the expert on international shipping rules and regulations
  • Delivery tracking and resolution of issues
  • Verify receipt of intact, functional equipment
  • Resolve RMA and shipping claims
  • Help test and implement any new warehouse software and processes
  • Design and implement new processes
  • Use effectively our project software (Trello) to receive and disseminate project information
  • Manage fulfillment employees and office facility
  • Work through emergency situations in a timely and controlled manner
  • Keep timesheet entries up to date throughout the day

What you will need:

  • Eagerness to "own" the fulfillment process from end to end
  • Exemplary communication skills with the entire company
  • High attention to detail
  • Consistent habits of reliable work
  • Ability to make the most of your time and resources without external micromanagement
  • Desire, initiative, and follow-through to improve on our processes and execution
  • Work with remote and local team members
  • Strive to deliver superior internal customer service
  • Ability to work through personnel issues
  • Go above and beyond the call of duty when the situation arises

About End Point:

End Point is a 21-year-old Internet consulting company with 50 full-time employees working together from our headquarters in New York City, our office in eastern Tennessee, and home offices around the world. We serve over 200 clients ranging from small family businesses to large corporations, using a variety of open source technologies. Our team is made up of strong product design, software development, database, hardware, and system administration talent.

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of gender, race, religion, color, national origin, sexual orientation, age, marital status, veteran status, or disability status.

Please email us an introduction to jobs@endpoint.com to apply. Include your resume and anything else that would help us get to know you. We look forward to hearing from you! Full-time employment seekers only, please. No agencies or subcontractors.


Comments

published by noreply@blogger.com (Marco Matarazzo) on 2016-12-12 18:32:00 in the "bash" category

Let's say you're working in Bash, and you want to loop over a list of files, using wildcards.

The basic code is:

#!/bin/bash
for f in /path/to/files/*; do
  echo "Found file: $f"
done

Easy as that. However, there could be a problem with this code: if the wildcard does not expand to any actual files (i.e. there's no file under the /path/to/files/ directory), $f will expand to the pattern string itself, and the for loop will still be executed once with $f containing "/path/to/files/*".

How to prevent this from happening? Nullglob is what you're looking for.

Nullglob, quoting the bash man page, "allows filename patterns which match no files to expand to a null string, rather than themselves".

Using shopt -s you can enable optional Bash behaviors, like nullglob. Here's the final code:

#!/bin/bash
shopt -s nullglob
for f in /path/to/files/*; do
  echo "Found file: $f"
done

Another interesting option you may want to check for, supported by Bash since version 3, is failglob.

With failglob enabled, quoting again, "patterns which fail to match filenames during filename expansion result in an expansion error". Depending on what you need, that could even be a better behavior.
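
A quick sketch of the difference (assuming /path/to/files/ contains no files):

#!/bin/bash
shopt -s failglob
for f in /path/to/files/*; do
  echo "Found file: $f"
done
# With failglob set and nothing matching, Bash reports an expansion error
# (something like "no match: /path/to/files/*") and the loop body never runs.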

Wondering why nullglob isn't the default behavior? Check out this very good answer to the question.


Comments

published by Eugenia on 2016-12-11 09:58:52 in the "Collage" category
Eugenia Loli-Queru

Well, to make money from art you gotta sell. There are three steps to making money online with art (and not via the old way of galleries and shows):

1. You must create easily-digestible “pop” art. More on this here.

2. You must market yourself. And you do that by getting A LOT of Instagram followers. Instagram has 10x the purchasing power of any other social network. Back in the day Tumblr was big, and later Facebook, but today only Instagram is worth your time. So, make sure your Instagram posts are very tidy, you use the right tags each time, and when you get blog articles about your art, ask them to also link to your Instagram account. From there, each time you want to mention your shop or a sale you're having, an Instagram post will do wonders for you.

3. You must sell at the right shops, using the right products on each. Don't try uploading everything to all your shops. You have to be selective, depending on the profit provided. The last thing you want is to start selling your most popular artwork as stationery cards, for a profit of $1.20. Be smart. And here's how to be smart:

A. Sell your own prints. That’s where you will make as much as 80% profit. I wrote a blog post about how to do that here. Use TicTail the way I do: you sell your own prints there, but you also link to products on third party shops. Consider your own signed prints shop your studio & gallery.

B. On Society6 you must have a three-tier system: your most popular artworks get uploaded ONLY as art prints, framed prints, canvases, and metal prints. These are the only products that allow you to set your own profit. Set a profit that is high enough to get you some good earnings, but still accessible for consumers. For the second-tier artworks, also export and enable other products, just make sure they're of the expensive kind (e.g. shower curtains). This way, these products do exist for those who want them, but they don't compete with your prints in terms of pricing (because otherwise they'd be on equal footing price-wise). And the third tier, the least popular artworks (don't upload the ones that aren't at least a bit popular at all, btw), you export for everything. Personally, I still avoid some products completely because their prices are so low they only make artists a dollar-something: stationery cards, iPod skins, hand towels, and also apparel (I only use all-over prints, which look way better). Consider Society6 your mall shop. PROS: lots of shoppers. CONS: crippled by software bugs, doesn't allow custom pricing for everything.

C. RedBubble allows you to set your own prices for all products, which is a huge advantage. Set a good profit for all products in your settings. The problem with RedBubble is that not as many people use it for art as they do Society6. Therefore, at RedBubble upload only the artworks that look good on products, e.g. apparel or clocks. Make sure you have a very high profit margin for photo prints and posters, because these will eat away at your art print sales if they're too cheap. I also always disable stationery cards and stickers there. Consider RedBubble your retail shop around the corner. PROS: custom pricing for everything. CONS: a bit more difficult to be found there.

D. Curioos is a beautiful shop (the most beautiful of all), but you only make 10% there for art prints, canvas, and metal prints. You can make 16% if you upload an artwork ONLY there (not a great idea). However, there is a trick to get that 16%: enable only acrylic prints, disk prints, and die-cuts (if some of your artworks are eligible), and don't enable prints/frames/canvas/metal at all. Because no other shop carries these three kinds of prints (acrylic, disk, die-cuts), you can select the 16% exclusive-edition option. Even with 16% though, you won't make much money there (I recently sold 16 acrylic and disk prints there and made only $200, while for the same prints on paper I'd make $900 at my own Tictail shop). That's why it's best to upload to Curioos only the less popular half of your artworks. You don't want these exotic types of prints to cannibalize your own print sales. Consider Curioos your boutique. PROS: beautiful, functional. CONS: low profit.

E. Zazzle, LiveHeroes, DesignedbyHumans, Fab.com, Fancy.com and a few others: it's up to you whether you want shops there too or not. If you do open shops there, again, don't upload your most popular artworks, because you won't make much. For example, fab.com only pays 6%, fancy.com is a pain in the butt to upload new artworks to, LiveHeroes and DesignedbyHumans don't pay much at all, and zazzle.com is OK (it allows up to 99% custom profit over the base price) but is a really messy website. Consider these shops like a remote gas station shop where you mostly enter just to take a piss after a long drive, but sometimes you feel obligated to buy a bag of beef jerky because the cashier is looking at you funny.


Comments

published by Eugenia on 2016-12-09 19:30:32 in the "Collage" category
Eugenia Loli-Queru

Being in the art business for almost 5 years now has given me a good instinct about what sells and what doesn't. Basically, what sells are artworks that are:

1. Easily comprehended visually with a single look that doesn’t take more than 0.3 seconds. This usually means: a main element right in the center of the artwork.

2. Depiction of something super-easy to understand that the viewer identifies with: e.g. eating, sleeping, taking a bath, driving, playing with a cat, being next to flowers.

3. The next step is to make these mundane, everyday depictions surreal: e.g. sleeping on top of Saturn, driving on a road to a nebula, sitting under giant flowers. Basically, take people’s bored existence and make it more interesting. In this case, the art functions as a get-away drug.

4. The most successful kind of art today, and the simplest of all, is substitution. For example, instead of the ear piece on old style landline phones, you replace it with a banana. Or, instead of bombs, you get the airplane to drop candy. The human brain immediately lights up in such substitutions because it takes less than a second for the individual to “get it”, and so it rewards itself the same way it gets rewarded when playing Tetris. Again, art functions here as a drug, not as an intellectual discourse.

Examples of things people absolutely love:



Examples of more serious art that people don't bother to look at, because they're either too visually complex, or because it hurts their brain too much to think about what's depicted:


(this one has a full blown explanation too)


Overall, I'm a successful artist; I can't complain about that. But it bothers me that I'm selling easily digestible crap, instead of more interesting, often abstract art (called "dada" in collage circles). Only 1% of what I sell overall is serious art (and yes, I have created a number of such pieces; it's just that people don't prefer them). I want to be remembered for having created something worthwhile, not (essentially) memes that provide the odd smile for half a second before people move on to the next item on their Instagram feed.

I wouldn't mind the easier artworks if there were some kind of balance between the two types among consumers. But when people prefer the easy ones 99 times out of 100, there's a problem. And the problem is not just with me, because the same thing happens with pretty much all serious artworks from other artists (e.g. dada collages). This is why the majority of those artists can't make a good buck out of their work to sustain themselves financially. It's because their artworks aren't "pop" enough. Sad, but true.


Comments