With the purchase of my Google Pixel 2 XL phone, a Google Home Mini came in for free. I had conflicting thoughts about using such a “listening device” in my home. I’m sure no one would want a… spy in their own home.
But then again, think of this: the exact same functionality that exists in these small home devices exists in every smartphone today. Every single one of them. The only difference is that you can always hide your phone in another room if you want to have a spy-free conversation, while with such a device, you’d have to unplug it.
As you may already know, I have the most interesting dreams, hehe…
Apart from seeing weird alien entities during my nap time, I was also shown how the CubeSat idea can be properly commercialized. Beat that, MIT (or DMT).
So basically, I was shown a bunch of 3U CubeSats (around 10 or 12 of them), held together by some sort of string, forming a web. At the edges of the web, there were semi-large solar panels and antennas, while in the middle of the web, there was a propulsion engine, not larger than a 3U CubeSat itself.
Right now, all CubeSats are released into the wild on their own, with no propulsion (sometimes they end up facing the wrong way), terrible power capabilities, and even more terrible communication (FM, among others!!!). These satellites usually die within 3-5 months, quickly burning up in the atmosphere. On top of that, they usually get released as secondary payload into LEO, while CubeSats would benefit from a higher SSO orbit.
Here’s the business idea behind what I saw:
– You let customers buy one of the CubeSats and customize it from an array of the most popular components (third-party components that pass evaluation can be accepted — that costs extra).
– The CubeSats run Android, so writing drivers for them, updating them over the air, or even completely wiping them back to their default state is all possible. Each of the 12 CubeSats runs a slightly different version of the OS and has different hardware, depending on the customer’s needs.
– The customer can access their CubeSat via a secure backend on the manufacturer’s website. Even firmware updates can be performed, not just app updates or data downlinks.
– Because of the shared propulsion, the constellation web can be in SSO for up to 5 years.
– 1 year of backend support is included in the overall price, but after that time, owners can continue using it for an additional fee, or lease or sell the rights to their CubeSat to another commercial entity, getting back some of that invested value.
– Even if one CubeSat goes bad, the others continue to work, since they’re independent of each other. A triple-redundancy system protects against shorts. To avoid over-usage of power due to faulty hardware or software (which could run down the whole system), a pre-agreed power budget is allocated to each CubeSat daily.
– Eventually, a more complex system could be developed, under agreement with all the responsible parties, to have CubeSats share information with their neighboring CubeSats (either over an internal wired network, or Bluetooth — whatever proves more secure and fast). For example, if one CubeSat in the web has a hardware capability the others don’t, and one of the other CubeSats needs it, they could ask for its service — for the right price.
– Instead of dispensing the CubeSats one by one, the web is deployed as a single machine, about two-thirds the size of a dishwasher. The CubeSat specification imposes strict weight limits, so overall, while the volume is medium-sized, the total weight doesn’t have to exceed 100 kg. That easily fits within the payload capacity of small, inexpensive rockets, like the upcoming RocketLab Electron, which costs just $4.9 million per launch. A Falcon 9 becomes cheaper only if it could launch 13 of these webs at once; while it can very easily lift their weight, it might not have the volume required (the Falcon 9 fairing is rather small at 3.2 m).
– This comes to about $600,000 per CubeSat (with a rather normal configuration).
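The pre-agreed daily power budget mentioned above is essentially a per-satellite quota. A minimal sketch of such an allocator (class name, satellite IDs, and watt-hour figures are all hypothetical, just to illustrate the mechanism):

```python
class PowerAllocator:
    """Enforce a pre-agreed daily energy budget per CubeSat, so one
    faulty unit can't drain the web's shared power system."""

    def __init__(self, daily_budgets_wh):
        # daily_budgets_wh: dict of CubeSat id -> watt-hours allowed per day
        self.budgets = dict(daily_budgets_wh)
        self.used = {sat: 0.0 for sat in daily_budgets_wh}

    def request(self, sat, wh):
        """Grant power only while the CubeSat is within its daily budget."""
        if self.used[sat] + wh > self.budgets[sat]:
            return False  # over budget: deny, the rest of the web keeps working
        self.used[sat] += wh
        return True

    def new_day(self):
        """Reset all counters at the start of a new day."""
        self.used = {sat: 0.0 for sat in self.used}


alloc = PowerAllocator({"sat-1": 40.0, "sat-2": 40.0})
assert alloc.request("sat-1", 30.0)      # within budget: granted
assert not alloc.request("sat-1", 15.0)  # would exceed 40 Wh: denied
assert alloc.request("sat-2", 15.0)      # other CubeSats are unaffected
```

The point of the pattern is isolation: a runaway payload on one CubeSat hits its own cap and nothing else.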
The current 3U CubeSats cost anywhere between $20k and $50k to build, plus another $200k or so to launch. Overall, sure, $600k is more than the current going price, but with the web idea you get enough power, communication that doesn’t suck, propulsion, and an extended life — plus the prospect of actually making money from them by leasing or selling them. A lot of the revenue will come after the launch, as a service/marketplace business.
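The numbers in the post roughly reconcile, as a back-of-the-envelope calculation (the breakdown of the remainder is my assumption, not anything from the dream or from real vendor pricing):

```python
# Back-of-the-envelope cost model using only figures quoted in the post.
LAUNCH_COST = 4_900_000   # RocketLab Electron launch, per the post
SATS_PER_WEB = 12         # one web of roughly 12 CubeSats
BUILD_COST = 50_000       # upper end of the quoted $20k-$50k build cost
STICKER_PRICE = 600_000   # per-CubeSat price quoted in the post

launch_share = LAUNCH_COST / SATS_PER_WEB   # launch cost split across the web
base_cost = launch_share + BUILD_COST       # launch share + building one sat
remainder = STICKER_PRICE - base_cost       # what's left for the propulsion
                                            # module, shared panels/antennas,
                                            # backend, and profit (assumption)
print(f"launch share per sat: ${launch_share:,.0f}")
print(f"base cost per sat:    ${base_cost:,.0f}")
print(f"remainder per sat:    ${remainder:,.0f}")
```

That works out to roughly $408k of launch share and $458k of base cost per CubeSat, leaving about $142k of headroom inside the $600k sticker price.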
In a sense, this business idea is the equivalent of a shared hosting service, which revolutionized the way servers work and democratized people’s ability to run code or servers online. PlanetLabs is doing something similar by leasing “time” on their CubeSats, but by releasing them one by one, they suffer from the shortcomings stated above.
For all of this to come true, the CubeSats themselves would need an overhaul of their modularity and customizability, plus easy access to the latest mobile hardware. Overall, we’re probably 2-3 years away from such an idea even starting to materialize, and possibly 5 years away from it becoming reality. I haven’t seen anyone else suggest it, so, here I am. Thank my weird dreams.
As I wrote in the past, my first job out of college was working on an artificial intelligence project. The idea back then among engineers in the field was that, given enough smart programming and a tree-like knowledge database, we would eventually manage to create true artificial intelligence (AI). When I was working on the project, I truly believed in it. It was the coolest thing in my life up to that point! As the years went by, I realized that AI via that model is nothing but a fantasy. A fantasy circulating in scientific circles since the late 1960s.
These days, there are a lot of people who have become obsessed with the idea of the singularity (e.g. Ray Kurzweil), while others are sh1tting their pants about how “disastrous” it could be for the human race (Elon Musk being one of them).
Artificial intelligence has progressed since the 1990s, since it’s no longer modeled around programming the knowledge directly into it; instead, it “learns” via the Internet. Meaning, by analyzing the behaviors and actions of internet users, Google can “guess” the most logical action for their AI to take in each given situation (crowd-sourcing the intelligence). Crowd-sourcing is a far smarter way to have your AI “learn” than the “static” way of teaching an AI system word-by-word as in the past.
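The crowd-sourced “learning” described above boils down to logging what real users did in each situation and then guessing the most common action. A toy sketch (the class, the situations, and the actions are all made up for illustration):

```python
from collections import Counter, defaultdict

class CrowdSourcedAI:
    """Learns by observing what real users did in each situation,
    then 'guesses' the most common action for that situation."""

    def __init__(self):
        # situation -> Counter of actions users took in that situation
        self.log = defaultdict(Counter)

    def observe(self, situation, action):
        self.log[situation][action] += 1

    def act(self, situation):
        actions = self.log[situation]
        if not actions:
            return None  # never observed: it has no crowd to imitate
        return actions.most_common(1)[0][0]


ai = CrowdSourcedAI()
for action in ["reply", "reply", "archive"]:
    ai.observe("new email", action)
assert ai.act("new email") == "reply"     # imitates the majority
assert ai.act("unseen situation") is None  # helpless outside observed behavior
```

Note how the last line also illustrates the “exogenous” limitation argued below: such a system can only ever replay decisions the crowd has already made.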
However, even with crowd-sourcing, we will hit a wall in what AI can do for us via that method. It can surely become as “smart” as Star Trek’s Enterprise computer, which is not very smart. What we really want when we talk about AI is Mr Data, not the rather daft Enterprise computer. So we need a new model on which to base the design of our machines. Crowd-sourcing comes close, but it’s an exogenous way of evolution (because it makes decisions based on actions already taken by the people it crowd-sources from), and so it can never evolve into true consciousness. True consciousness means free will, so a crowd-sourcing algorithm can never be “free”; it will always be a slave to the actions it was modeled on. In other words, that crowd-sourcing AI would always behave “too robotic”.
It took me 20 years to realize why AI hasn’t worked so far, and why our current models will never work. The simple reason being: just like energy, you can’t create consciousness from nothing, but you can always divide and share the existing one.
The idea for this came during my first lucid dream, almost exactly 2 years ago, when I first “met” my Higher Self (who calls himself Heva or Shiva). In my encounter with him, which I described in detail here, I left one bit of information out of that blog post, because I promised him that I wouldn’t tell anyone. I think it’s time to break that promise though, because it’s the right time for that information to be shared. Basically, when I met him, he had a laptop with him, and on its screen you could see a whole load of live statistics about me: anger, vitality, love, fear, etc. Please don’t laugh at why a “spiritual entity” had a… laptop. You must understand that dreams are symbolic and are modeled around the brain of the dreamer (in this case, a human who is only familiar with human tech). So what looked like a laptop to me might, from his point of view, have been some 5D ultra-supa tech that would look like psychedelic colors, who knows? So anyway, he definitely tried to hide that screen from me. I wasn’t supposed to look at it. Oops.
For a few months, I was angry about that revelation: “Why am I being monitored?”, “Why am I not truly free?”, “Am I just a sack of nuts & bolts to him?”.
Eventually, during my quest into these mystical topics, I realized what I am: I’m an instance of consciousness, expressed in this life as a human. I’m just a part of a Higher Self (who has lent his consciousness to many life forms at once), and he himself is borrowing his consciousness from another, even higher (and more abstract) entity, ad infinitum, until you go so high up in the hierarchy that you realize that everything is ONE. In other words, in some sense, I’m a biological robot to which Heva has lent some consciousness so he can operate it (and Heva is the same for the higher entity that operates him, ad infinitum).
So, because “as above, so below”, I truly believe that the only way for us to create true AI in this world is to simply lend our consciousness to our machines. The tech for that is nearly here; we already have brain implants that can operate robotic arms, for example. I don’t think we’re more than 20-30 years away from a true breakthrough on that front, one that could let us operate machines with very little effort.
The big innovation in this idea is not that a human can sit on a couch and wirelessly operate a janitor robot. The big idea is that one human could operate 5 to 20 robots at the same time! This could create smart machines in our factories (or mining asteroids) by the billions, which could absolutely explode the number of items produced, leading to explosive growth (economic, scientific, social).
5 to 20 robots at the same time, you say? Well, look. Sure, it’s a myth that humans only use 10% of their brains. However, when a human, let’s say, sweeps the floor, he/she doesn’t use that much brain power. In fact, most of the processing power is used to operate the body, not to actually make logical decisions (e.g. sweep here, but not there). That part is taken care of by the robotic body and its firmware (aka instinct), which means that if, for example, we need 30% of our brain power to sweep the floor with our own body, we might need only 5% to do the same thing with a robotic body. In other words, we outsource the most laborious part to the machine, and we only lend the machine just as much smartness as it needs to operate.
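The 5-to-20 range follows directly from that arithmetic: if each robot borrows only a small slice of the operator’s attention, a fixed attention budget caps how many robots one person can run. A toy calculation using the post’s own percentages (the function and its defaults are illustrative, not measured figures):

```python
def max_robots(attention_budget=100.0, cost_per_robot_pct=5.0):
    """How many robots can one operator run at once, if each robot
    offloads most of the work to its own firmware and only borrows
    cost_per_robot_pct percent of the operator's attention?"""
    return int(attention_budget // cost_per_robot_pct)

print(max_robots())                          # 20, with the post's 5% estimate
print(max_robots(cost_per_robot_pct=20.0))   # 5, if a robot needs 20% instead
```

So the “5 to 20” spread in the text just reflects uncertainty about how much attention one remote body really costs.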
I know that this article will be laughed at by purely scientific people after they read the “spiritual nonsense” above; however, you can’t argue that there isn’t some logic in these AI implementation suggestions that could work, and that could revolutionize our lives. Interestingly, I haven’t seen any robotics/AI projects using that idea yet. There are a few sci-fi books that have touched on the idea of operating a machine via brainwaves, but they haven’t grasped the significance of it (otherwise they would have described a world much different from the rather traditional one they try to sell us), and they haven’t gotten the implementation right either. “Avatar” came close, but its implementation is too literal and limiting (one human becomes one alien, a 1-to-1 exchange).
An FAQ on why I base my AI theory on “as above so below”:
Q: If all consciousness in all living things is borrowed, why aren’t we all smarter?
A: The amount of consciousness in a species is limited by the brain that species carries. Just like if you could lend your consciousness to a cleaning robot (one with firmware specifically made for cleaning jobs, plus fixed processing power), you would only be able to lend it as much as it can carry. Not very much, that is.
Also, these robots could make mistakes, since humans themselves make mistakes, as the robots would now operate using our intelligence. But the mistakes would be limited, because that consciousness would only operate within the brain’s (CPU/firmware) limitations. Additionally, between stupid machines with no errors and much smarter ones with few errors, the choice the market would make is obvious.
Q: If Higher Selves exist, why wait for billions of years until humans evolved to be smarter and to have enough capacity for consciousness? Sounds like a waste of time.
A: Anyone who has done a DMT trip or two will tell you that time is an illusion. Time exists only for us, who live in this simulation/holographic universe. Outside of this experience, there is no time. Everything happens at once.
Additionally, having a natural evolution program running is much more efficient, flexible and expanding than creating “fixed” robots. So it’s well worth the wait.
Here, we must also understand that our master reality (the one we’re borrowing our consciousness from) is also creating our universe. The universe is simulated, and the computing power behind it is their minds. After eons, life evolves, and then their consciousness can take a more active role in that universe.
Q: Why can’t we remember who we truly are?
A: It’s a matter of efficiency, and also of how much consciousness our brains can hold (e.g. humans are more self-aware than flies). To survive in this world, we need an “ego” (in both the Buddhist and the Freudian/Jungian sense). The ego believes that it is a human: it’s got a name, a job, a favorite movie, a life here. It’s that ego that keeps us grounded here to “do our job” (whatever we’re meant to do). So we create this mask, this fake self, so we can survive here. The ego is also what keeps us clinging to life with such perseverance. When, under meditation or entheogens, we experience “ego death” (the mask falling off), we can truly know that we’re everything that is, ever was and ever will be. In that state, we’re just consciousness, pure being. Unfortunately, that also means that we can’t operate in this world. So the ego is a safeguard, for efficiency, to stay grounded in this reality. It’s useful to experience ego death a few times in one’s life, in order to gain perspective, but for our everyday life, we need the ego.
Q: Why did our Higher Self hook himself up and “re-incarnate” himself (i.e., lend consciousness) into many different life forms?
A: For the same reasons we will do so, when we have the technological capability: work, learning, research, fun, producing food, more fun… And of course, for no reason at all too. Sometimes, things just are. Natural evolution of things. For the same reason every species wants babies, maybe.
After eons of evolution, our own “robots” should be able to lend their consciousness to their own “robots”. It’s a dynamic system that happens automatically in the evolutionary process, until it reaches the most basic levels of consciousness and the simplest of universes (1D universes). Just as you can’t create energy, you can’t create or grow consciousness. You can only divide what’s there. As such, every time a given realm creates its own version of existence/simulated universe, by definition these new universes are more rule-based and more rigid than their master’s universe. That’s why “higher dimensions” are seen as ever-changing during DMT trips (entities and things constantly in 5D flux), while further down the consciousness tree, the universes have more laws that keep things solid and seemingly unchanging.
Q: Can we be hacked then?
A: YES. In a number of lucid dreams, devices were installed in me, or I was “modified” in some way by entities that clearly weren’t human. Others on DMT will tell you the same thing too. Sometimes we’re upgraded by the “good” guys, sometimes by the “bad” guys. Just like in any digital system. What can ya do?
Equally possible is that other individual instances of consciousness can inject themselves into your body and take it over for a while. That should still be considered a hack.
Q: Lucid dreams? So why do we sleep or dream?
A: For two reasons: because the body needs it (it must recharge in standby mode, while running a few daemons to fsck the file system, among other things), and because the borrowed consciousness must take a break too.
Let’s assume that you were just driving a car. After a few hours, you would need to take a break — even if the car could still go on for many more hours without filling up with gas. If you are the consciousness that drives, and the car is the robot that is operated by that consciousness, you realize that the consciousness will need a break much earlier than the car itself would need one!
When we dream, our consciousness operates in levels of existence similar to that of our Higher Self, but because our consciousness is still “glued” to the human body, we interpret everything based on our human experiences. Most of these dreams are nothing but programs, designed to teach us things. In a recent dream, I was on a bus, among strangers. Then, out of nowhere, I became lucid. Suddenly, the people on the bus looked at me not as strangers anymore, but as participants in a dream that they had programmed and played roles in for me. And once the curtain fell, and I knew I was dreaming and wasn’t taking “no” for an answer, they quickly answered questions for me like “why do we dream” and “what is the hierarchy of the cosmos”. Most of the time, they speak in symbolism that only I can interpret, but sometimes they’re very direct.
Some of these entities seen in dreams are “spirit guides”, which probably aren’t very different from what “IT personnel” are for our computers at the office. No wonder spirit guides can change throughout our lives, there is more than one of them (although there’s usually a main entity ordering the others around), and Esther (my spirit guide) once told me: “What do you want again, calling me for a second time in this session? I’m on lunch”. LOL. And yes, they can get pissed off too (I managed that a few months ago). But they can also give you some super-important info (e.g. they showed me that my business Instagram account was under attack, and lo and behold, within a week, it happened).
People on DMT can decouple from their bodies and shoot much further than dreams allow (dreams are controlled, standby-mode versions of experiencing the other realms, while DMT has no limits on where it can land you). While on DMT/LSD/shrooms/etc. you can become one with your higher self, or with everything, visit other realms, etc. However, when you come back to your body/brain, you “forget” most of what you saw, or it suddenly becomes incomprehensible, because, as I said above, your brain has a limited amount of function and capacity.
Q: If these other realms exist, why can’t we see them?
A: Our brain is a filtering mechanism. We live in a parallel existence, just like the part of our consciousness inside a robot would experience its existence as separate from that of its human master. The mind of the robot doesn’t perceive its external world the same way a human does (even if it has HD cameras that “see” the world as we do, the digital processing that ensues FILTERS out many things, because they’re not relevant to its function). Again, people who do low-dose shrooms, LSD, or DMT will start perceiving their own room as looking different (e.g. full of pixels, or full of colors that don’t exist, etc.), and often, on a slightly higher dosage, they will see entities in that room that they normally wouldn’t be able to see with their eyes (and yet most dogs can, according to reports, since their brains have evolved differently). So basically, remove the (brain) filter for a while, and enjoy an upgraded version of reality.
Q: So if our brain can’t see these other realms, why can’t our machines/cameras see them either?
A: Some can. It’s just that seeing further into the light spectrum doesn’t mean that you can also humanly interpret it as a separate realm with its own intelligence. We’re still inside a box (a specific type of simulated universe that is computed by our own collective consciousness), and trying to see outside of it has its limitations. The only way to take a peek outside is to temporarily decouple your consciousness from your body (e.g. via DMT), so that you’re not actively participating in the creation of this universe but are free to explore other universes. The problem with DMT is that it can land you anywhere… including “hellish” worlds. It’s not predictable at all; it’s a crapshoot.
Q: Ok, so what’s the point of our life then?
A: The meaning of life is life itself. Or, no meaning at all. You give it one. Or not.
Q: So, is my life insignificant?
A: Yes and no. Yes, if you consider yourself within the vast cosmos from the human, cynical point of view (everything is a point of view). And no, because you were “built” for a function that you do perform, even if you don’t know it (even if you spend your life watching TV all day, you could still perform a specific function on another level of reality without knowing it — universes aren’t independent of each other).
You would only feel “small” after reading all this if you’re a selfish a$$hole. If you embrace that you’re everything instead, then all existential problems vanish.
Q: So why does everything exist then?
A: At the highest level of things (after you sum up all the hierarchical entities of consciousness), there’s only the ONE. One single consciousness, living inside the Void. Nothing else exists. It decided that dividing itself in a top-down fashion, creating many parallel worlds and universes and lending its consciousness to their living things, for the sake of experiencing itself, was the best action to take. In reality, everything is just a dream in the mind of that ONE consciousness. And everything is as it should be.
Each individual down the hierarchy of existence is built differently and lives under fluctuating circumstances, and therefore each of these individuals provides a different point of view on the things it can grasp. It’s that different point of view that the system is after: novelty. Novelty == new experiences for the ONE.
Q: Why is there evil in the world?
A: There is no “good” and “evil”. Everything is just a perspective. For the human race for example, eating babies is a terrible thing. For the Komodo dragons (that routinely eat their babies), it’s just another day.
Also, in order for LOVE to exist (love == unity of all things), FEAR must also exist (fear == division of all things). One can’t exist without the other. And this ONE consciousness has decided that it’s better to experience SOMETHING than to experience NOTHING for all eternity.
Q: Does this ONE consciousness answer to prayers?
A: No, it doesn’t give a sh1t, grow a pair. From its point of view, everything that can happen, must happen, so it can be experienced (including very “bad” things). The ONE is an abstract construct, not a person. It’s you, and me, and everything else as pure BEING.
I never had anything good to say about GoogleTV 1.0. The UI sucks, the content is lacking, and it’s inconsistent throughout. But I think that my biggest peeve of all is its various remote control incarnations. I mean, look at this mess: 1, 2, 3, 4. They’re over the top, with many more buttons than I would personally like, shoved into 5 remotes, let alone 1.
My biggest problem with these remotes is the TWO d-pads. They let you move with one or the other, and they also let you confirm with either — only your action won’t carry through, because one d-pad’s focus was at a different position on the screen than the other’s, resulting in clicking the WRONG thing. Sure, sure, Google TV is still a 1.0 product. But THIS specific UI problem should have been fixed with a firmware update within the first few weeks. All it requires is synchronizing the two d-pads’ positions on the screen, so they focus on the same widget when either one is moved. Maybe there are some edge cases where the current behavior is needed in Chrome/Flash, but for everything else, this creates a major usability issue — especially for users who are accustomed to gaming controls (where you move your character with the left thumb, but confirm/fire with the right). This is the No1 reason why I don’t even turn ON our GoogleTV anymore: I keep pressing the wrong controls!
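The fix being asked for amounts to both d-pads mutating one shared focus state instead of each keeping its own. A minimal sketch of that pattern (the widget list, the linear layout, and all names are made up for illustration, not GoogleTV’s actual UI code):

```python
class SharedFocus:
    """One focus position shared by both d-pads, so 'confirm' on either
    pad always activates the widget the user last moved to."""

    def __init__(self, widgets):
        self.widgets = widgets  # focusable widgets, in navigation order
        self.index = 0          # the SINGLE source of truth for focus

    def move(self, pad, step):
        # Whichever d-pad moves, it updates the same shared index
        # (clamped to the ends of the widget list).
        self.index = max(0, min(len(self.widgets) - 1, self.index + step))

    def confirm(self, pad):
        # Either pad confirms the same widget the other pad navigated to.
        return self.widgets[self.index]


focus = SharedFocus(["Netflix", "Chrome", "Settings"])
focus.move("left-pad", +1)                      # left thumb moves focus
assert focus.confirm("right-pad") == "Chrome"   # right thumb fires correctly
```

The bug described above is exactly what happens when each pad holds its own `index`; collapsing them into one shared state is the whole fix.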
What I need instead is a simple, elegant design. I do hope that GoogleTV redesigns their whole UI, but along with it creates a new Bluetooth remote like in my mockup below:
Until then, I will continue using the Roku, although I would certainly move to my Apple TV (which we currently use only for music) if Apple were to allow content providers to create their own “channels”, like Roku does. Preferably with the same UI for every channel, for consistency. But so far, the Roku, despite its simpler and dumber software, delivers a better overall experience than Google’s or Apple’s TV devices.
Vimeo, for example, has a real, full-featured application on the Roku, while the web-based versions of Vimeo CouchMode/Youtube Leanback on GoogleTV suck goats because of the unnatural usability created by the web browser that’s used to deliver them (instead of a binary app that fits perfectly into the overall UI and remote control of your device) — while AppleTV does not even allow third-party apps/channels. For example, when I hit the “Menu” button, I want to see the menu for Vimeo or Youtube, not Chrome’s menu. Jeez. I guess you can say that I absolutely hate web apps on my TV. Every web app I’ve seen so far on GoogleTV (MSNBC, HBO, Blip, etc.) is terrible UI-wise, does not fit with the overall UI and remote control buttons, does not respond to its own “menu” button, is inconsistent with the others, and some are very difficult to use (parts of the HBO web app are almost impossible to use without a *real* mouse).
My current laptops are all very old and weak:
- My 1.6 GHz Atom HP netbook can only run Ubuntu, so Flash is unbearable on it. It’s my mom’s fallback laptop in case her own Acer Ubuntu laptop dies (it doesn’t seem very healthy atm).
- My Z-series MID Atom laptop at 1.2 GHz is even slower. Otherwise a nice laptop, but it hardware-crashes sometimes for some crazy reason. That laptop is promised to my brother.
My original idea was to go for a fully accessorized Macbook Air with all the adapters, something that would cost me about $2000+tax. The idea was that if I was to get such an expensive machine, I’d have to forgo any prospects of a new smartphone, new PC, or tablet. The Macbook Air would be a replacement for my laptop, desktop PC, and tablet, and I’d have to get by with whatever smartphone I could scrape together here and there.
And that was the idea until this week. With all the Wikileaks drama going around though, I started thinking more about how I should spend my money. I’m a bit depressed about the whole state of affairs (about all things really, not just politics), so I now find it pure vanity to go for an expensive laptop when I could get by with a cheaper one. If I could find a laptop that does what I need it to do for the most part (in my case, accelerated video playback), then I should be happy with that.
So today I bought the DELL Vostro V130, for $735+tax (price includes a “plant a tree for me” option).
Sure, it doesn’t run Mac OS X; sure, it doesn’t have an nVidia GPU for even faster video acceleration; and sure, it doesn’t have as much battery life. The CPU on my Vostro V130 is a 1.33 GHz i3, which, according to CPU benchmarks, performs about the same as the 1.86 GHz Core2Duo found in one of the Macbook Air configurations. But for a difference of over $1000, I prefer to stay with the Vostro. I’m simply not willing to pay that difference. Sure, I could have gone for the cheaper 11.6″ 1.86 GHz Macbook Air, but the price difference is still $600. I could buy a second laptop for that money! And if the Vostro dies within 2 years, I can still buy a faster one by then and still have money left!
The main limitation of this model is its seriously weak battery life (the battery is not user-replaceable), clocking in at no more than 2:30 hours. But since I rarely leave home, and when I do I have either a car adapter or hotel/airport plugs, I don’t really need much battery life.
There’s a good chance that my brother will lose his job this January in Greece, since his contract as an electrician runs out. There are simply better ways to use money than getting the coolest gadget around. This is not meant as disrespect to the people who already bought the Macbook Air (two of my friends did), but rather as food for thought for everyone, including myself. That’s the reason I keep urging non-professionals to buy cheap HD digicams from Canon instead of dSLRs or camcorders. Buy the model that does the minimum of what you need, and save money. Use your imagination and your skill to get around obstacles that other products overcome more easily for a bigger sum of money. Difficult times are ahead.
I’m due for a new Android phone, since my Nexus One keeps running out of internal storage (I have about 80 apps installed, and most don’t support installation to microSD). I keep cleaning up app caches all the time just to fit my shit in it. Not to mention that GTalk stops working altogether if you have less than 20 MB of free storage left. This is getting old, and I’m starting to curse whoever at HTC or Google decided to only put 512 MB of storage in such a so-called “superphone” (superphone my ass). Of that 512 MB, only about 200 MB is available to the user. It’s a joke. The problem is, I don’t really like any of the current Android phones out there. They’re all so 2008 in my eyes. What I want is this instead:
* 1 GHz CPU (or whatever is latest)
* 1 GB fast RAM and bus
* A modern fast 3D chip
* 4.5″ SAMOLED at exactly 1280×720, 32bpp
* On-board kickstand (like the HTC Evo)
* Unlocked Quad band GSM/3G (I need it to work in Europe too)
* Gyroscope, along all the other GPS/compass/accelerometer/proximity/notification-light/etc standard hardware
* Power button on the top (rather than on the side like the Galaxy S’s, which makes me push the volume buttons on the opposite side by mistake)
* Thin bezel on top and bottom, like the Droid X (leave the goddamn company/cell logos for the back). The thin bezel will make the phone feel smaller, since the screen is already pretty big. Overall, this design wouldn’t be that much bigger than a Galaxy S.
Mockup of my dream phone
* A “slab”, edgy look, like the iPhone 4’s or Macbook Pro’s shape. I hate how most Android phones look so bumpy everywhere. It must be completely flat seen from the side, like the iPhone 4.
* Real buttons at the bottom (not touch buttons that are so easily pressed by mistake while typing — what a stupid fashion). Designed to be flat, like on the GSM HTC Hero or the G1.
* 720p front web cam
* 5 MP still/video camera with exposure compensation and exposure locking. Saturation, contrast, sharpness controls. Less rolling shutter compared to what we have now please.
* 720p video in an MP4 container (rather than 3GP) with h.264/AAC (rather than h.264/AMR), at a 24 Mbps bitrate. Let *us* decide between 29.97 fps (NTSC), 25.00 fps (PAL), and 23.976 fps (IVTC film).
* Camera flash, placed somewhat further from the camera (I wonder why they put flashes so close to the camera, because technically speaking, the further away, the better for picture quality).
* A WiFi chip that’s not as incompatible as the Nexus One’s (the Nexus One is incapable of staying connected at all times on congested networks (a problem for VoIP usage) when the paired router sends a specific format of broadcast messages while the chip is in semi-standby).
* Stereo front speakers (why the heck do they usually put speakers on the back and lose 50% of the audibility?)
* Second mic for noise cancellation
* 4 GB fast internal storage (no need for more, since music subscription is the future, and their offline clients support microSD anyway)
* microSDHC slot
* microUSB port
* Bluetooth 3.0
* Mini-HDMI out (and UPnP support for wireless streaming)
* Removable 2000 mAh battery
* 3.5mm headset port
* And why not, an FM radio.
* A promise of 2 years (as long as a cell contract lasts) of major Android upgrades rather than just security fixes. This upgrade thing has been one of the biggest thorns for me on Android. I don’t DARE buy from third-party manufacturers, because they’re simply not trustworthy with timely updates for versions released within 1 year. Let alone 2.
* And the cherry on the cake, an optional accessory: snap-on mini-ND filters for the camera, at 2-3 various strengths. This could help control the high shutter speeds outdoors — especially useful for video.
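A quick sketch of why that last accessory matters (the numbers below are my own illustration, not from the post): each stop of ND doubles how long the shutter can stay open at the same exposure, so a few stops turn harsh daylight shutter speeds into video-friendly ones.

```python
# Hypothetical illustration: effect of an N-stop ND filter on shutter time.
# Each stop halves the incoming light, letting the shutter stay open
# twice as long for the same exposure.

def shutter_with_nd(base_shutter_s, nd_stops):
    """Equivalent shutter time (seconds) after adding an N-stop ND filter."""
    return base_shutter_s * (2 ** nd_stops)

# Bright sun might force 1/2000 s; for smooth-looking 30 fps video you
# want roughly 1/60 s (the "180-degree shutter" rule of thumb).
for stops in (2, 3, 5):
    t = shutter_with_nd(1 / 2000, stops)
    print(f"ND {stops} stops: shutter becomes 1/{1 / t:g} s")
```

A 5-stop filter gets a 1/2000 s exposure down to about 1/62 s, which is why snap-on ND filters at 2-3 strengths would cover most outdoor video situations.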
Is this all too much to ask? All the technology mentioned exists; it’s just that nobody has put it in one place yet. The device I’m describing is big enough to fit all that. I’d pay $850 for such an unsubsidized phone, even though I’m sure it could easily sell for $650 and still turn a profit. Then I’d be able to stop bitching about this whole thing. One can dream though, right?
The iPhone 4 covers some of what I want hardware-wise (better than any Android phone), but where it fails me is its software: without a user filesystem, iOS is dead to me. This is my No. 1 pet peeve with it. Without a virtual, mountable filesystem where files can be read freely and directly, some kinds of apps simply cannot exist (e.g. A/V editors, FTP clients, media players, OBEX Bluetooth, basically anything that needs direct access to the user’s files, or needs to share files/info with another iOS app). Some iOS apps have to resort to including full (and buggy) SMB client/server hacks in order to get access to the user’s PC files. This is unacceptable from a usability point of view. For me this is a show-stopper limitation of iOS, and so I can’t, and won’t, consider it. I’d rather deal with a less enjoyable Android phone than forgo basic amenities like a filesystem.
Currently, there are four distinct ways to do desktop-ish computing on the go, and soon there will be a fifth way too. I was wondering this morning which one best suits me.
- Laptop (Mac or Windows, average price $500)
Pros: Optical drive, large hard drive, larger resolutions, lots of RAM/speed, able to run most heavy apps.
Cons: Big and heavy.
- Hybrid laptop/netbook (Macbook Air, MSI, few others, avg price $800)
Pros: Thin and small, like netbooks. Large resolutions.
Cons: Not as fast as full laptops for heavy apps. Mac variant is expensive.
- Netbook (usually running Ubuntu, average price $300)
Pros: Small and light, while still having a real keyboard.
Cons: RAM capped by Intel at 2 GB, and CPUs too slow for the kind of OSes they must run.
- Tablet (iPad or Android, average price $500)
Pros: Lightest/thinnest. Best user interface.
Cons: Not all kinds of apps exist for tablets, and the input method is a bitch.
- Chrome OS (Chromium OS, average price possibly less than a netbook(?))
Pros: Small and light, while still having a real keyboard (according to Engadget).
Cons: Only good for browsing, and select few simple apps.
So it all comes down to what kind of apps you want to run. In the end, it’s about the apps. If you want to run a video editor on the go, then you need a real laptop. If you just want to do some browsing, plus some simple office work, and if you travel a lot for work, then a netbook is best. If you only want to browse, then Chrome OS might be the best choice, since it might also be the cheapest. And if you want to do some specific actions in a more natural way, e.g. book/magazine reading, maps, sky-gazing and other exotic stuff, in the smallest package possible, the tablet is the answer.
Personally, I’m thinking that either an MSI hybrid or an iPad tablet is closest to what I’d want from a portable device. But tablets are still not quite there yet for me. I’d need a real user filesystem in iOS (without it, certain apps can’t exist properly the way they can on Android). But then again, Android is not ready for tablets either (both Lenovo and LG made a point about this). So I’m thinking that next year, when the new iPad comes out, if it has the additions I need (e.g. a user filesystem, a web cam, an internal SDHC reader), I’ll go for that.
JBQ and I were discussing the other day what his needs would be for a new portable device (he currently uses a 4-year-old 14.1″ DELL laptop running Ubuntu), and we established that he doesn’t use any app or feature on that laptop that couldn’t be covered by Chrome OS (e.g. text editor, calculator). Originally, I felt that Chrome OS was a bit too thin in features, but apparently there are people who need no more than what it already offers. In fact, my mother wouldn’t need more than that either! Chrome OS should be able to do GTalk video chat too (via the Linux gtalk-video plugin), which is the only “advanced” thing she would need anyway! And that should come at a price she could afford, compared to a tablet (since Chrome OS shouldn’t need more than 4 GB of internal storage).
I guess what Chrome OS needed was to be released in late 2007, just before the netbook boom. Then it could possibly have stood a chance, as it would have been seen as an innovative novelty.
I’m a big fan of futuristic-everything. I worked with user interfaces when I lived in the UK, and since then I’ve always tried to think of possible ways to improve how we interact with machines.
First of all, the future is mobile, and there’s little dispute about this. Desktop machines will only be used for very heavy, specialized purposes, the same way trucks are used today. Most people will just own fast-enough mobile, portable devices, rather than desktops. Even tablets will hit the ceiling of what people would want to carry with them. Basically, anything bigger than a 4.5″-5″ screen smartphone will be too much to carry around. It will be seen no differently than the way we feel today about 1981’s businessmen carrying around the Osborne 1 “portable” computer.
The biggest problem such small devices will face is their minuscule screen, while the web expands to SXGA and higher resolutions as a minimum. Sure, they will all use very high resolutions (1080p is around the corner), but the small physical screen poses problems for applications that actually require a lot of widgets and a large working area of screen real estate (e.g. serious video/still editors, 3D editors, complex JS web sites, etc).
There are two ways to deal with the problem. One is holographic projected displays. Unfortunately, we’re technologically far away from something like that. The second way, which is more within our reach, is a projected display via glasses.
The idea is this:
- The smartphone in your pocket connects wirelessly, via Bluetooth or a similar protocol, to special glasses.
- When the glasses are activated, a transparent, high-resolution computer screen is shown in front of the user, at a certain perceived distance.
- The glasses feature an HD camera (or dual HD cameras for 3D, located where each eye would normally be), and they capture the real world ahead of the user at 120 fps. The “real world” view is overlaid with the computer screen view. This way, the user can still walk the streets of NY and use his cellphone at the same time, without having to look down at it.
- Using gestures, by placing your fingers on the virtual screen, and by “reading back” via sonar (“acoustic location”) and/or the cams, the phone would know what you clicked, and carry out the action.
- Voice recognition will be an alternative way to use the system. In the last 2-3 years there have been major strides in voice recognition.
Of course, it would look a bit funny at first, seeing people on the streets move their hands around like idiots, but if enough of them bite (and Apple has an uncanny way of making people try new things), then it can be deemed “normal.” Besides, silliness didn’t stop people from wearing Bluetooth headsets that made them look like they were talking to themselves. And Bluetooth headsets are less useful than this idea, which can greatly improve universal productivity and usability on the go.
So, I expect such a device to be reality before 2020. We already have the technology to do all this, it’s just that the experience won’t be perfect yet. It’s just a matter of time though.
By 2030 the glasses will just be transparent screens that will double as your real reading glasses, so they won’t look outlandish and silly at all. The smartphone itself will shrink, and will just live inside a wristwatch-like device, since the main interface will have moved to the glasses (which was previously optional). Its other use would be to act as a webcam.
By 2040, the whole system will move inside your eye (like Futurama’s “eyephone”). By 2045, you won’t need your hands to control your device anymore: a simple device implanted at the back of your head, or perhaps the “eyephone” itself, will be able to transmit the gesture actions via brainwaves (we already have the technology to do this, on a smaller scale, mostly for medical purposes).
By 2050, you won’t even need a mobile device with you to pair your eyes with. The device inside your eyes will be able to wirelessly connect to your data/network, using your body as an antenna for the closest “tower”. It will be a dumb terminal for the most part.
I bought the Casio CTK-3000 keyboard last week, and it arrived on Tuesday. It’s Friday, and I can ALREADY play the “Bridal March”. I never had any meaningful music lessons in my life, and I never played the piano before.
Now, don’t get too excited. I didn’t learn to play the short melody by using the piano’s tutorials, or the song book that came with it. I tried, and it’s impossible. The little LCD screen above the keys is impossible to follow. The keyboard doesn’t have “slowed-down” versions of the melodies for me to catch up with. The keyboard doesn’t have “light-up” keys to show me where to press each time. To get these features you need to pay a lot more than the $140 I paid. As for the song book, it’s useless. I can’t read musical notation. I’d need to wait another 2 months to first learn and practice the notation, and only then start playing songs. And this goes against my instant gratification needs.
I mean, really. After all these years, is that all the Casio and Yamaha engineers could come up with? Some tutorial software that looks like it was written with ’80s usability in mind? On a tiny LCD that’s so crammed?
Honestly, the little innovation we’ve seen in keyboards in the last few years kind of tells me that these companies have given up, and they essentially tell you: “go pay for real lessons”.
Well, I was able to get around the keyboard’s limitations by using the freeware version of Synthesia (I didn’t even have to buy its extra $25 learning pack). Synthesia is like Guitar Hero, but for piano. It’s a game, so it’s fun; it makes you wanna get a better score, so it keeps pushing you to work harder; it can use a big PC LCD monitor with nice colors to make it easier to follow; and more importantly, it can slow a MIDI piece down to 10% of its speed, so you can catch up!
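Out of curiosity about how the slow-down works: a MIDI file stores tempo as microseconds per quarter note, so playing a piece at 10% speed just means scaling that value up by 10x. A minimal sketch of the idea (my own illustration, not Synthesia’s actual code):

```python
# Hypothetical sketch: MIDI tempo is "microseconds per quarter note",
# so slower playback = a larger tempo value.

def scale_tempo(us_per_beat, speed):
    """Tempo value needed to play back at `speed` (1.0 = original speed)."""
    if speed <= 0:
        raise ValueError("speed must be positive")
    return round(us_per_beat / speed)

DEFAULT_TEMPO = 500_000  # 120 BPM, the MIDI default

print(scale_tempo(DEFAULT_TEMPO, 1.0))  # unchanged
print(scale_tempo(DEFAULT_TEMPO, 0.5))  # half speed, i.e. 60 BPM
print(scale_tempo(DEFAULT_TEMPO, 0.1))  # the 10% crawl, i.e. 12 BPM
```

In practice a tool would rewrite every set-tempo event in the file this way, but the arithmetic is just this one division.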
So I started playing the “Bridal March” with one hand on 20% speed, on Tuesday. Wednesday, I added the second hand. Thursday, I got to 50% speed. It’s Friday, and I’m almost fluent at 80% speed. How kewl is that?
I would never have been able to do this with traditional methods of learning. It would have been much harder work, and it would have been extremely annoying and tiresome. I would have given up within a few days.
On the side, I’m also reading a music theory book, so I learn musical notation in parallel with learning to play by ear, rather than before or after.
If you have kids, or you want to learn yourself, I highly recommend this setup: the free version of Synthesia, and any keyboard of your choice that has touch-sensitive keys. Added bonus if your keyboard has a USB port too (otherwise you’d also have to get a MIDI-to-USB adapter, and those are not always very compatible).
FCC disclaimer: I’m not getting paid by, nor do I work for, the companies mentioned or linked. These were all my own purchases and my personal, honest opinions.
The new Apple TV has removed any possible way to store files locally and to sync. This pretty much destroys the idea of using the Apple TV as your main MUSIC device in your living room, as we do right now with the current Apple TV. As some of my readers already know, we use the “Remote” app on an iPod Touch to control the Apple TV’s music. By sitting on our couch, and not lifting a finger. The TV is *not* ON while we listen to music. We have a REAL, 21st Century APPLIANCE EXPERIENCE for music.
Now, the only way to do the same with the new Apple TV is to stream from your PC/Mac’s iTunes library. And this is out of the fucking question for both JBQ and me.
When we want to listen to music, we need an appliance experience. Not a “run to the office, turn on the computer, WAIT for it to load, enter a password, open iTunes, run back to the fucking living room” type of thing. WE DON’T WANT TO HAVE A PC “ON” TO LISTEN TO MUSIC. WE DON’T EVEN WANT TO HAVE THE TV “ON”, LET ALONE A PC IN A DIFFERENT ROOM. If anything, leaving a PC “on” at all times (if someone suggests this) is not “green.” It’s a terrible idea actually.
What we do now instead is simply pick up the iPod Touch Remote from the living room table. NOTHING FUCKING ELSE. It does not compare with this fucked-up usability Apple is suggesting right now! The usability we have with our current Apple TV is MILLIONS of times better than streaming!
Steve Jobs mentioned that “people don’t want to sync anymore”, but I really wonder whom he polled. Everyone I know with an Apple TV does NOT want to stream from a computer. If anything, they want a bigger hard drive in there, and with more codec support!!! So I’m pretty sure that marketing research for the Apple TV was pretty slim, and instead, we just got what Steve wanted for his house. Not what consumers needed.
And you know, the new Apple TV wouldn’t have been such a terrible product if it at least had a working USB port, so we could add our own hard drive! That would have been acceptable! But noooooooo… They went purely streaming. There’s not even software in it to sync anymore!
I’m seriously thinking of buying a second older-generation Apple TV, just so that if our current one dies, we can still fulfill our needs for a few more years. But JBQ is afraid that iTunes and the iPod Touch “Remote” app might cease support for the old Apple TVs, and we will be left high and dry again.
And no, the Mac Mini is not an option, so don’t suggest it. Not only is it prohibitively expensive for what we want it to do (3x the price of our Apple TV), but it can’t sync with our main iTunes library, which lives on our PC (and I need it to live there because we also have iPods, and because a lot of the music I gather is not from iTunes but from Amazon/web/Bandcamp etc, and needs tag-fixing). Usually I need to change tags, update album art, etc, so I need to do this work on my main PC. But if the Mac Mini took our Apple TV’s place in our living room, then I’d need a full Bluetooth keyboard, and I’d have to do the same tag job TWICE (once on our main PC, and once on the Mini). So this is out of the question. The “appliance” experience is going the way of the dodo! Not to mention that the Mini doesn’t have proper audio-out, since our amplifier doesn’t have HDMI, and headphones-out won’t do the trick: the quality is abysmal. So the Mac Mini is out as a solution.
As for the new iPod Touch: I would have bought one (I really wanted one), but I needed 128 GB. My iTunes library is now at 81 GB, and still growing. But there was no storage size growth this year. In fact, this was the FIRST YEAR that there was no storage upgrade for the iPods!!!
I couldn’t care less about anything else they announced today. Especially “Ping”. Like we needed a new Twitter. And like I need to know what Lady Gaga is buying, or posts about. Who. the. fuck. cares?
Today I received the Amazon Kindle 2 in the mail. I bought it refurbished, for just $110 (US Edition; there’s also a Global Edition). The Kindle is, along with the iPad, among the best book reading devices. Truth is, I don’t read much, since I can’t focus (apparently a problem that has become worse in the last few years).
So for me, the Kindle has another major feature that is indispensable — if you’re willing to put up with some of its shortcomings. That feature is FREE, unlimited 3G Internet access, FOR LIFE.
Unless Amazon removes the feature somehow, the Kindle 2 comes with a web browser (under the “experimental” menu). Kindle’s Netfront web browser is definitely not as good as the likes of Webkit on the various modern smartphones, but it’s still good enough, and fully operational, for basic web browsing and emailing.
Personally, when I’m using my phones via WiFi, I only browse a very small set of web sites, the ones that form the skeleton of my internet experience. They’re mostly general news, tech news, and email web sites. So by using their mobile pages, I was able to get the same experience on the Kindle as I’d get from my smartphones. I don’t feel I’m missing out here, because even when using my smartphones via WiFi, I still prefer mobile pages, for browsing speed and lower bandwidth consumption.
So these are the sites I generally use when on the go, and apparently they work great on the Kindle:
- https://mail.google.com/mail/x/ (mobile version)
- https://mail.google.com/mail/h/ (lite desktop version)
- This blog
My 1-year pay-as-you-go phone service with my cellular provider is about to expire soon, so I was pondering whether I should get an iPhone or a Samsung Galaxy S, along with a 2-year contract with “unlimited” data. However, that would cost me thousands of dollars, when I can just go for another $100 pay-as-you-go service for 1 more year (I don’t call a lot via my cell — I still have $50 left after a whole year), and then use the Kindle as my “data” device. For the rest of my sporadic calls, I use either Google Voice when at home, or my landline directly, or Skype when calling my mom.
Would it be a better experience if I had a smartphone and/or an iPad instead? The answer is yes. However, since my online needs are not major and I don’t do a lot of calls, it makes no sense to pay for things that I don’t really need just because “they look better on a colorful screen”.
The user experience I get from the Kindle reminds me of my old monochrome Palm PDA. I was using the offline browser Avantgo with it, or my infrared-based modem (the PDA would create a dial-in networking connection via infrared — these were the days). However, the Kindle is still better: the browser is better than what we had back then, it has a bigger screen, a hardware keyboard, and a real 3G connection. And if I had bought the Global Edition of the Kindle, I’d have Internet almost everywhere in Europe too, for free.
So overall, it kicks ass. Internet on the go, free of charge. Sure I have to put up with a gray screen, but it works.
Well, we lived to see that too: a major product by Apple that misses the mark. The iPad.
Where do I start with this?
Flash? No. How the hell is this supposed to take over the netbook market? Without Flash it’s a no-go. I can eat the bullshit that the iPhone can’t do Flash for this or that reason, but not having it on the iPad is a major mistake. Even if Apple adds it eventually, the damage to this product’s prospects in the minds of consumers is done.
Keyboard? Not only does this keyboard require both your hands, it requires your lap too. How’s that any better than a freaking netbook? Instead of implementing a RESIZABLE split-keyboard, and offering the full-screen keyboard only as an option for when you sit on a couch, they went with the full-screen keyboard by default. This is a MAJOR mistake. The large bezel and screen also make it IMPOSSIBLE for people with small hands to type in vertical mode: our fingers are not long enough to reach the middle of the screen. This is where a *resizable* split-keyboard would be a LIFE SAVER. [Update 1: Gizmodo on the terrible input method. Fully agreed with them.]
No multi-tasking? What the hell? Again, how’s that any more useful than a netbook? Just because it looks nicer and has a nicer interface doesn’t mean that it’s essentially more useful than a netbook. Again, Apple puts form over function as a priority, but I have the feeling that this time that strategy won’t be so kind to them. People wanted something better than the iPhone, not just an enlarged version of it. Daily Finance wrote it best (thanks goes to Dominique for the link).
And then, just like Andreas wrote, no camera for video chat? Sure, I get it. AT&T wouldn’t want to overload their towers, and I respect that. But Apple could easily have implemented an iChat or VoIP SIP version (or had Skype do it) that would only use WiFi. You can lock down that shit at the application level. But, nooooo….
The last part is AT&T’s 250 MB per month for $15. I’m sorry, but 250 MB is not enough for a netbook-killer device. For $15 per month, that should have been at least 1 GB of data. The last time I checked, Engadget’s front page alone is usually 1.3 MB. Even with light web browsing, the 250 MB per month will be eaten up within a week by a modern internet user. Easily.
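The back-of-the-envelope math behind that claim (the 1.3 MB page size is the Engadget figure above; the pages-per-day rate is my own assumption for “light browsing”):

```python
# Rough estimate of how far a 250 MB/month cap goes.
# pages_per_day is an assumed figure, not from the post.

cap_mb = 250
page_mb = 1.3            # one Engadget front-page load, per the post
pages_per_day = 25       # assumed "light browsing"

total_pages = cap_mb / page_mb
days_to_exhaust = total_pages / pages_per_day

print(f"~{total_pages:.0f} page loads total")
print(f"cap exhausted in ~{days_to_exhaust:.1f} days")
```

That works out to fewer than 200 page loads for the whole month, gone in roughly a week at that browsing rate.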
Finally, Gizmodo also has a nice list of 9 things that suck on the iPad. Thanks for the link @AsAPeople. [Update 2: Not to mention the lack of a microSDHC (or SDHC) slot. Sure, the iPhone has the excuse of being a small device and not having extra space for a slot, but the iPad doesn't have the luxury to lie to us about it. Selling the cheap version of the iPad with just 16 GB of storage, with no expansion option, is a slap in the face of the modern consumer.]
Jeez. What a freaking over-hyped piece of shit of a product this is. Sure, I still expect the iPad to make its R&D money back, but this is not the next “iPhone”, not by a long shot. This is not the next big thing. Not with this implementation anyway. It’s half-baked at the points where it counts. My main concern though is that this product is not half-baked because Apple didn’t have the time to work on these points, but because these were their design decisions. And this shows a possible problem at Apple right now. It’s very possible that they’re suffering from the Microsoft/IBM syndrome: that of the dinosaur.
Today Canon announced a slew of new AVCHD camcorders, as they do every year at CES (I hope you didn’t buy an AVCHD cam for Christmas; always wait for CES). The particular model of interest for most readers of this blog is the new HF-S series, the HF-S21. Only two of the new features are interesting to high-end consumers:
1. True 24p. No need anymore for pulldown removal. Yay!
2. Touch & Track. You just click on the huge 3.5″ touchscreen LCD, and the system will automatically track the object while you move with the camera. Particularly useful if you’re using a steadycam for music videos or short movies.
The rest of the new features are just fluff for clueless consumers, or nasty software hacks (e.g. the claimed “better low-light support” this camera now has, despite using the same sensor/glass as the previous model).
However, the HF-S21 is still missing the point. No full manual control, no real focus ring, and no bigger sensor at around 1/2.0″ (to combat the dreadful low-light performance this sensor/lens combo has on that model). And no 720/60p either (the hardware can do it).
What’s important to remember here is that the Canon 7D is eating away the high-end consumer and much of the prosumer market. The 7D has the best performance/price ratio for what it gives you. And that includes good low light, a selection of lenses with focus rings, 720/60p for good slow-mo, and of course, full manual control. Any serious amateur filmmaker would opt for the 7D over any of the AVCHD Canon cams.
In other words, the 7D has upped the bar. For the engineers at Canon’s consumer department to keep their jobs, they SHOULD have offered the equivalent of a high-end consumer camcorder in the form of the HF-S21. They no longer have the luxury of doing incremental updates as they do every year. The high-end consumer model has to be _serious_. They needed a new HV20-style AVCHD camera, feature-wise. When the HV20 came out in 2007, it changed the landscape. That’s the kind of product (in spirit of course, not in features) that Canon’s consumer department needed TODAY.
Oh, well, here’s one more year waiting for that Canon department to get off its ass. If I hadn’t already bought the 5D (for reasons I explained in a previous blog post), I would still be with the HV20 and not upgrade until Canon got it right.
So many rumors about the Apple iSlate touchscreen tablet lately. I thought about it tonight, and I believe the only way to make the default input method acceptable on such a large device (in landscape mode) is to create something like the following image (excuse the bad graphics, please):
In vertical mode the device might be just the right size to type with both thumbs without having to break the virtual keyboard in two. However, “the right size” can never be as good as adjusting the size of the virtual keyboard yourself, since all people are different.
Don’t know which VdSLR to buy? Here’s a rundown based on common knowledge and my own experience (both hands-on, and based on footage/tests found online):
Frame rate: 5/10
Manual controls: 8/10
Audio gain control: 5/10 (requires firmware hack)
Live HDMI-out: 5/10
Rolling shutter: 6/10
Mic input: 7/10
Focusing: 5/10
Average Rating: 6.45
Frame rate: 8/10
Manual controls: 8/10
Audio gain control: 3/10
Live HDMI-out: 6/10
Rolling shutter: 7/10
Mic input: 7/10
Focusing: 5/10
Average Rating: 6.81
Frame rate: 4/10
Manual controls: 2/10
Audio gain control: 1/10
Live HDMI-out: 5/10
Rolling shutter: 5/10
Mic input: 4/10
Focusing: 5/10
Average Rating: 4.72
Frame rate: 6/10
Manual controls: 8/10
Audio gain control: 5/10
Live HDMI-out: 5/10
Rolling shutter: 6/10
Mic input: 6/10
Focusing: 8/10
Average Rating: 6.18
Frame rate: 3/10
Manual controls: 2/10
Audio gain control: 1/10
Live HDMI-out: 5/10
Rolling shutter: 2/10
Mic input: 1/10
Focusing: 5/10
Average Rating: 3.63
The average rating puts the Canon 7D ahead of the 5D by only a few points. However, when you also take into account the fact that the 5D costs an additional $1000, the 7D is the clear winner. The 5D will pull clear of the GH1 competition when the promised firmware upgrade comes out next year.
UPDATE: Just for fun. You get what you pay for:
Frame rate: 10/10
Manual controls: 10/10
Audio gain control: 10/10
Live HDMI-out: 8/10
Rolling shutter: 8/10
Mic input: 10/10
Focusing: 9/10
Average Rating: 9.18