The Lost Generation

How most kids at school now will probably never own a physical copy of anything…

Do you remember what it was like to buy a brand new album: the joy of removing the protective wrapper and slipping the disc into the player while you pulled out the sleeve and read it cover to cover?  Or taking a book out of the library and flicking through the pages to release that musty smell you only get from an old book?

I bet children of this generation will never get to experience this. With all this growing technology, it really gets you thinking about how it is affecting them.  It seems they will never know the feeling of buying a new CD or record, or of buying a new book, or borrowing one from the library, and fanning the pages across your face to catch that smell.

It is a shame in many ways to know that at school they are now on computers most of the day and are expected to hand in homework typed and printed from a computer.  What happened to the days of the good old pen and paper?

Or they walk around with earphones in, making it hard to socialise when out and about, and every piece of music they own is probably downloaded.  Unless they are die-hard music fans, that is, in which case they buy the physical copy and then put it on their iPods.

Then we have the Kindle, which means that instead of going out and buying a book to have and hold, you just download it onto this new piece of technology.  Ask yourself: if you are going to spend £10 on a downloaded version of a book that you will never get to physically hold, why wouldn’t you just go to your local bookstore and buy the real thing?

It is kind of like your favourite restaurant having an ‘app’ from which you can download your favourite meal from the menu, and all you can do is imagine how it would smell if you had the real thing.  There really is no comparison to having the physical product in your hand, and the joy it can bring when you rediscover that album or book you thought was lost.

 

Technology is advancing far more quickly than our minds can process it.  With iPads, iPods, Kindles and laptops, it is hard to step away from it all and get a kid to read a book or write a letter, rather than typing a text or having their head buried in a screen all day long.

It is a shame, and if we as a society don’t try to make it better, then it will be too late.  Say goodbye to the days of going into a music or book shop and wandering around for hours taking in the smells and sounds. And say hello to the World Wide Web, where you don’t get to experience the smells and sounds you once loved.

You are probably reading this thinking it was written by some old, wise and bitter person who hates technology because they can’t figure out how to work it.  But it was not: this is a woman in her early twenties who feels we are losing the generation that is the future of our nation.

Now, this is not to say technology is a bad thing; of course it isn’t.  Just that maybe it is time to stand up and show the younger generation that life isn’t all about getting the latest downloads and gadgets, and that there are alternatives, like having the physical product in the palm of your hand.

If they simply realised that you can still go to the store and buy the product, then maybe we can put a stop to losing not only this generation but our beloved high street too.

This of course is a whole separate matter…

Images reproduced from nytimes.com and inspireddribble.com

Restricting Internet Porn

‘There’s no point in closing the stable door after the horse has bolted!’  I hear some of you shout incredulously. Well let me put it another way for you in a form of a question:

Would you let your 12-year-old child sit down in the living room and watch hardcore porn on your TV?

‘No of course not!’ I hear you shout indignantly. Welcome to 2012, where your child can access adult material via any device connected to the internet. Shocked?… Well then, we need to find some way of getting that horse back into the stable or at the very least in the controlled environment of the paddock.

I’m not going to discuss if porn itself should be banned, that’s for another time. I’m going to discuss the access to porn on the internet by the under 18s. Only a complete imbecile would say that a child viewing porn at an early age would not have a detrimental effect on their behaviour and actions. It obviously does and so we will take that as a given. The question is how do we protect them from it?

Some sites have a pathetic ‘confirm you are over 18’ click-through before you can view any content considered adult. The majority of sites don’t even bother with this. I’m sure you realise it is completely ineffective and exists only to cover the website owners legally. A pretty weak legal safeguard, don’t you think?

Today, the majority of children are far more tech-savvy than their parents and can run rings around them, so it is quite easy for them to access adult material if they want to. A recent Ofcom study showed that 91% of children live in a house with access to the internet. That’s fine, I hear you say, the parents can control them. The survey goes on to say that only half of parents of children aged 5-15 supervise their child’s internet use.  They, perhaps quite understandably, do not see the danger, because when they were younger they had a computer too. So did I; it was an Acorn Electron, and if you got it to play Snake you were doing well. It couldn’t display anything close to a photo. Relying on parents to control a child’s internet access is flawed; even the best parent in the world would struggle. Three million 8-15 year olds have a smartphone, which also gives access to the internet, so unless that child lives in a Mormon household, the opportunity and the curiosity to easily view adult material are both there.

It’s normal for most young children to have access to a computer, their own computer or a phone with internet access. Fifteen to twenty years ago this was unthinkable and was reported on programmes like Tomorrow’s World (yes, I remember it, I am that old), but it’s completely natural and normal in today’s society to access information anywhere, anytime.  Just as years ago there was a debate over whether children should have a TV in their bedroom, so the debate today is over PCs. However, this is not the same argument. A TV gives access to many channels, Sky and so on, which a parent can control via the ‘parental control’ functions on the Freeview box; these channels are also regulated so that adult content is shown only after the watershed or is PIN protected. As long as a parent activates these controls, the child is protected from this passive form of media. With the internet, however, things are very different. There are millions of ‘channels’ available with no watershed and no parental controls.

Firstly, I have to say that I think the internet is perhaps one of the greatest inventions of the modern age, revolutionising civilisation. It gives you access to the entire world’s history and information, and connection to countless people, all from the comfort of your armchair. That is miraculous and amazing.

The downside is that it gives you access to the whole world’s history, information and people, all from the comfort of your armchair, without a filter. Just as the internet holds a repository of the very best of human civilisation, society and achievement, it also holds the very worst, which we’ve all seen reported in the press all too frequently: paedophiles, suicide sites and a new breed of internet user, the ‘troll’, who comments on social network sites. So what do we do to stop this?

The exponential growth of the internet caught governments and organisations by surprise. Times changed very quickly, and the very way in which we view information changed forever. Each country has been playing catch-up ever since, implementing sticking-plaster and duct-tape law.  The police recently raided the homes of a huge paedophile ring; it won’t shock anyone that the internet and technology played a big part in this group’s activities.  The police are doing the best they can but are firefighting at the moment, much like the government’s frantic introduction of new laws. We need proper, all-encompassing laws relating directly to the internet. There should be no objection to putting some form of control on the internet to protect young children, not just from pornography but from all the dimly lit parts of the internet.

The more liberal among you might say that the internet should remain completely free of control or censorship. Some of the ISPs and search engines back this argument, perhaps because any change would involve a hefty implementation cost and affect their bottom line. (Do they tell you how much they make in advertising revenue from this content?) But this isn’t a ‘freedom for the individual’ issue; it is about protecting the young and vulnerable. I think the protection of children trumps the usual blanket ‘freedom’ argument. If an adult wants to view adult material on the internet, as long as they are over 18 and the content is legal, then it’s their choice. That access would not change. There might be additional controls and settings to select, but they won’t be stopped from viewing it.

Let nobody be in any doubt: it is completely possible to restrict and control the content of the internet to prevent children viewing pornographic material. The internet in its basic form is 1s and 0s, computer code, and that code can be changed, added to and rewritten.

There are many different methods of putting these controls into place. I won’t go into them in any detail, as I do not want to be the cause of you slipping into a deep sleep or coma, but these controls and filters can be implemented at many points in the internet machine:

By ISPs at the source; by search engines and their displayed results; by browsers restricting the web pages they display; by additional control programs installed on PCs and mobiles; or by an opt-in process with proof of age on websites with adult content, to name but a few. It could be that the solution will be a combination of all of them.

It is entirely possible to do this; it just depends on the will of the people to get it done. To continue the stable-door analogy from the beginning, we must bring that horse back to the stable willingly if possible, but forcefully if necessary.

Is it going to be a simple process? No. Is it going to be a difficult, lengthy (meaning government consultation) and costly process for all parties involved? Yes, it is. But if done right it will protect children from the dangers of the internet while still allowing them access to its immense wealth of knowledge. So even if it is a costly process, isn’t it a price worth paying? They are only children at the moment, but if they are subjected to the dark side of the internet at an early age, what kind of adults will they grow up to be, and what kind of society will that create?

Image reproduced from markgarnier.co.uk

Hövding “Invisible” Helmet: No More Helmet Hair!

Is bike safety getting in the way of your personal style? Have you often preferred the prospect of a head injury instead of—god forbid—helmet hair? Then step aside, I’m going to show you something that will twist the chain on your fixie.

Enter Hövding, the next revolution in bicycle safety wear that is—dare I say it—fashionable. A giant leap away from the conventional (and ugly) stack hat, it’s worn as a scarf/hood/snood by the cyclist and like a car’s airbag, inflates when its motion sensors detect impending bitumen eating.

See this hapless crash-test dummy cop a load of moving vehicle:

The “invisible” helmet’s designers, Anna Haupt and Terese Alstin, have been shortlisted for the Design Museum’s Design of the Year 2012 award, and last year won a Danish design award.  Not cheap at £355, it is perhaps only the most dedicated of fashionistas who will invest in this product in its early stages. Although, you can’t put a price on looking good, hey?

Image reproduced from idobelieveicamewithahat.com
Video reproduced from YouTube / Hovdingsverige
Article originally published on idobelieveicamewithahat.com

Are Smartphones Changing the Way We Learn?

There’s a long-held argument that smartphones have given us access to all knowledge, but that we use them instead to look at pictures of cats and argue with each other on the internet. This isn’t always true. While we may do these things, we also use our phones constantly to look things up, whether it’s the leading actor in a film we’re watching or what exactly Heisenberg’s uncertainty principle means.

But that’s why it’s become such an incredible learning tool: the ability to look anything up immediately means that there’s no longer a barrier to absorbing new knowledge or learning new skills. Even an iPad can be a window into the works of Da Vinci, should we so wish. Twenty to thirty years ago, it wasn’t a case of simply taking the phone you got from O2 out of your pocket to learn a little more about what exactly caused the First World War; you’d have to go to a library, or even enrol on an academic course. Now, almost all the academia you’ll ever need is accessible via the same machine you’re using to order pizza and call your bank.

It’s not as though the learning is accidental, either; it’s simply more accessible than it was before. Ten years ago, the idea of sitting down to watch Breaking Bad would’ve been a pain in the backside: go and get the DVDs, a DVD player and a TV, then sit down and watch it all in standard definition. Now I can stream the same show in high definition on my mobile device.

There are also a multitude of educational apps available for people to use to expand their mental horizons. This goes for children as well, of course – there’s a considerable pile of interactive books and games that allow for their imaginations and logical reasoning skills to come out and play while using a device that’s far more intuitive to a child than a mouse and keyboard.

But smartphones are allowing us not only to look things up quickly, but to keep ourselves connected to data banks full of new possibilities. Apple runs a part of its online digital marketplace called iTunes U. Yep, iTunes University. It comes complete with several degrees’ worth of lectures in video and audio form from some of the best universities in the world, and they’re all accessible via your smartphone. This isn’t your average afternoon on Wikipedia – we’re talking actual lectures that offer a university-level education in a particular topic.

Mobile devices are also changing how we engage with the practical parts of our education. We can take notes, Skype our lecturer, use Vine for film class, and even ensure that our work is accessible from our pockets by uploading our current written musings to a Dropbox account.

Before, it was a little risky to take your phone out when you were supposed to be soaking up information – now, it’s no different to sitting down with a computer. Your smartphone has become your PC, your textbook, your notepad and your dictaphone – it’s not difficult to see why you’d be learning with it.

Quantum Computing in 2013

The introduction of classical computing took the language of classical physics (electricity and magnetism) and handed it to a new community of practitioners called computer scientists. Like most technologies, classical computers such as ENIAC (Electronic Numerical Integrator and Computer) began under the purview of engineers and progressed to a shared-services setting, where businesses could purchase time on the computer. With the help of a common simplified language and operating contexts, traditional computing moved from the scientific and government domain to use by large enterprises, and eventually to what could be considered general availability for both content (data and program) creators and content consumers.

The starting point for this simplified language was the definition of the bit, the smallest unit of information representation. The bit was a language of abstraction, a representation of electrical and/or magnetic physical properties: zero while the voltage was off and one when voltage was applied. Bits are used to represent both data and commands. To create commands, voltages were combined using circuits called gates (for example AND, OR, NAND and COPY; NAND alone is in fact universal for classical logic). These were physical arrangements of voltages implementing logical operations that combine bits in different ways.
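As a toy illustration (a Python sketch added here, not part of the original article), the classical gates just described can be modelled directly on 0/1 values, and NAND alone is enough to build the rest:

```python
# Toy model of the classical gates described above, with Python ints 0/1 as bits.
def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def NAND(a, b):
    return 1 - (a & b)

def COPY(a):
    # Fan-out: duplicate one bit onto two wires.
    return (a, a)

# NAND is universal for classical logic: for example, NOT is NAND of a bit with itself.
def NOT(a):
    return NAND(a, a)
```

Feeding a bit through these functions reproduces the familiar truth tables, e.g. NAND(1, 1) gives 0 while NAND(0, 1) gives 1.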

As programming advanced along this evolutionary sequence, not only were objects on the lower foundation layers abstracted away, but new languages of representation were produced. Nowadays it is safe to assume that a Java programmer working in an object-oriented style does not concern himself with how the bits are flipped.

 

When I interviewed Dr. Vinton “Vint” Cerf, I asked him, “What are your views or view on quantum computing in today’s world in comparison to classical computers?”


He stated, “Quantum computing (see also the D-Wave web site) has the promise of getting answers much faster FOR CERTAIN KINDS OF PROBLEMS than conventional computing. It is not a general purpose method, however, and is extremely sensitive to maintaining entanglement coherence for long enough for the computation to be performed. It appears to have application for factoring and for optimization (e.g. the traveling salesman problem). Computing is becoming a key element of everyday life, especially in conjunction with mobiles – together they harness the power of the Internet, World Wide Web and cloud computing from virtually anywhere on the globe. I am very excited about the ‘internet of things’ and also about computers that hear and see and can be part of the traditional human dialog. I like the idea of being able to have a conversation with a search engine or a discussion with a control system. Of course, Google Glass and Google self-driving cars are capturing attention wherever one goes. I am also quite excited about the extension of the Internet to interplanetary operation, as you may discover if you google ‘interplanetary internet’.”

A quantum computer is a computer that harnesses the behaviour of atoms and molecules to perform memory and processing tasks. It has the potential to perform certain calculations billions of times faster than any silicon-based computer. The field of quantum computing was first introduced in 1980 and 1981.

The classical desktop computer functions by manipulating bits, binary digits that can signify either a zero or a one. Everything from numbers and letters to the status of the modem or mouse is expressed as a collection of bits in combinations of ones and zeros. These bits correspond well with the way classical physics describes the world. Quantum computers are not restricted by this binary nature of the classical physical world. Instead, they rely on examining the state of quantum bits, or qubits, each of which might represent a one or a zero, or might exist in a superposition that is, loosely speaking, both at once.

In the classical model of a computer, the most essential building block, the bit, can exist in only one of two distinct states, a ‘0’ or a ‘1’. In a quantum computer the rules change. A qubit can not only occupy the classical ‘0’ and ‘1’ states, it can also be in a superposition of both; in this coherent state, the bit exists as both a ‘0’ and a ‘1’ at the same time. Consider a register of three classical bits: it can represent any one of the numbers from 0 to 7 at any one time. A register of three qubits, by contrast, with each qubit in a superposition, can represent all the numbers from 0 to 7 simultaneously.
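A minimal numerical sketch (an illustration added here, not from the article): a three-qubit register is described by 2³ = 8 complex amplitudes, and in an equal superposition every number from 0 to 7 is present at once:

```python
import math

# A 3-qubit register modelled as a list of 2**3 complex amplitudes,
# one per basis state |000> ... |111>.
n = 3
dim = 2 ** n
amps = [1 / math.sqrt(dim)] * dim  # equal superposition of 0..7

# The probability of observing each basis state is |amplitude|**2;
# here every number 0..7 is equally likely (probability 1/8 each).
probs = [abs(a) ** 2 for a in amps]
```

A measurement would pick one of the eight outcomes according to these probabilities, which always sum to 1.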

A processor that can use registers of qubits will essentially be able to perform calculations over all the possible values of the input registers simultaneously. This phenomenon is known as quantum parallelism, and it is the driving force behind the research presently being carried out in quantum computing.

Quantum computers differ in the way they encode a bit, the fundamental unit of information. A number, 0 or 1, specifies the state of a bit in a classical digital computer, so an n-bit binary word in a regular computer is described by a string of n zeros and ones. A qubit may be represented by an atom in one of two distinct states, which can likewise be labelled 0 or 1. Two qubits, like two classical bits, can occupy four different well-defined states (0 and 0, 0 and 1, 1 and 0, or 1 and 1).

Unlike classical bits, however, qubits can exist simultaneously as 0 and 1, with the weight of each state given by a numerical coefficient. Describing a two-qubit quantum computer requires four coefficients. In general, n qubits require 2^n numbers, which quickly becomes a sizeable set for larger values of n. For example, if n equals 50, about 10^15 numbers are needed to describe all the probabilities for the possible states of the quantum machine, a number that exceeds the capacity of the largest conventional computer. A quantum computer promises to be immensely powerful because it can be in superposition and can act on all of its potential states simultaneously; such a computer could perform myriad tasks in parallel using only a single processing unit.

Quantum computing is the art of using all of the possibilities that the laws of quantum mechanics offer us to solve computational problems. Conventional, or “classical”, computers use only a small subset of these possibilities; in principle, they calculate in the same way that people compute by hand. There are numerous results about the wonderful things humanity would be able to do with a sufficiently large quantum computer. The most significant is that we would be able to perform simulations of quantum mechanical processes in chemistry, biology and physics that will never come within the range of classical computers.


This figure demonstrates the Bloch sphere, which is a depiction of a qubit, the fundamental building block of quantum computers.

Both practical and theoretical study continues, and a number of national government and military funding agencies support quantum computing research, with the aim of developing quantum computers for both civilian and national-security purposes such as cryptanalysis.

There exist a number of quantum computing models, distinguished by the way in which the computation is organised. The four main models of practical significance are:

  1. One-way quantum computer (computation decomposed into a sequence of single-qubit measurements applied to a highly entangled initial state, or cluster state)
  2. Quantum gate array (computation decomposed into a sequence of few-qubit quantum gates)
  3. Adiabatic quantum computer, or quantum annealing (computation decomposed into a slow continuous transformation of an initial Hamiltonian into a final Hamiltonian whose ground state encodes the solution)
  4. Topological quantum computer (computation decomposed into the braiding of anyons in a 2D lattice)

The quantum Turing machine is theoretically important, but direct implementation of this model is not pursued. All four models of computation have been shown to be equivalent to each other, in the sense that each can simulate the others with no more than polynomial overhead.

In recent years there has been a great deal of controversy about the world’s only commercial quantum computer. The concern with this machine is the difficulty of deciding whether it is truly a quantum device or just a regular computer. The Canadian company D-Wave created the device, which it maintains operates on a quantum level.

Unlike a common computer, this kind of machine, called an “annealer”, cannot answer any query tossed at it. Instead, it can only answer ‘discrete optimisation’ problems: the sort of problem where a set of criteria all compete to be satisfied at the same time, and there is one best solution that meets the most of them. One example is the simulation of protein folding, in which the structure seeks a state of minimal free energy. The hope is that a quantum annealer can solve these problems much faster than a classical machine.

Professor Scott Aaronson, a theoretical computer scientist at MIT, has historically been sceptical of D-Wave’s claims. He has said that he is fairly persuaded by the data, but that plenty of important questions remain, including whether current or future versions of the D-Wave computer will truly be any faster than classical machines.

An Australian team led by researchers at the University of New South Wales has achieved a breakthrough in quantum science that brings the prospect of a network of ultra-powerful quantum computers, joined via a quantum internet, closer to reality. The team is the first to have read out the spin, or quantum state, of a single atom using a combined optical and electrical approach. The study is a collaboration between investigators from the ARC Centre of Excellence for Quantum Computation and Communication Technology based at UNSW, the Australian National University and the University of Melbourne.

UNSW’s Professor Sven Rogge said the technical feat was achieved with a single atom of erbium, a rare-earth element commonly used in communications, embedded in silicon. “We have the best of both worlds with our combination of an electrical and optical system. This is a revolutionary new technique, and people had doubts it was possible. It is the first step towards a global quantum internet,” Professor Rogge said.

Quantum computers promise to provide an exponential increase in processing power over conventional computers by using a single electron or the nucleus of an atom as the basic processing unit, the qubit. By carrying out multiple calculations simultaneously, quantum computers are projected to have applications in economic modelling, fast database searches, modelling of quantum materials, biological molecules and drugs, and the encryption and decryption of information.

The differences between quantum computers and conventional computers are:

In quantum computing, information is stored in quantum bits, or qubits. A qubit can be in the states labelled |0⟩ and |1⟩, but it can also be in a superposition of these states, a|0⟩ + b|1⟩, where a and b are complex numbers. If the state of a qubit is viewed as a vector, then a superposition of states is just vector addition. Every extra qubit lets you store twice as many numbers: with 3 qubits, you get coefficients for |000⟩, |001⟩, |010⟩, |011⟩, |100⟩, |101⟩, |110⟩ and |111⟩.

Calculations are performed by unitary transformations on the state of the qubits. Combined with the principle of superposition, this creates possibilities that are not available for hand calculations, and it translates into more efficient algorithms for, among other things, factoring, searching and the simulation of quantum mechanical systems.

Consider the QNOT. The classical NOT gate flips its input bit: NOT(1) = 0, NOT(0) = 1. The quantum analogue, the QNOT, also does this, but it flips all the states in a superposition at the same time. So if we start with 3 qubits in the state |000⟩ + |001⟩ + 2|010⟩ - |011⟩ - |100⟩ + 3i|101⟩ + 7|110⟩ and apply QNOT to the first qubit, we get |100⟩ + |101⟩ + 2|110⟩ - |111⟩ - |000⟩ + 3i|001⟩ + 7|010⟩. Beyond this, the quantum computer also differs through entanglement and quantum teleportation.
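That kind of simultaneous flip can be checked with a small Python sketch (hypothetical illustrative code, not part of the article), representing a state as a mapping from basis strings to complex coefficients:

```python
# QNOT on qubit i flips that bit in every basis state of the
# superposition simultaneously, carrying the coefficients along.
def qnot(state, i):
    flipped = {}
    for basis, coeff in state.items():
        bits = list(basis)
        bits[i] = '1' if bits[i] == '0' else '0'
        flipped[''.join(bits)] = coeff
    return flipped

# The 3-qubit state from the text:
# |000> + |001> + 2|010> - |011> - |100> + 3i|101> + 7|110>
state = {'000': 1, '001': 1, '010': 2, '011': -1,
         '100': -1, '101': 3j, '110': 7}
after = qnot(state, 0)  # flip the first qubit in every term
# after matches |100> + |101> + 2|110> - |111> - |000> + 3i|001> + 7|010>
```

Every coefficient survives unchanged; only the basis labels are relabelled, which is exactly what the worked example in the text shows.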

The quantum property of entanglement has a fascinating history. Einstein, who claimed that “God does not play dice with the universe”, used the property of entanglement in 1935 in an attempt to show that quantum theory was incomplete. Albert Einstein, Boris Podolsky and Nathan Rosen observed that the state vectors of certain quantum systems were correlated, or “entangled”, with each other: if one modifies the state vector of one system, the corresponding state vector of the other system changes instantaneously, independently of any medium through which a communicating signal would have to travel. Since nothing can move faster than the speed of light, how could one system arbitrarily far away affect the other? Einstein termed this “spooky action at a distance”, and it demanded a picture of reality at odds with the science of the day. He favoured the idea that some unknown “hidden variables” were determining the results, and since they weren’t known, quantum theory must be incomplete.

In 1964, John Bell showed that no local hidden-variable theory could reproduce all the predictions of quantum mechanics, which implied that spooky action at a distance was real. Later, in 1982, Alain Aspect performed an experiment demonstrating that Bell’s theorem, as it became known, had experimental validity: either faster-than-light communication was occurring or some other mechanism was at work. This basic result marks the difference between traditional ideas of reality and quantum ideas of reality.

Throughout all of history before, every physical phenomenon relied on some force and some particle to carry that force, so the speed-of-light restriction applied: electromagnetic forces are carried by the photon, gravitational forces (it is conjectured) by the graviton, and so on. With entanglement, however, quantum systems are connected in a manner that does not involve a force, and the speed-of-light restriction does not apply. The actual mechanism by which one system affects the other is still unknown.


1. Collapse of the State Vector

When two quantum systems are created while conserving some property, their state vectors are correlated, or entangled. For example, when two photons are created with their total spin conserved, of necessity one photon has a spin of 1 and the other a spin of -1. Measuring one of the photons’ state vectors collapses it into a definite state; instantaneously and automatically, the state vector of the other photon collapses into the other identifiable state. When one photon’s spin is measured and found to be 1, the other photon’s spin of -1 immediately becomes known as well. There are no forces involved and no known description of the mechanism.
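As a toy sketch (pure bookkeeping in Python, not real quantum dynamics; added here for illustration), the anticorrelation described above looks like this:

```python
import random

# Two photons created with total spin conserved: measuring one fixes
# the other's outcome to the opposite value, with no signal involved.
def measure_entangled_pair():
    spin_a = random.choice([1, -1])   # outcome of measuring photon A
    spin_b = -spin_a                  # photon B collapses to the opposite spin
    return spin_a, spin_b
```

However many times the pair is measured, the two spins always sum to zero; what this classical sketch cannot capture is that no hidden variable fixed the outcomes in advance.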

2. Quantum Teleportation

The principle of entanglement enables a phenomenon termed “quantum teleportation”. This type of teleportation does not involve moving an object from one physical position to another, as in popular science fiction stories, but rather the disintegration of the original and the recreation of a matching duplicate at another location.

3. Brassard’s Theoretical Circuit

In 1996, Gilles Brassard devised a quantum circuit that could build and entangle pairs of qubits, with one qubit entangled with two others. In outline, “Alice’s” circuit entangles three bits (M, A and B) and communicates M to “Bob”. Bob’s circuit, using the information from M, produces a replica of bit B. The immediate effect on B of measuring M is effectively a teleportation of qubit B.

For purposes of discussion, and at the risk of oversimplification, the gates marked L, R, S and T are referred to as left-rotation, right-rotation, forward-phase-shift and backward-phase-shift gates, respectively. The XOR gate is drawn as a circumscribed cross. These gates can bring about entanglement when qubits are passed through them.

Classical computers differ from quantum computers in that information is stored in bits, which take the discrete values 0 and 1. If storing one number takes 64 bits, then storing N numbers takes N times 64 bits. Calculations are done essentially the same way as by hand, so the class of problems that can be solved efficiently is the same as the class that can be solved efficiently by hand. Here “efficiently” means that the running time does not grow too quickly with the size of the input.
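The contrast in scaling can be made concrete with a small sketch (the 64-bit figure comes from the text above; the exponential 2**n count of amplitudes is the standard description of an n-qubit register):

```python
# Classical storage grows linearly with the number of values stored.
def classical_bits(n_numbers: int, bits_per_number: int = 64) -> int:
    return n_numbers * bits_per_number

# A register of n qubits, by contrast, is described by 2**n complex
# amplitudes, which is why simulating one classically gets hard fast.
def quantum_amplitudes(n_qubits: int) -> int:
    return 2 ** n_qubits

print(classical_bits(10))       # 640 bits for ten 64-bit numbers
print(quantum_amplitudes(10))   # 1024 amplitudes for ten qubits
print(quantum_amplitudes(50))   # 1125899906842624 -- already unmanageable
```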

Applications that are intractable today may become possible with quantum computers, and spin-off concepts such as quantum teleportation open prospects only imagined before. In conclusion, quantum computers are approaching maturity, and they will demand a new way of looking at computing.

New Battery Technology


In a world of developing technology, every enthusiast loves it when a new gadget is invented. But have you ever noticed how annoying it is when a phone battery drains in a short period of time? Suppose you are in the middle of an important business call, or any conversation a customer considers significant, and the phone suddenly dies until it can be recharged. Now there is an upgrade in the battery sector: a new lithium-ion microbattery design that its creators claim is 2,000 times more powerful and recharges 1,000 times faster than competing batteries. As the scientists put it, this is not merely an evolutionary step in battery technology: “It’s a new enabling technology… it breaks the normal paradigms of energy sources. It’s allowing us to do different, new things.” The old, bulky lithium-ion battery has finally been updated with enough power to jump-start a car battery and still recharge in seconds.

So how does this sort of power relate to real-world scenarios? The batteries could, in theory, let devices broadcast radio signals 30 times farther than normal, or allow the devices themselves to be about 30 times smaller. The fact that the microbatteries recharge 1,000 times faster than existing technologies makes the product extraordinary: devices could be powered for days and recharged in seconds. William P. King uses medical devices and implants as an illustration: “Where the battery is an enormous brick, and it’s connected to itty-bitty electronics and tiny wires. Now the battery is also tiny.” Many consumers assume that bigger means better, but this battery demonstrates that in electronics the opposite can hold.

At present, technology in general is limited by battery technology, particularly in medicine. There are several types of hearing-aid batteries, categorized by size and color coding; the size required depends on the particular type and model of hearing aid. Most hearing aids use one of five standard button-cell sizes, each represented by a color code. Cochlear implants serve those with severe hearing loss or deafness; their batteries are typically zinc-air cells of about 1.45 volts, which last longer and need changing less frequently. A miniature battery such as this new lithium-ion design could let engineers create tablets that are easy to hold in one hand for long periods, and laptops that are no longer burdensome to carry. Additionally, the new battery is less prone to fire hazards because it can function within a wider temperature range and better manage its internal temperature changes. It could be used in boats, homes and grid storage, as well as plug-in vehicles, which are subject to a “large and growing sales pipeline.”

It is remarkable that advances in battery technology may one day help solve the global energy crisis. The improvements being made are “mind-boggling” for those unaware of how they are achieved. Devices are now harvesting power from almost every imaginable source, and battery research is under way at universities across the globe. From cell phones to cars and everything in between, there may sooner or later be nothing required of us but to use the device. So although the technology behind the lithium-ion battery has not yet fully matured, these batteries are already the type of choice in many consumer electronics; they also have one of the best energy-to-mass ratios and a very slow loss of charge when not in use.

Technology Updates

“Machines take me by surprise with great frequency”. – Alan Turing

Thanks are due to Gordon Moore’s law, which describes a driving force of technological and social change in the late 20th and early 21st centuries. Computers are still evolving at an amazing speed, and PC and laptop accessories seem to advance at the same rate. G-Technology has announced its latest gadget for the computing world and the creative-professional market, the G-DRIVE PRO: a high-performance storage solution built around the screaming-fast Thunderbolt interface. A product like this is welcome today because it delivers flexible, extreme-performance storage for Thunderbolt-enabled computers. Over the years most of us have owned or had access to various brands of computer, such as Hewlett-Packard, Alienware, Apple and Toshiba (to name a few), for communication or leisure, and manufacturers upgrade their products yearly so that customers can enjoy the latest devices. Imagine a world without computers, as in the Middle Ages, and anyone living in modern society will automatically picture a sluggish, draconian place.

This new storage solution is designed chiefly for photographers and film-makers. Capacity and performance requirements for 2K, 4K and other media formats vary by camera, pixel size, frame rate, bit depth and color model, and by whether the media is compressed or uncompressed. The G-DOCK ev with Thunderbolt, together with the G-DRIVE ev and G-DRIVE ev PLUS external hard-drive modules with USB 3.0, delivers quick content transfers for camera work and for 2K and 4K digital-cinema assignments, with simple storage expansion. The G-DOCK ev with Thunderbolt is the only two-bay solution with interchangeable, rugged storage modules flexible enough to be taken into the field and used as true standalone external drives.

New high-resolution media formats and their associated file sizes require massive amounts of storage space, along with ever-faster data-transfer rates, so that content can be edited and distributed efficiently throughout the workflow. The numbers are astounding: uncompressed 2K digital video consumes more than one terabyte (1TB) per hour and requires roughly 305 megabytes per second (MB/s) of sustained throughput for smooth editing without dropped frames. Such high-resolution video formats, alongside the rising popularity of higher-megapixel DSLR cameras used to capture professional 2K video, have created a need for a new high-performance storage solution that can serve today’s digital content creators.
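The arithmetic behind those figures can be reproduced under one common set of assumptions: 2K full-aperture frames (2048 x 1556), 10-bit RGB packed into 4 bytes per pixel as in DPX files, at 24 frames per second. These parameters are my illustration, not figures stated by the manufacturer:

```python
# Assumed format (illustrative, not from the press release):
# 2K full-aperture frames, 2048 x 1556 pixels, 10-bit RGB
# packed into 4 bytes per pixel (DPX-style), at 24 frames per second.
WIDTH, HEIGHT = 2048, 1556
BYTES_PER_PIXEL = 4   # three 10-bit channels padded into one 32-bit word
FPS = 24

bytes_per_second = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS
mb_per_second = bytes_per_second / 1e6       # decimal megabytes
tb_per_hour = bytes_per_second * 3600 / 1e12  # decimal terabytes

print(f"{mb_per_second:.0f} MB/s sustained")  # ~306 MB/s
print(f"{tb_per_hour:.2f} TB per hour")       # ~1.10 TB/hour
```

Under these assumptions the sustained rate lands almost exactly on the 305 MB/s quoted above, and an hour of footage comes to just over a terabyte.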

Patience is a valuable virtue, but in computing everyone prefers products that work smoothly and quickly; it is dispiriting to use an outdated gadget that does not operate properly. If you value the appearance of your external hard drive as much as its performance, and you work on 2K or 4K video editing, then G-Technology’s G-DRIVE PRO with Thunderbolt should appeal. It pairs a single 3.5-inch hard drive with the fast Thunderbolt interface to offer sustained data-transfer rates of up to 480 MB/s, while still providing high-capacity storage of up to 4TB.

For those working with high-resolution video, it is pleasing to know that the drive supports compressed 4K and multiple streams of 2K, HDV, DVCPRO HD, XDCAM HD and ProRes 4444, as well as uncompressed SD workflows. The device will be available this summer through G-Technology and its Premier Channel Partners, priced at $699.95 and $849.95 for the 2TB and 4TB models, respectively. With up to 4TB of capacity, the G-DRIVE PRO with Thunderbolt ships Mac-formatted and Apple Time Machine ready.

Photographers and film-makers who want a head start on producing 4K content (or the 2K video that the next generation of DSLRs should be capable of) will be particularly interested in the G-DRIVE PRO with Thunderbolt. At 480 MB/s it has the fastest read-write speed of any G-Technology hard drive, meaning a single drive can sustain unedited 2K content (which needs about 305 MB/s) at roughly a terabyte per hour. Users who want to exploit more of Thunderbolt’s available bandwidth (around 750 MB/s in practice) can daisy-chain two G-DRIVE PROs for a combined 960 MB/s, quick enough to handle unedited 4K video.

The product highlights are:

  • eSATA, FireWire 400/800, USB 2.0
  • High-performance 7200 rpm drive speed
  • Up to 64 MB cache
  • Data transfer rates up to 100 MB/s
  • Ideal for audio/video/photo applications
  • Stylish industrial aluminum enclosure with an integrated heat-sink for near-silent operation
  • Pre-formatted for Mac and Apple Time Machine ready; works equally well with Windows-based systems
  • Includes an industry-leading three-year limited warranty

A summary of its kit contents:

  • G-DRIVE PRO external hard drive
  • (1) Thunderbolt cable
  • Universal AC power supply
  • Quick start guide
  • 3-year limited warranty

 

MSRP: 2TB – $699.95; 4TB – $849.95

Dimensions: 2.68″ x 9.25″ x 5.12″ / 68 x 235 x 130 mm

Weight: 3.13 lbs / 1.42 kg

Type: SATA III

Storage Capacity: 2TB, 4TB

Drive Speed: 7200 RPM

Compatibility: Mac® OS® 10.7; 2x Thunderbolt ports

Award-winning director, producer and photographer Vincent Laforet, leading adventure photographer Lucas Gilman, and Vienna-based award-winning film-maker, director of photography and producer Nino Leitner are all supporters of G-Technology. They hosted in-booth workshops (G-Technology Booth #SL12105), showcasing and discussing their artistic approaches, gear and workflows. G-Technology aims to make the workflow simpler, better and faster; its focus covers transporting, editing, distributing and storing content. Its high-performance portable and desktop drives, flexible transfer/edit solutions and fast RAID systems are all built for professional content-creation environments where performance and reliability are paramount. Because G-Technology has a reputation for the highest standards, its products can be found in top post-production facilities worldwide.

It is fair to say that storage needs increase every day: faster internet connections let people download enormous amounts of data in astonishingly short periods of time. G-Technology storage solutions are engineered precisely to meet the needs of the content-creation and Apple Mac communities, including heavy users of multimedia content, Final Cut Pro® and Adobe® Premiere® Pro audio/video specialists, and other pre- and post-production professionals.


The New Age of Computing

“In attempting to construct such (artificially intelligent) machines we should not be irreverently usurping His (God’s) power of creating souls, any more than we are in the procreation of children. Rather we are, in either case, instruments of His will providing mansions for the souls that He creates.” ― Alan Turing

When Windows 8 launched last year, a wave of hybrid notebooks designed to take advantage of the touch-optimized operating system began to appear on the market, and the trend has clearly continued into 2013. The HP Envy x2 is one such product; it was first presented at CES 2013 in January and made its way around the globe earlier this month.

With its smooth metallic colorway, the HP Envy x2’s physical design suits its place in the company’s premium Envy series. Its hybrid design lets it work as a tablet as well as a normal notebook when attached to its keyboard, and it is equipped with an 11.6-inch IPS touch display with a resolution of 1366×768 that supports up to five simultaneous touch points.


When I interviewed Professor Leonard Adleman, I asked him a general question: “What motivates you?”

He stated, “I am motivated by the beauty of mathematics.”

Reviewing this product, now on the market, I thought in terms of applied mathematics that it would be excellent for mathematicians who work on practical problems. Most people, young or old, know how to calculate things in one way or another; whenever someone buys or sells an item, for example, he or she uses the logical, reasoning part of the brain to carry out the transaction.

Consider a scenario in which a software company wants a feasibility study: a report that aims to detail the factors that will determine the success or failure of a project. The various elements that constitute the system requirements need to be identified and thoroughly assessed. The HP Envy x2 is a suitable product for a ‘High Tech Restaurant & Bar’ that needs a technological upgrade of the establishment. For any business to operate efficiently on a daily or weekly basis, certain groups of people are involved in the process. They include:

  1. The end-users: the prospective users would comprise the chef, kitchen staff, waiters, bartenders, managers and accounting staff.
  2. The managers: the client’s management comprises the managers of the kitchen staff and the serving staff.
  3. Indirect beneficiaries: the client’s customers, both casual and business, are indirectly affected by the system.
  4. Maintenance and support people: the software development company and its technical staff will continue to serve and support the system for the client after launch.
  5. Regulators and standards people: this includes a systems auditor.


Furthermore, it is critical to apply elicitation techniques such as:

  1. Interviews: business data, business practices, business goals, technical information and users’ skill levels are best extracted by interviewing.
  2. Observation: the physical environment, business practices, users’ skill levels and interfaces with other systems are best obtained through observation.
  3. Scenarios or walkthroughs: technical information and business practices are best captured this way.
  4. Questionnaires: best used for gathering business data, even though responses are not always fully accurate.
  5. Brainstorming: good for understanding the client’s preferences.

 

If the software development company has decided to use the spiral model of requirements engineering to dictate the stages of the project, it would then be sensible to implement a system in which each table in the ‘High Tech Restaurant & Bar’ has this product for customers’ use. The sequence of stages is:

Quadrant 1: Information gathering makes use of the aforementioned elicitation techniques and provides an understanding of what is to be built.

Quadrant 2: Analysis and modeling is the pulling together of data from elicitation to determine whether additional information is needed.

Quadrant 3: The purpose of the feasibility stage is to determine the likely success of the project. This stage is fundamental.

Quadrant 4: The feasibility document is presented to the stakeholders for validation of the requirements’ specification.

The technological upgrade of such an establishment would require this sequence of stages, as in the case of the HP Envy x2 deployment described here.


THE OUTPUT

The output of requirements engineering would take the form of a contractual agreement, in which the client specifies their needs and the software company states how it will achieve them. The systems specification document would comprise the requirements needed by the designer. The evolutionary model should be used to construct the product, as mentioned before, to meet the client’s needs. The feasibility assessment is conducted to confirm the practicality of the system upgrade.

THE FEASIBILITY ASSESSMENT

From my perspective, this is a new endeavour, but based on case studies it can be done. With the upgrading of the network and the installation of hardware and software, together with proper training, the solution is viable for the client.

With the results from Elicitation, the upgrade in ‘High Tech Restaurant & Bar’ in its data processing and other services is feasible.

Given that the software company is adding to the current system, the input information required already exists. For the upgrade to succeed, new hardware and software would need to be purchased and changes made to the existing infrastructure. Despite the intricacies of the project, it can be delivered according to a detailed project plan. Training users, updating hardware and practising safety procedures would remove likely points of failure, allowing the system to be used to its fullest potential.

Technology, Time and Ageing

A question that polymaths, scientists, technology enthusiasts and intellectuals have pondered ever since educational institutions were introduced is: can the length of human life be extended? Many people wish to look younger as their features age, turning to surgical procedures and cosmetics as part of humanity’s extended phenotype. Technology has extended the phenotype of man to unprecedented heights; human technologies differ from animal technologies in their inventiveness, diversity and sophistication. Noted experts throughout the ages have searched for an answer to the question: “Can one turn back the clock of time?” Although time travel has been a traditional plot device in science fiction since the late 19th century, and the theories of special and general relativity allow forms of one-way travel into the future via time dilation, it is currently unknown whether the laws of physics would permit time travel into the past.

Some theories, most notably special and general relativity, propose that suitable geometries of spacetime, or specific types of motion in space, might allow time travel into the past and future if those geometries or motions are possible. In technical papers, physicists generally avoid the everyday language of “moving” or “travelling” through time (“movement” normally refers only to a change in spatial position as the time coordinate is varied), and instead discuss the possibility of closed timelike curves: world lines that form closed loops in spacetime, allowing objects to return to their own past. There are known solutions to the equations of general relativity that describe spacetimes containing closed timelike curves (such as Gödel spacetime), but the physical plausibility of these solutions is uncertain.

Many in the scientific community believe that backwards time travel is highly implausible. Any theory that would allow it would require that problems of causality be resolved. The classic example is the “grandfather paradox”: what if one were to go back in time and kill one’s own grandfather before one’s father was conceived? However, some scientists believe that paradoxes can be avoided by appealing either to the Novikov self-consistency principle or to the notion of branching parallel universes.

Nevertheless, the theory of general relativity does suggest a scientific basis for the possibility of backwards time travel in certain unusual scenarios, although arguments from semiclassical gravity suggest that when quantum effects are incorporated into general relativity, these loopholes may be closed. These semiclassical arguments led theoretical physicists to formulate the chronology protection conjecture, suggesting that the fundamental laws of nature prevent time travel, but physicists cannot come to a definite judgment on the issue without a theory of quantum gravity to join quantum mechanics and general relativity into a completely unified theory.

Dr. Bill Andrews has spent two decades unpicking the molecular mechanisms of aging. His stated mission is to extend the human lifespan to 150 years, or die trying. In the 1990s, as director of molecular biology at the Bay Area biotech firm Geron, Andrews led a team of researchers that, in alliance with a laboratory at the University of Colorado, just barely beat the Massachusetts Institute of Technology in a furious, near-decade-long race to identify the human telomerase gene. That this basic science took on the trappings of a frantic great race is a testament to the biological preciousness of telomerase, an enzyme that maintains the ends of our cells’ chromosomes, called telomeres.

Telomeres get shorter each time a cell divides, and when they get too short the cell can no longer make fresh copies of itself. If humans live long enough, the tissues and organ systems that depend on continued cell replication begin to falter: the skin sags, the internal organs grow slack, and the immune response weakens to the point where the next chest flu could be the last. Telomerase was first discovered by Elizabeth Blackburn and Carol W. Greider, who, together with Jack W. Szostak, were awarded the 2009 Nobel Prize in Physiology or Medicine for the work. But what if bodies could be induced to express more telomerase? That is what Dr. Andrews intends to achieve in order to prolong human life, and success would rank among the great breakthroughs in biology.
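A toy model makes the telomere clock concrete. The numbers below are hypothetical placeholders chosen purely for illustration (real telomere lengths and loss rates vary widely), but the structure shows why the number of divisions is limited and how telomerase changes the picture:

```python
# All constants are hypothetical, chosen only to illustrate the mechanism.
TELOMERE_START = 10_000    # starting telomere length in base pairs
LOSS_PER_DIVISION = 100    # base pairs lost at each cell division
CRITICAL_LENGTH = 4_000    # below this, the cell stops dividing
MAX_DIVISIONS = 10_000     # safety cap for the fully-repaired case

def divisions_until_senescence(telomerase_repair: int = 0) -> int:
    """Count divisions until the telomere reaches the critical length.

    `telomerase_repair` is how many base pairs the enzyme restores per
    division; with full repair the count runs into the safety cap."""
    length, divisions = TELOMERE_START, 0
    while length > CRITICAL_LENGTH and divisions < MAX_DIVISIONS:
        length -= LOSS_PER_DIVISION - telomerase_repair
        divisions += 1
    return divisions

print(divisions_until_senescence(0))    # 60: the unaided division limit
print(divisions_until_senescence(50))   # 120: partial repair doubles it
print(divisions_until_senescence(100))  # 10000: full repair hits the cap
```

The unaided case echoes the Hayflick limit of roughly 40 to 60 divisions observed in cultured human cells, while the full-repair case is the scenario Andrews is chasing: a cell whose replication clock never runs out.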

Image reproduced from http://skin-carereviews.com/