Nov 26, 2014
 

Changing Web usage is hard. Google has granted a few extra months of leeway to those who rely on a handful of popular plug-ins, such as Silverlight, to extend what their browser can do.

Instead of cutting off all old-style browser plug-ins at the end of 2014, Google has given a temporary break to people who rely on plug-ins that extend the abilities of its Chrome browser.

The company is gradually banning plug-ins that hook into the browser using a mechanism called NPAPI (Netscape Plugin Application Programming Interface) that’s more than a decade old. But it’s been tough getting Chrome users to completely stop using those plug-ins.

In September 2013, Google announced its plan to cut off support for NPAPI plug-ins. But it took a phased approach that still permitted the most popular ones: Microsoft’s Silverlight, Unity Technologies’ Web Player, Oracle’s Java, Facebook’s video-calling tool, and Google’s own Google Talk and Google Earth plug-ins.

Google decided not to leave plug-in-reliant customers in the lurch quite as soon as it had planned. Justin Schuh, a Google programmer on Chrome’s security team, explained why in a blog post Monday:

Although plugin vendors are working hard to move to alternate technologies, a small number of users still rely on plugins that haven’t completed the transition yet. We will provide an override for advanced users and enterprises (via Enterprise Policy) to temporarily re-enable NPAPI while they wait for mission-critical plugins to make the transition.

Good riddance

After years of slow going, the Web programming world is now working productively to expand the Web’s possibilities not with plug-ins, but rather with new Web standards like HTML5’s video and audio support. Plug-ins date back to the era when Microsoft’s Internet Explorer ruled the roost but Web standards stagnated. Now the browser market is highly competitive, and plug-ins are on their way out.

And good riddance: plug-ins don’t work on smartphones and tablets, they’re hard to maintain, they’re a bother for users to install, and they’re a top culprit in browser crashes, slowdowns and security vulnerabilities.

Plug-ins aren’t totally disappearing from Chrome, however. Google will continue to support plug-ins that use its own PPAPI (Pepper Plugin API) indefinitely, a category that includes the most widely used browser plug-in, Adobe Systems’ Flash Player.

Google has been working to add new interfaces to its preferred system for extending Chrome’s abilities, called extensions, and has shifted its own Hangouts app to Web standards.

Some of the affected plug-ins are still fairly common. Among Chrome users, Silverlight was launched 15 percent of the time in September 2013, falling to 11 percent of the time in October 2014. Java dropped from 8.9 percent to 3.7 percent over the same period. Google Earth plunged from 9.1 percent to 0.1 percent.

Three-step removal over 2015

Initially, Google said it estimated it would completely remove Chrome’s NPAPI support by the end of 2014, subject to usage patterns and feedback. Now it’s pushed that back, but the ban will still continue over a three-step process in 2015.

The first step, in January 2015, will be to begin blocking even whitelist-permitted NPAPI plug-ins by default — a setting that can be overridden.

The second step, in April 2015, will be to disable Chrome’s ability to run plug-ins at all unless a user specifically enables it by setting a flag — chrome://flags/#enable-npapi — in Chrome’s technical preferences. Google also will remove all NPAPI plug-ins from its Chrome Web Store at this stage.

The last step, in September 2015, will be to completely remove all ability to run NPAPI plug-ins from Chrome.

Google also recommends plug-in programmers look to its NPAPI deprecation guide for advice.

“With each step in this transition, we get closer to a safer, more mobile-friendly Web,” Schuh said.

 

Source: CNET, by Stephen Shankland

Apr 16, 2014
 

One of the main features found on the Samsung Galaxy S5 – the fingerprint scanner integrated into the home button – can easily be fooled by hackers looking to gain access to the device, according to a report from Germany’s Security Research Labs.

To bypass the fingerprint scanner’s security lock, the team created a wood-glue spoof of a fingerprint using an etched PCB mold, starting from a latent print left on a smartphone display and photographed with an iPhone 4S. With very little effort, the spoofed fingerprint can be swiped across the sensor, and the Galaxy S5 accepts it as a real finger and grants immediate access.

Even more concerning is that the fake fingerprint can be used to access a victim’s PayPal account, since the PayPal app on the Galaxy S5 supports fingerprint authentication. The Security Research Labs team was able to access a PayPal account, transfer funds and make purchases using their wood-glue spoofed fingerprint, a process made easier by the fact that the sensor allows unlimited swipe attempts, giving hackers plenty of time to perfect a spoof that is rejected the first few times.

The system would be more secure if it required a password after a number of failed fingerprint attempts, as is the case on the iPhone 5S. That said, the iPhone 5S’ fingerprint scanner is still vulnerable, having fallen to hackers within 48 hours of its release.

Some people criticized the fingerprint hacking method as unrealistic in the real world, but Security Research Labs dismissed these claims, stating that hackers have “incentive to steal digital fingerprint scans and learn how to mass-produce spoofs” when fingerprint security is implemented poorly. Anyone who steals a device may have access to a high-quality fingerprint on the handset itself, and the method for producing a spoof isn’t highly complex.

Source: Tech Spot

Sep 04, 2013
 

 Talk. Tap. Share. That’s the motto behind a new project on Kickstarter called Kapture, an always-on wristband that allows users to save and share the last 60 seconds of audio it records. With a simple tap, the past minute is captured and synced to your smartphone via Bluetooth.
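Mechanically, “save the last 60 seconds” is just a rolling buffer. The sketch below is purely illustrative (it is not Kapture’s firmware, and the frame duration is an assumption); it shows how a fixed-length deque keeps only the most recent minute of audio, with a tap simply freezing whatever is in the buffer at that moment.

```python
from collections import deque

# Illustrative only -- not Kapture's actual firmware.
# Assumption: the microphone delivers fixed-size PCM frames of 0.1 s each.
FRAME_SECONDS = 0.1                      # duration of one audio frame (assumed)
BUFFER_SECONDS = 60                      # how much history to keep
MAX_FRAMES = int(BUFFER_SECONDS / FRAME_SECONDS)

class RollingRecorder:
    def __init__(self) -> None:
        # A deque with maxlen silently drops the oldest frame as new ones arrive.
        self._frames: deque = deque(maxlen=MAX_FRAMES)

    def on_audio_frame(self, frame: bytes) -> None:
        """Called continuously by the audio driver with each new frame."""
        self._frames.append(frame)

    def on_tap(self) -> bytes:
        """Freeze and return the last 60 seconds of audio for syncing to the phone."""
        return b"".join(self._frames)
```

Until the wearer taps, nothing older than a minute ever persists, which is the behavior the legal argument later in this piece leans on.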

Kapture isn’t intended to be the next James Bond spy gadget but rather a handy piece of wearable technology that lets you capture life’s little moments as they happen. Examples include storing funny, insightful or heartwarming phrases, but the possibilities here are limitless.

Once a clip is captured, it’s sent to the Kapture app on your smartphone. From here, it can be edited down, renamed and even have a photo attached to it, much like you can do with video on Instagram. Should you not have your smartphone handy when a recording is captured, it will be stored on the wristband for syncing later. We’re told that roughly 25 recordings can be stored locally.

Legally speaking, Kapture presents a unique situation, as 12 states have what’s known as a two-party consent law. This means that a person recording a conversation must notify the other participants that a recording is taking place. But since Kapture doesn’t actually save audio until the user taps it, a recording doesn’t technically exist and no laws are broken.

Kapture launched on Kickstarter earlier today with a goal of raising $150,000 over the next month. It’s off to a solid start with more than $8,000 in pledges from more than 80 backers as of writing. An investment of $75 is all that’s needed to be one of the first to receive the wristband. The project has an estimated delivery date of March 2014.

Source: TechSpot

Aug 27, 2013
 

Xbox One vs PS4 

Sony’s presentation at the Gamescom conference in Germany on Aug. 20 wasn’t too exciting, but it did help drive home one of Sony’s most important points: It aims to offer a bigger variety of games at a better value than Microsoft.

The new console wars won’t really get underway until both the Xbox One and the PlayStation 4 hit shelves this November, but so far, Sony has a definitive edge, and it’s not hard to see why. Microsoft made a series of consumer-unfriendly decisions early in the Xbox One’s life cycle, and Sony has capitalized on each of those mistakes.

Microsoft’s missteps

It’s easy to forget now, but the PS4-reveal event on Feb. 20 was not that great. Solid facts about the console were hard to come by, and Sony did not even display the console design itself until its E3 presentation on June 10.

Microsoft debuted the Xbox One on May 21 but did not answer many pressing questions about the console: How much would it cost? Would it need a constant Internet connection? Would it support used games? Gaming also constituted only a very small portion of the reveal; a comprehensive list of games for the new system would have to wait another month.

Microsoft’s E3 conference did not assuage many fears. Although the company exhibited a number of interesting titles, there were a few overarching themes: games where you shoot things, games where you drive cars and games where you play sports.

Add in a Kinect camera that can’t be turned off, a required online check-in every 24 hours, an Xbox Live Gold subscription ($60 per year) to watch streaming video or access an Internet browser, arcane rules that put draconian restrictions on sharing and reselling games, and a $500 price tag, and Microsoft had effectively repelled a huge swath of gamers who had been so eager to buy the next Xbox.

Sony, however, did not disappoint. In addition to reaffirming support for its PS3 and Vita consoles (the Xbox 360 is not likely to have a long life span once the Xbox One debuts), the Japanese electronics giant exhibited all sorts of different games. Sure, there were shooting, driving and sports games, but there were also role-playing games, puzzle games, platformers and a strong focus on indie titles.

The hits kept coming: The PS4 would cost $400, offer free access to an Internet browser and streaming video services, require no online check-in, and have no restrictions whatsoever on used or borrowed games.

Sony even went so far as to release an “Official PlayStation Used Game Instructional Video,” wherein two Sony employees provided step-by-step instructions on how to share games on the PS4: Walk up to your friend, hand over the game and go about the rest of your day. The process was much simpler than the Xbox One’s digital license transfers and extra fees.

Source: TechNewsDaily

Aug 27, 2013
 

Apple’s cheaper iPhone 5C doesn’t officially exist, but plenty of gossip suggests that it does. CNET details what we know, what we think we know, and what we don’t know.

The low-cost iPhone continues to be one of those rumors that just won’t quit. But as we near the magical month of September, a time when Apple announced new handsets in both 2011 and 2012, the rumor finally appears to be close to reality.

As Josh Lowensohn said earlier this week, despite Apple’s vow to clamp down on leaks, the last few weeks have delivered a steady stream of gossip about a cheaper iPhone, which the tech blogosphere has collectively dubbed the “iPhone 5C” (the “C” denoting the multicolored backs, or simply “cheaper”); the official product name is anyone’s guess.

We’ve seen some alleged specs and a few credible photos not taken by the usual Mr. Blurrycam. Of course, Apple has yet to comment on the dish and won’t do so until it’s good and ready. So until then, here’s what we know about this still elusive — but increasingly certain — device.

What we know

Frankly, not much of anything. Yes, it will be less expensive, but that’s not exactly a cogent analysis of the 5C chatter.

What we think we know

When it will be announced
AllThingsD reported two weeks ago that Apple will hold its next iPhone reveal event on September 10. If that’s true — and we’d bet it is, given AllThingsD’s reliable track record in predicting these dates, and Apple’s recent release schedule — then we should see both the iPhone 5C and the next-generation iPhone 5S.

The true cost
We won’t believe anything until we hear it from CEO Tim Cook, but Morgan Stanley predicts that it will cost between $349 and $399 unlocked (or, at least, off-contract). Though that’s significantly more than what the 16GB iPhone 5 costs with a contract ($199), it’s a big savings from the $450 that Apple currently charges for an unlocked 8GB iPhone 4. Carrier subsidies would change that dynamic, but the 5C may be sold only without a contract.

A plastic back
Apple needs to make the 5C cheaper somehow, and a plastic body would be a great way to do it. Not only is plastic an easier material to mold than aluminum, but Morgan Stanley estimates that using it could cut the cost of the 5C’s mechanical parts in half, from $33 to $16.


If you’re wondering whether plastic will make the 5C less durable, the answer is not necessarily. Remember that Apple used plastic on both the iPhone 3G and 3GS without causing a rash of broken handsets. What’s more, though the switch to a glass (iPhone 4 and 4S) and then metal body (iPhone 5) has seemed like a move toward more durability, anyone who’s cracked the rear of an iPhone after dropping it will disagree.

Fewer features
That’s likely since Apple will have to find other ways to save dollars. Some analysts think Siri, which first appeared in the iPhone 4S, is a likely candidate for the axe, but we also may see a different screen resolution, less memory capacity, no LTE, or a less powerful camera. It’s also probable that the 5C won’t include any brand-new features that we might see in the 5S, such as the rumored fingerprint sensor. Or perhaps most of the main features will be intact, but it will simply have an older or slower processor (like the current iPad Mini versus the full-size iPad).

A world of colors
While the current iPhone is only available in black or white (with gold/champagne likely on deck for the 5S), it appears the basic iPhone will follow the iPod “rainbow” approach, with availability in a wider range of colors.

In addition to white and black, it looks like we’ll see it in several other colors, as demonstrated in the below photo from Australian blogger Sonny Dickson.

(Credit: Sonny Dickson)

What we don’t know

Release date
If Apple announces the 5C on September 10 as we expect, then it should go on sale the following week, most likely by September 20. That 10-day gap would follow Apple’s usual pattern.

Where it will be available
After whether the handset even exists, this is one of the biggest 5C questions. Some speculation suggests that because the 5C will be made for an unlocked “bring-your-own-SIM” scenario, it may skip the carrier-dominated US market. Of course, that dynamic is changing with T-Mobile’s new contractless service plans, but we’re still waiting for the other big service providers to follow that model.


Alternatively, the 5C may be Apple’s shot at increasing its presence in developing markets (in which case, the “C” stands for “China”). Android phones, for example, range from very cheap to very expensive. The 5C could compete with budget-price Android handsets that are positioned as starter smartphones.

Until September 10

Until we know more, that’s all we can say. But if the September 10 event does happen, rest assured that CNET will be there to bring you everything that happens in full detail.

Source: CNET

Aug 27, 2013
 

Loose lips, sunken ships and — iOS 7? Sure looks that way, given an email that Siri developer Nuance sent today to developer Owen Williams, who reposted its contents suggesting that the public would get its first look at iOS 7 on September 10th.

Developers have so far gone through six beta versions of this upcoming software update for iPhones, iPods and iPads as they work out any kinks.

Source: CNET
Aug 27, 2013
 

Microsoft’s upcoming Xbox One could be produced in white to go along with the black console if a recent leaked screenshot is to be believed. But before you get too excited, the white Xbox One appears to be limited (at least initially) to Microsoft employees.

The image first surfaced on Reddit, where the original poster said it was sourced from a friend who works at Microsoft. The OP goes on to suggest that full-time employees will be the only ones to have access to the white console, a sentiment that’s echoed in several different places.

Eagle-eyed readers may have noticed a small bit of text on the front bezel of the console. It read “I MADE THIS” followed by smaller text that looks like “launch team” with the last word being too small to make out. There’s also a sentence at the bottom of the screenshot that mentions availability to employees still working at Microsoft at the time of launch.

In addition to the console, employees will be given a one year subscription to Xbox Live, all single-player games and a special achievement – all for free on launch day. Not a bad way to reward staffers, I’d say.

This white Xbox One looks very similar to a dev console that hit the web last month, minus the custom text and whatnot. Keep in mind, of course, that neither of these images has been confirmed, but it’s certainly looking like a white Xbox One is at least possible.

Source: TechSpot

Aug 27, 2013
 

New reports suggest that Google has plans to take self-driving cars to the next level, integrating its futuristic technology into a vehicle of its own, a stark contrast to its current method of modifying old Toyotas.

According to former Wall Street Journal writer Amir Efrati, Google intends to use the autonomous cars in a “robo-taxi” service. The self-driving taxis would initially be accompanied by a driver to mitigate any safety concerns, but eventually the cars would navigate the streets all by themselves. And much like the recent rollout of Google Fiber to Kansas City, Google wants to experiment with one city at a time.

Interestingly, Google’s ambition to enter the taxi industry coincides with its recent investment in Uber, a San Francisco-based startup that connects passengers with the drivers of luxury vehicles.

At first, it might seem unwise for Google to design and develop an automobile all on its own, especially when it has little to no expertise in this area. It would appear, however, that the tech giant may have had its hand forced, seeing as its efforts to reel in a major car manufacturer have failed to yield a partnership. Furthermore, the do-it-yourself attitude is a staple at Google, a company that has successfully designed its own smartphones and laptops as a way to showcase both Android and Chrome software.

To bring this idea to fruition, Google has entered into talks with auto-component companies such as Continental AG and Magna International. According to German newspaper Frankfurter Allgemeine Zeitung, Google is close to finalizing a deal with Continental, a firm that not only provides automakers with components but also aids in the vehicle assembly process.

There are numerous regulatory and political hurdles that Google must overcome, and once these concerns are laid to rest, interested buyers will have to stomach a hefty price tag. As of now, Google’s fleet of camera-retrofitted Toyotas costs approximately $150,000 apiece to develop. Needless to say, it will probably take several more years before fully autonomous vehicles become available at retail.

Source: TechSpot

Aug 23, 2013
 

The latest Android 4.3 updates brought a slate of unfortunate software bugs to the party and to Google’s own Nexus devices, ironically enough. Thankfully, the Mountain View crew is hard at work patching things up, as evidenced by the Nexus 7 update earlier today that resolved its multi-touch and GPS issues. Now those fixes are up on AOSP as well, not only with the aforementioned JSS15Q build for the 7-inch tablet, but also the JWR66Y build for the rest of the recent Nexus clan. The reasoning behind having two fixes instead of one was the addition of an extra bit of code unique to the Nexus 7; they’ll be incorporated into one patch as soon as the devs work out the kinks. Aside from patching those aforementioned bugs, the update resolved a clipboard crash issue, tweaked App Ops permissions and fixed a few extra bits of errata. If you’re not afraid of a bit of tinkering, head on over to the source to update your Nexus hardware now, or just wait for Google to release Android 4.3.1.

Aug 20, 2013
 

On September 4th, Samsung is expected to introduce the Samsung Galaxy Note III and its new Samsung Galaxy Gear smartwatch. While there seems to be more interest in the phablet, the launch of the watch is actually much more important for Samsung. With Apple apparently having difficulties developing the Apple iWatch, Samsung will have the stage all to itself for the first unveiling in this new smartwatch era.

A couple of days ago, we told you that the device is expected to run on a dual-core 1.5GHz Samsung Exynos 5412 CPU with an ARM Mali-400 MP4 GPU, 1GB of RAM and a 1.67-inch AMOLED display with a resolution of 320 x 320. The watch also includes a 2MP camera and support for both Bluetooth and NFC.

Now, some more information about the watch has leaked. At the time of the first report, we might have sounded a bit skeptical about the camera placement. But the latest speculation has the camera integrated with the strap, along with tiny speakers placed in the clasp of the watch. Furthermore, the device is expected to support Bluetooth 4.0 LE, which means it will work great with wireless monitors designed to measure your health. These include monitors that measure your heart rate and blood pressure. And because Bluetooth 4.0 LE is a low-energy standard, it won’t put much stress on your watch’s battery.
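For a sense of how the health-monitor piece could work, the sketch below is a rough illustration rather than anything Samsung has published: it uses the cross-platform Python library bleak on a PC, with a placeholder device address, to subscribe to the standard Bluetooth GATT Heart Rate Measurement characteristic and print each reading as notifications arrive (which is what keeps the power draw low).

```python
import asyncio
from bleak import BleakClient

# Standard Bluetooth GATT UUID for the Heart Rate Measurement characteristic.
HR_MEASUREMENT_UUID = "00002a37-0000-1000-8000-00805f9b34fb"

def handle_heart_rate(_, data: bytearray) -> None:
    flags = data[0]
    # Bit 0 of the flags byte: 0 -> heart rate is a uint8, 1 -> uint16 little-endian.
    bpm = data[1] if (flags & 0x01) == 0 else int.from_bytes(data[1:3], "little")
    print(f"Heart rate: {bpm} bpm")

async def monitor(address: str, seconds: int = 30) -> None:
    async with BleakClient(address) as client:
        await client.start_notify(HR_MEASUREMENT_UUID, handle_heart_rate)
        await asyncio.sleep(seconds)            # notifications arrive in the background
        await client.stop_notify(HR_MEASUREMENT_UUID)

if __name__ == "__main__":
    asyncio.run(monitor("AA:BB:CC:DD:EE:FF"))   # placeholder address for your monitor
```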

While the watch will have an accelerometer, and the screen will support the usual gestures such as swiping, there will be no way to enter text on the watch. Versions of the watch that were handed out to developers came with either Android 4.1 or Android 4.2 aboard. There is Twitter and Facebook integration out of the box, allowing you to use the smartwatch to keep up with your social networks. To sync the watch with your phone, you will need to install the Samsung watch manager app which will be available from Samsung’s own app store. That could mean that owning a Samsung Galaxy handset or tablet will be necessary if you want to use the watch to its fullest.

If the watch is released in week 40 (September 30 to October 6), as one source says it will be, and everything goes smoothly, this could be Samsung’s game to win or lose, with Apple having to play catch-up.

Source: GigaOM

Aug 19, 2013
 

NASA’s next spacecraft going to Mars arrived Friday, Aug. 2, at the agency’s Kennedy Space Center in Florida, and is now perched in a cleanroom to begin final preparations for its November launch.

The Mars Atmosphere and Volatile Evolution (MAVEN) spacecraft is undergoing detailed testing and fueling prior to being moved to its launch pad. The mission has a 20-day launch period that opens Nov. 18.

The spacecraft will conduct the first mission dedicated to surveying the upper atmosphere of Mars. Scientists expect to obtain unprecedented data that will help them understand how the loss of atmospheric gas to space may have played a part in changing the planet’s climate.

“We’re excited and proud to ship the spacecraft right on schedule,” said David Mitchell, MAVEN project manager at NASA’s Goddard Space Flight Center in Greenbelt, Md. “But more critical milestones lie ahead before we accomplish our mission of collecting science data from Mars. I firmly believe the team is up to the task. Now we begin the final push to launch.”

Over the weekend, the team confirmed the spacecraft arrived in good condition. They removed the spacecraft from the shipping container and secured it to a rotation fixture in the cleanroom. In the next week, the team will reassemble components previously removed for transport. Further checks prior to launch will include software tests, spin balance tests, and test deployments of the spacecraft’s solar panels and booms.

The spacecraft was transported from Buckley Air Force Base in Aurora, Colo., on Friday, aboard a U.S. Air Force C-17 cargo plane. Lockheed Martin Space Systems in Littleton, Colo., designed and built the spacecraft and is responsible for testing, launch processing, and mission operations.

“It’s always a mix of excitement and stress when you ship a spacecraft down to the launch site,” said Guy Beutelschies, MAVEN program manager at Lockheed Martin. “It’s similar to moving your children to college after high school graduation. You’re proud of the hard work to get to this point, but you know they still need some help before they’re ready to be on their own.”

Previous Mars missions detected energetic solar fields and particles that could drive atmospheric gases away from Mars. Unlike Earth, Mars does not have a planet-wide magnetic field that would deflect these solar winds. As a result, these winds may have stripped away much of Mars’ atmosphere.

MAVEN’s data will help scientists reconstruct the planet’s past climate. Scientists will use MAVEN data to project how Mars became the cold, dusty desert planet we see today. The planned one-year mission begins with the spacecraft entering the Red Planet’s orbit in September 2014.

“MAVEN is not going to detect life,” said Bruce Jakosky, planetary scientist at the University of Colorado Boulder and MAVEN’s principal investigator. “But it will help us understand the climate history, which is the history of its habitability.”

MAVEN’s principal investigator is based at the University of Colorado Laboratory for Atmospheric and Space Physics in Boulder. The university provides science instruments and leads science operations, education and public outreach.

by Staff Writers
Kennedy Space Center FL (SPX)

Aug 19, 2013
 

The larger of the two moons of Mars, Phobos, passes directly in front of the other, Deimos, in a new series of sky-watching images from NASA’s Mars rover Curiosity.

A video clip assembled from the images is at http://youtu.be/DaVSCmuOJwI .

Large craters on Phobos are clearly visible in these images from the surface of Mars. No previous images from missions on the surface caught one moon eclipsing the other.

Illustration Comparing Apparent Sizes of Moons

This illustration provides a comparison for how big the moons of Mars appear to be, as seen from the surface of Mars, in relation to the size that Earth’s moon appears to be when seen from the surface of Earth. Earth’s moon actually has a diameter more than 100 times greater than the larger Martian moon, Phobos. However, the Martian moons orbit much closer to their planet than the distance between Earth and Earth’s moon. Credit: NASA/JPL-Caltech/Malin Space Science Systems/Texas A&M Univ.

The telephoto-lens camera of Curiosity’s two-camera Mast Camera (Mastcam) instrument recorded the images on Aug. 1. Some of the full-resolution frames were not downlinked until more than a week later, in the data-transmission queue behind higher-priority images being used for planning the rover’s drives.

These observations of Phobos and Deimos help researchers make knowledge of the moons’ orbits even more precise.

“The ultimate goal is to improve orbit knowledge enough that we can improve the measurement of the tides Phobos raises on the Martian solid surface, giving knowledge of the Martian interior,” said Mark Lemmon of Texas A&M University, College Station. He is a co-investigator for use of Curiosity’s Mastcam. “We may also get data good enough to detect density variations within Phobos and to determine if Deimos’ orbit is systematically changing.”

The orbit of Phobos is very slowly getting closer to Mars. The orbit of Deimos may be slowly getting farther from the planet.

Lemmon and colleagues determined that the two moons would be visible crossing paths at a time shortly after Curiosity would be awake for transmitting data to NASA’s Mars Reconnaissance Orbiter for relay to Earth. That made the moon observations feasible with minimal impact on the rover’s energy budget.

Although Phobos has a diameter less than one percent the diameter of Earth’s moon, Phobos also orbits much closer to Mars than our moon’s distance from Earth. As seen from the surface of Mars, Phobos looks about half as wide as what Earth’s moon looks like to viewers on Earth.
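A quick back-of-envelope check of that “about half as wide” figure, using rounded published values (so the result is approximate):

```python
import math

# Rounded published values (approximations for a back-of-envelope check).
MOON_DIAMETER_KM = 3474.0        # Earth's Moon
MOON_DISTANCE_KM = 384400.0      # average Earth-Moon distance
PHOBOS_DIAMETER_KM = 22.0        # Phobos mean diameter (it is roughly 27 x 22 x 18 km)
PHOBOS_DISTANCE_KM = 6000.0      # orbital radius ~9376 km minus Mars' radius ~3390 km

def angular_width_deg(diameter_km: float, distance_km: float) -> float:
    """Apparent angular diameter, in degrees, of a body seen from a given distance."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

moon = angular_width_deg(MOON_DIAMETER_KM, MOON_DISTANCE_KM)        # about 0.52 degrees
phobos = angular_width_deg(PHOBOS_DIAMETER_KM, PHOBOS_DISTANCE_KM)  # about 0.21 degrees
print(f"Moon from Earth:  {moon:.2f} deg")
print(f"Phobos from Mars: {phobos:.2f} deg ({phobos / moon:.0%} of the Moon's apparent width)")
```

That works out to roughly 0.2 degrees for Phobos overhead versus about 0.5 degrees for Earth’s moon, in line with the description above.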

NASA’s Mars Science Laboratory project is using Curiosity and the rover’s 10 science instruments to investigate the environmental history within Gale Crater, a location where the project has found that conditions were long ago favorable for microbial life.

Malin Space Science Systems, San Diego, built and operates Curiosity’s Mastcam. JPL, a division of the California Institute of Technology in Pasadena, manages the project for NASA’s Science Mission Directorate in Washington and built the Navigation Camera and the rover.

More information about the mission is online at: http://www.jpl.nasa.gov/msl , http://www.nasa.gov/msl and http://mars.jpl.nasa.gov/msl/ .

You can follow the mission on Facebook and Twitter at: http://www.facebook.com/marscuriosity and http://www.twitter.com/marscuriosity .

For more information about the Multi-Mission Image Processing Laboratory, see: http://www-mipl.jpl.nasa.gov/mipex.html .

Guy Webster 818-354-6278
Jet Propulsion Laboratory, Pasadena, Calif.
guy.webster@jpl.nasa.gov

Aug 18, 2013
 

Verizon will offer the flagship HTC One on August 22.

(Credit: Verizon)

Verizon customers are finally going to get their hands on the flagship HTC One smartphone.

After months of begging and pleading, Verizon subscribers will be able to purchase the HTC One starting August 22. According to a tweet from the wireless carrier, the handset will sell for $199 with a two-year service agreement.

It is unclear what color options will be offered to customers, though we can likely expect the standard black and silver models. An exclusive blue version of the HTC One has been banging about the rumor mill for the last few weeks, but no carrier has officially laid claim to it yet.

A sign-up page is now live on Verizon’s Web site if you’re interested in learning more about the handset.

Source: CNET

Aug 15, 2013
 

A newly published study explores how the recently discovered Higgs boson may provide a possible “portal” to physics that will help explain some of the attributes of dark energy.

One of the biggest mysteries in contemporary particle physics and cosmology is why dark energy, which is observed to dominate the energy density of the universe, has a remarkably small (but not zero) value. This value is so small that it is perhaps 120 orders of magnitude less than would be expected based on fundamental physics.
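For a rough sense of where that number comes from (order-of-magnitude values only, not a calculation from the paper): a naive quantum field theory estimate that counts vacuum energy up to the Planck scale gives

```latex
\rho_{\mathrm{vac}}^{\mathrm{naive}} \sim M_{\mathrm{Pl}}^{4}
  \sim (10^{19}\,\mathrm{GeV})^{4} = 10^{76}\,\mathrm{GeV}^{4},
\qquad
\rho_{\Lambda}^{\mathrm{obs}} \sim 10^{-47}\,\mathrm{GeV}^{4},
\qquad
\frac{\rho_{\mathrm{vac}}^{\mathrm{naive}}}{\rho_{\Lambda}^{\mathrm{obs}}} \sim 10^{123},
```

a mismatch of roughly 120 orders of magnitude.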

Resolving this problem, often called the cosmological constant problem, has so far eluded theorists.

Now, two physicists – Lawrence Krauss of Arizona State University and James Dent of the University of Louisiana-Lafayette – suggest that the recently discovered Higgs boson could provide a possible “portal” to physics that could help explain some of the attributes of the enigmatic dark energy, and help resolve the cosmological constant problem.

In their paper, “Higgs Seesaw Mechanism as a Source for Dark Energy,” Krauss and Dent explore how a possible small coupling between the Higgs particle, and possible new particles likely to be associated with what is conventionally called the Grand Unified Scale – a scale perhaps 16 orders of magnitude smaller than the size of a proton, at which the three known non-gravitational forces in nature might converge into a single theory – could result in the existence of another background field in nature in addition to the Higgs field, which would contribute an energy density to empty space of precisely the correct scale to correspond to the observed energy density.

The paper was published online, Aug. 9, in Physical Review Letters.

Current observations of the universe show it is expanding at an accelerated rate. But this acceleration cannot be accounted for on the basis of matter alone. Putting energy in empty space produces a repulsive gravitational force opposing the attractive force produced by matter, including the dark matter that is inferred to dominate the mass of essentially all galaxies, but which doesn’t interact directly with light and, therefore, can only be estimated by its gravitational influence.

Because of this phenomenon and because of what is observed in the universe, it is thought that such ‘dark energy’ contributes up to 70 percent of the total energy density in the universe, while observable matter contributes only 2 to 5 percent, with the remaining 25 percent or so coming from dark matter.

The source of this dark energy and the reason its magnitude matches the inferred magnitude of the energy in empty space is not currently understood, making it one of the leading outstanding problems in particle physics today.

“Our paper makes progress in one aspect of this problem,” said Krauss, a Foundation Professor in ASU’s School of Earth and Space Exploration and Physics, and the director of the Origins Project at ASU. “Now that the Higgs boson has been discovered, it provides a possible ‘portal’ to physics at much higher energy scales through very small possible mixings and couplings to new scalar fields which may operate at these scales.”

“We demonstrate that the simplest small mixing, related to the ratios of the scale at which electroweak physics operates, and a possible Grand Unified Scale, produces a possible contribution to the vacuum energy today of precisely the correct order of magnitude to account for the observed dark energy,” Krauss explained. “Our paper demonstrates that a very small energy scale can at least be naturally generated within the context of a very simple extension of the standard model of particle physics.”
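In round numbers, the seesaw combination Krauss describes looks something like the following back-of-envelope estimate (an illustration of the scales involved, not the paper’s detailed calculation):

```latex
m \sim \frac{v_{\mathrm{EW}}^{2}}{M_{\mathrm{GUT}}}
  \sim \frac{(10^{2}\,\mathrm{GeV})^{2}}{10^{16}\,\mathrm{GeV}}
  \sim 10^{-12}\,\mathrm{GeV} \sim 10^{-3}\,\mathrm{eV},
\qquad
\rho_{\Lambda} \sim m^{4} \sim (10^{-3}\,\mathrm{eV})^{4},
```

which lands at the milli-electronvolt scale whose fourth power is the same order of magnitude as the observed dark energy density.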

While the result is a possible advance in understanding the origin of dark energy, Krauss said the construct is only one step in the direction of understanding its mysteries.

“The deeper problem of why the known physics of the standard model does not contribute a much larger energy to empty space is still not resolved,” he said.

Publication: Lawrence M. Krauss & James B. Dent, “Higgs Seesaw Mechanism as a Source for Dark Energy,” Phys. Rev. Lett. 111, 061802, 2013; 10.1103/PhysRevLett.111.061802

PDF Copy of the Study: A Higgs–Saw Mechanism as a Source for Dark Energy

Source: Arizona State University

Aug 14, 2013
 

ESA’s Mars Express has captured new images of a region on Mars known as Sulci Gordii, which lies about 200 km east of Olympus Mons.

Giant landslides, lava flows and tectonic forces are behind this dynamic scene captured recently by ESA’s Mars Express of a region scarred by the Solar System’s largest volcano, Olympus Mons.

The image was taken on 23 January by the spacecraft’s high-resolution stereo camera, and focuses on a region known as Sulci Gordii, which lies about 200 km east of Olympus Mons.

Sulci Gordii is an ‘aureole’ deposit – from the Latin for ‘circle of light’ – and is one of many that form a broken ring around the giant volcano, as hinted at in the context map.

Sulci Gordii on Mars

Sulci Gordii is one of many similar features that form a broken ring around the volcano, formed during giant collapse and landslide events on the flanks of Olympus Mons. Credit: NASA MGS MOLA Science Team

The aureoles tell the story of the catastrophic collapse of the lower flanks of Olympus Mons in its distant past. Today, it stands with steep cliff edges that rise 2 km above the surrounding plains.

The collapse was brought about by weakening in the rocks supporting the volcanic edifice, perhaps influenced by subsurface water. During the collapse, rocky debris slid down and out over hundreds of kilometers of the surrounding volcanic plains, giving rise to the rough-textured aureole seen today.

Similar avalanches of debris are also seen surrounding some volcanoes on Earth, including Mauna Loa in Hawaii, which, like Olympus Mons, is a smooth-sided ‘shield’ volcano built up from successive lava flows.

The smooth plains surrounding Sulci Gordii suggest that the massive landslide was later partially buried by lava flows. Indeed, faint outlines of ancient lava flows can be seen by zooming into the upper center-left portion of the lead high-resolution image.

Image of Western Limb of Sulci Gordii

This image focuses on a region on the western limb of Sulci Gordii (top center-right on the corresponding main image). It shows clearly in the foreground the near-parallel characteristic of the ridges and valleys that define geological features called sulci. Close inspection of the ridges reveals dark streaks along their faces, evidence of numerous small landslides of rocky and dusty debris. Sulci Gordii is an aureole deposit resulting from the dramatic collapse of the flank of Olympus Mons in its distant past. Credit: ESA/DLR/FU Berlin (G. Neukum)

The characteristic corrugated appearance of the ‘sulci’ – a geological term used to describe roughly parallel hills and valleys on Mars – likely resulted during the landslide as material slid away from the volcano and became compressed or pulled apart as it traveled across the surface. Over time, erosion of weaker material between the peaks accentuated this effect.

The corrugated effect is best seen in the close-up perspective views. Zooming in on these images reveals that the hills and ridges are also covered by fine wind-blown dust, and that many small-scale landslides have occurred down the sides of the valleys between them.

Similarly, on close inspection of the smooth plains, subtle ripples in the martian dust blanket can be seen. Here, thin undulating dunes have been whipped into shape by the prevailing wind.

Image of Channels and Fractures in Sulci Gordii

This perspective view focuses on the southernmost portion of Sulci Gordii, which highlights jagged fractures and fault lines, as well as some sinuous channels that were likely widened by short-lived lava flows or water. In the foreground to the left, a channel can be seen that is abruptly truncated by a tectonic fault. Another channel in the center foreground has also clearly undergone a complex fracturing history. To the upper right, a few rocky blocks appear like islands in a sea of ancient lava plains, with the ‘shoreline’ at the top of the image part of the ridge and valley system of Sulci Gordii. Credit: ESA/DLR/FU Berlin (G. Neukum)

Numerous sinuous channels and jagged fracture networks also crisscross the scene, in particular at the southern (left) end of the main image and in close-up in the perspective view above. The channels range in length from around 50 km to 300 km and were probably widened by short-lived lava flows, or perhaps even by water.

An impressive sight on the left side of the perspective view is a sinuous channel that is suddenly truncated by a tectonic fault. Another channel running across the center foreground clearly has a complex fracturing history.

Sulci Gordii Topography Image

This color-coded overhead view is based on a digital terrain model from the High Resolution Stereo Camera on ESA’s Mars Express, covering the Sulci Gordii region of Mars, which lies about 200 km east of Olympus Mons. Credit: ESA/DLR/FU Berlin (G. Neukum)

In rougher terrain towards the south (top center-right of the main image), tectonic forces have torn apart the martian crust, most clearly visible in the color-coded topography map.

By studying complex regions like this – and by comparing them to similar examples here on Earth – planetary scientists learn more about the geological processes that dominated ancient Mars, when it was an active planet.

Just as on Earth, the scene at Sulci Gordii tells us that volcanoes can suffer dramatic collapses that transport vast quantities of material across hundreds of kilometres, where it is subsequently sculpted by wind, water and tectonic forces.

Image of Sulci Gordii in 3D

Data from the nadir channel and one stereo channel of the High Resolution Stereo Camera on ESA’s Mars Express have been combined to produce this anaglyph 3D image of Sulci Gordii that can be viewed using stereoscopic glasses with red–green or red–blue filters. Sulci Gordii is an aureole deposit resulting from the dramatic collapse of the flank of Olympus Mons in its distant past. Credit: ESA/DLR/FU Berlin (G. Neukum)

Source: European Space Agency

Aug 14, 2013
 

A newly published animal study from the University of Michigan shows high electrical activity in the brain after clinical death, providing the first scientific framework for near-death experiences.

Ann Arbor, Michigan — The “near-death experience” reported by cardiac arrest survivors worldwide may be grounded in science, according to research at the University of Michigan Health System.

Whether and how the dying brain is capable of generating conscious activity has been vigorously debated.

Researchers Provide the First Scientific Framework for Near Death Experiences

But in this week’s PNAS Early Edition, a U-M study shows that shortly after clinical death, in which the heart stops beating and blood stops flowing to the brain, rats display brain activity patterns characteristic of conscious perception.

“This study, performed in animals, is the first dealing with what happens to the neurophysiological state of the dying brain,” says lead study author Jimo Borjigin, Ph.D., associate professor of molecular and integrative physiology and associate professor of neurology at the University of Michigan Medical School.

“It will form the foundation for future human studies investigating mental experiences occurring in the dying brain, including seeing light during cardiac arrest,” she says.

Approximately 20 percent of cardiac arrest survivors report having had a near-death experience. These visions and perceptions have been called “realer than real,” according to previous research, but it remains unclear whether the brain is capable of such activity after cardiac arrest.

“We reasoned that if near-death experience stems from brain activity, neural correlates of consciousness should be identifiable in humans or animals even after the cessation of cerebral blood flow,” she says.

Researchers analyzed the recordings of brain activity called electroencephalograms (EEGs) from nine anesthetized rats undergoing experimentally induced cardiac arrest.

Within the first 30 seconds after cardiac arrest, all of the rats displayed a widespread, transient surge of highly synchronized brain activity that had features associated with a highly aroused brain.
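“Highly synchronized” activity is typically quantified with measures such as spectral coherence between pairs of EEG channels. The snippet below is only an illustration of that kind of measurement on synthetic data (two noisy channels sharing a 40 Hz gamma rhythm); it is not the study’s actual analysis pipeline.

```python
import numpy as np
from scipy.signal import coherence

# Synthetic example: two EEG channels sampled at 500 Hz that share a 40 Hz gamma rhythm.
fs = 500.0
rng = np.random.default_rng(0)
t = np.arange(0, 30, 1 / fs)
shared_gamma = np.sin(2 * np.pi * 40 * t)
ch1 = shared_gamma + 0.5 * rng.standard_normal(t.size)
ch2 = shared_gamma + 0.5 * rng.standard_normal(t.size)

# Magnitude-squared coherence: 1.0 means the channels are perfectly phase-locked
# at that frequency, 0.0 means they are unrelated.
f, cxy = coherence(ch1, ch2, fs=fs, nperseg=1024)
gamma_band = (f >= 25) & (f <= 55)
print(f"Mean gamma-band coherence: {cxy[gamma_band].mean():.2f}")
```

A surge like the one reported would show up as a jump in band-limited coherence across many channel pairs in the seconds after arrest.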

Furthermore, the authors observed nearly identical patterns in the dying brains of rats undergoing asphyxiation.

“The prediction that we would find some signs of conscious activity in the brain during cardiac arrest was confirmed with the data,” says Borjigin, who conceived the idea for the project in 2007 with study co-author neurologist Michael M. Wang, M.D., Ph.D., associate professor of neurology and associate professor of molecular and integrative physiology at the U-M.

“But, we were surprised by the high levels of activity,” adds study senior author anesthesiologist George Mashour, M.D., Ph.D., assistant professor of anesthesiology and neurosurgery at the U-M. “In fact, at near-death, many known electrical signatures of consciousness exceeded levels found in the waking state, suggesting that the brain is capable of well-organized electrical activity during the early stage of clinical death.”

The brain is assumed to be inactive during cardiac arrest. However, the neurophysiological state of the brain immediately following cardiac arrest had not been systematically investigated until now.

The current study resulted from collaboration between the labs of Borjigin and Mashour, with U-M physicist UnCheol Lee, Ph.D., playing a critical role in analysis.

“This study tells us that reduction of oxygen or both oxygen and glucose during cardiac arrest can stimulate brain activity that is characteristic of conscious processing,” says Borjigin. “It also provides the first scientific framework for the near-death experiences reported by many cardiac arrest survivors.”

Additional University of Michigan authors: Tiecheng Liu, Dinesh Pal, Sean Huff, Daniel Klarr, Jennifer Sloboda and Jason Hernandez.

Funding: The work of George Mashour, M.D., Ph.D., was supported by National Institutes of Health Grant GM098578 and the James S. McDonnell Foundation.

Publication: Jimo Borjigin, et al., “Surge of neurophysiological coherence and connectivity in the dying brain,” PNAS, August 12, 2013; doi: 10.1073/pnas.1308285110

Source: University of Michigan Health System

Aug 14, 2013
 

Call of Duty: Ghosts is the next big installment in Activision’s shooter franchise. Infinity Ward revealed the first details and footage of the multiplayer mode today at an event in Los Angeles.

(Watch the first gameplay trailer below.)

The game is actually coming with a surprising number of changes, while bringing back some stuff like the “pick-10” load-out customization from Black Ops 2.

New Modes include:

Search and Rescue mode allows players to revive fallen teammates—something gamers have been requesting for quite some time now. (Update: some players inform me they hate revival, which is probably why it’s just in one mode. Others have told me they love it. Surprisingly, there are differing opinions re: its merits.)

Cranked mode gives players a number of temporary boosts after a kill but starts a timer. If you don’t get another kill before the clock runs out, you explode. This promises to be a tense and hilarious mode that I will play just because I know I’ll explode a lot and want to see that in action.

Squad Mode is pretty neat. You can create 10 soldiers that form your “squad.” You can play in this mode solo, co-op, or competitively, and your squad can be “challenged” even while you’re not playing, earning you experience. Squad mode allows you to swap in AI or other actual human players at will. I like the idea of squad-based solo play, to be honest.

More customization of weapons and soldiers means a much wider variety of characters onscreen. You can change the soldiers’ appearance rather than just their loadout and gear, which should make for a pretty interesting and chaotic experience.

Dynamic Maps also sound interesting. Traps should shake things up. And the move away from so much aerial assault nonsense is plenty welcome.

Clans are being overhauled, giving the Call of Duty experience an almost MMO-like feel. You can start your own clan or join one and then participate in two-week-long “Clan Wars” that are basically wars for dominance over different territories. Winning in this mode unlocks exclusive gear.

Activision has some mobile apps to tie all of this together.

But perhaps the biggest reveal today is the addition of female soldiers, a first for the Call of Duty franchise. That revelation comes in a pretty cool moment at the end of the reveal trailer, when the sniper that just brought down a bunch of enemies and structures turns out to be a woman. This is a great, long overdue, addition to the game.

“This is the biggest overhaul of multiplayer since the original Call of Duty: Modern Warfare,” says Mark Rubin, the studio producer at Infinity Ward.

Activision also revealed a Season Pass that will include 4 DLC packs. If you upgrade to a next-gen console but purchased your Season Pass for a current system you can transfer the DLC at no additional charge, so long as it’s within the same console family.

Here’s the reveal trailer—with a foul-mouthed Eminem rapping over the top of it:

More to come.

P.S. Blatantly misogynistic comments will be scorned and then deleted. A part of me would like to let them stand because, let’s face it, they just make you look immature and pathetic, but I’d rather not have examples of why sexism is still a problem in video games crowding the combox.

Aug 13, 2013
 
Even if your company operates on a shoestring budget, you can grow your IT to meet your requirements and help make your business successful.

 

You’re a small business and you have the budget to prove it. The problem is, you need to expand your IT. Without such an expansion, you can’t grow. How do you get around the budget-lock? You get creative. That’s one of the beauties of technology: It’s there for you to use and to use in a way that benefits you. Of course, nearly every piece of technology has its recommended usages — but that doesn’t mean you can’t bend the rules a bit or just add some new policies to help your business IT grow.

I’ve come up with 10 creative ways you can expand your company’s IT without having to blow your budget wide open. Some of these ideas can be implemented with little to no effort, whereas some will require some serious change. Either way, the end result is the same.

1: Open source

This should be a no-brainer. Your IT budget is limited and you need more of just about everything. Though open source can’t easily help you with hardware, it can do wonders for you on the software side of things. Those older machines? Slap a lightweight Linux distribution on them. The newer machines? Opt for LibreOffice instead of Microsoft Office. There are so many ways in which open source can help you — even beyond the desktop. Install Linux on a desktop machine or even put it to work as an in-house server you can use in a multitude of ways.

2: CRM/CMS/HRM

One of the best-kept (non) secrets of midsize to large businesses is that they manage their workflow with the help of CRM (customer relationship management), CMS (content management system), and HRM (human resource management) tools. Part of that “secret” is that there are plenty of cost-effective solutions that can meet (and exceed) those needs. Try the likes of OrangeHRM, Drupal, and openCRX. Each of these tools offers tremendous power, at zero software cost, that can enable your company to expand in ways you probably never thought possible. And you don’t always have to use the tools exactly as outlined. For example, the Drupal CMS platform is (with the help of plugins) an outstanding tool for creating a powerful company Web site.

3: Crowd-source development

One of the nice things about open source is that it’s possible to get people involved in your project. This, of course, isn’t limited to open source – but it’s a great place to start. If you have a specific need for a project, or if you have a feature you’d like to get rolled into an existing project, reach out! I have done this on a number of occasions — contacted developers and asked for a feature to be added. Sometimes it works and sometimes it doesn’t. You can always host your project on Google Code, which offers free hosting for collaborative, open source projects. Other services, such as the Zoho Marketplace, allow you to post your requirements, and developers will submit proposals to build your app.

4: BYOD

BYOD is not new, nor is it all that creative. But for many smaller companies, it can be a real boon for getting technology in the hands of employees. This is especially true when you’d like to have the power and flexibility of tablets and other mobile devices. This doesn’t mean you simply tell your employees, “If you want to use a computer, bring your own!” Instead, you let them know it’s okay for them to bring their own devices to add a level of familiarity to their everyday usage. You will want to make sure that all devices brought in meet certain criteria (e.g., all Windows-based devices must have antivirus and anti-malware).

5: Google Apps or Zoho for business productivity

Google Apps is quickly becoming a standard by which businesses measure cloud-based software, but Zoho offers a host of software and services that can do wonders to expand your business. Zoho offers tools like invoicing, email/social marketing campaigns, CRM, bug tracking, reports, recruiting, and finances.

6: Cloud-source backups

Maybe you won’t be backing up a server’s worth of data, but you can use the likes of Dropbox, SpiderOak, and Ubuntu One to sync your data to multiple computers. It’s not a be-all, end-all backup solution (I would add some form of local backup as well). But if disaster strikes, you can at least rest assured that certain folders and files can be retrieved easily. You can even get away with the free version of these tools. Although you are limited to 2 to 5 GB of data per service, you can get creative by installing multiple cloud-based tools and having them each sync different folders.
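As a small illustration of the “different folders to different services” trick, here is a sketch in which the folder paths and sync-client locations are placeholders to adjust for your own setup; it simply stages copies of a few important folders into the local folders of two different sync clients.

```python
import shutil
from pathlib import Path

# Placeholder mapping: spread folders across the free tiers of several sync clients.
SYNC_TARGETS = {
    Path.home() / "Documents" / "invoices":  Path.home() / "Dropbox" / "invoices",
    Path.home() / "Documents" / "contracts": Path.home() / "SpiderOak Hive" / "contracts",
}

def stage_backups() -> None:
    """Copy each watched folder into its sync client's local folder."""
    for source, target in SYNC_TARGETS.items():
        target.parent.mkdir(parents=True, exist_ok=True)
        # dirs_exist_ok lets repeated runs refresh an existing copy (Python 3.8+).
        shutil.copytree(source, target, dirs_exist_ok=True)
        print(f"Staged {source} -> {target}")

if __name__ == "__main__":
    stage_backups()
```

Run something like this from a scheduled task and each service quietly carries its own slice of your data offsite.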

7: Interns

This is a rather touchy subject, but for some companies, bringing in undergraduate interns can help on a number of levels. First, you’re bringing in new ideas. These students are typically just about to come out of their CIS or Comp Sci programs and need the internship hours. This means you get fresh minds, with fresh ideas, at a pittance. This isn’t taking advantage of a system, because both sides have a need. Just make sure you don’t work your interns too much or ask more from them than originally agreed upon.

8: Social networking

Social networking can play a huge role in expanding your IT. If you remove the “social” aspect of social networking, you’re left with “networking.” Being able to network means you have a large resource for help and information. If you’re stuck with a problem, get on Facebook, LinkedIn, or Twitter and try to get help. I realize that anyone in the IT industry knows that the classroom and Google are your best friends — but honestly, sometimes connecting with others is better than scouring Google or the Microsoft Knowledge Base.

9: Resisting lock-in

Don’t fall for lock-in. Microsoft and other big companies are going to do everything they can to lock you into their products. The problem is, once you’re locked in, it’s a costly endeavor to get unlocked. Instead of falling for the typical tactics of the big software companies, understand that the world of computing has become very homogeneous. This is especially true as everything migrates to Web-based and cloud-based platforms. At some point in the near future, the operating system is going to be an afterthought. Keep this in mind as you begin purchasing new hardware and software. Avoid lock-in, and expansion will be much easier.

10: Agility

“Expand by remaining agile” might sound like a buzz-filled catch phrase. But when you give it some thought, one of the most remarkable characteristics of small businesses is that their size lends them an agility that big business doesn’t have. By remaining small, you remain agile. And if you apply this to your IT, you will continue to operate that way. So in the end, thinking small can really be thinking big.

Aug 13, 2013
 
Virtualization delivers a host of benefits — but that doesn’t mean that everything is a good fit for a virtual environment. Here are 10 things that should probably stay physical.

Virtualization provides a solid core of benefits — cost savings, system consolidation, better use of resources, and improved administrative capabilities — but it’s important to remember that supporting the goals of the business is the reason IT departments exist in the first place. Virtualizing everything as far as the eye can see without analyzing the consequences is like what comedian Chris Rock said about driving a car with your feet: You can do it, but that doesn’t make it a good idea.

The first step in any virtualization strategy should involve envisioning disaster recovery if you put all your eggs in the proverbial basket. Picture how you would need to proceed if your entire environment were down — network devices, Active Directory domain controllers, email servers, etc. What if you’ve set up circular dependencies that will lock you out of your own systems? For instance, if you configure VMware’s vCenter management server to depend on Active Directory for authentication, it will work fine so long as you have a domain controller available. But if your virtualized domain controller is powered off, that could be a problem. Of course, you can set up a local logon account for vCenter or split your domain controllers between virtual and physical systems, but the above situation represents a good example of how it might be possible to paint yourself into a corner.
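One cheap safeguard is to write the bring-up dependencies down and check them for cycles before you virtualize. The sketch below is purely illustrative (the service names and edges are an assumed example, not anything VMware-specific); it flags the vCenter/Active Directory loop described above.

```python
# Illustrative only: model startup/management dependencies between infrastructure
# services and flag loops like "vCenter needs AD, but AD is a VM powered on via vCenter."
from typing import Dict, List

DEPENDS_ON: Dict[str, List[str]] = {
    "vcenter": ["active_directory"],      # vCenter logons authenticate against AD
    "active_directory": ["vm_platform"],  # the domain controller is itself a VM
    "vm_platform": ["vcenter"],           # VMs are normally powered on via vCenter
}

def find_cycle(graph: Dict[str, List[str]]) -> List[str]:
    """Return one dependency cycle if present, else an empty list."""
    def visit(node: str, path: List[str]) -> List[str]:
        if node in path:
            return path[path.index(node):] + [node]
        for dep in graph.get(node, []):
            cycle = visit(dep, path + [node])
            if cycle:
                return cycle
        return []

    for start in graph:
        cycle = visit(start, [])
        if cycle:
            return cycle
    return []

print(" -> ".join(find_cycle(DEPENDS_ON)))  # vcenter -> active_directory -> vm_platform -> vcenter
```

If the script prints a loop, that is your cue to keep one link in the chain physical or to add a break-glass path such as a local logon account.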

In my experience, some things just aren’t a good fit for a virtual environment. Here is my list of 10 things that should remain physical entities.

1: Anything with a dongle/required physical hardware

This one is a no-brainer, and it’s been repeated countless times elsewhere, but — like fire safety tips — just because it may be a well-known mantra doesn’t make it less significant. Believe it or not, some programs out there still require an attached piece of hardware (such as a dongle) to work. This piece of hardware is required by licensing for the program to work properly (to prevent piracy, for instance).

Case in point: An HVAC system for a client of mine ran on a creaking old desktop. The heating-and-cooling program required a serial-attached dongle to administer the temperature, fans, etc. We tried valiantly to virtualize this system in a VMware ESXi 4.0 environment, using serial-port pass-through and even a USB adapter, but no luck. (I have heard this may work in ESXi 5.) Ironically, VMware Workstation, which did allow the pass-through functionality, would have handled this better than the ESXi environment. But there was little point in hosting a VM on a PC, so we rebuilt the physical system and moved on.

This rule also applies to network devices like firewalls that use ASICs (application-specific integrated circuits) and switches that use GBICs (gigabit interface converters). I have not found any solid information on how these could be moved into a virtual environment. Even if you think you might cobble something together to get it to work, is a one-off setup like that really worth the risk of downtime and the administrative headaches?

2: Systems that require extreme performance

A computer or application that gobbles up RAM, disk I/O, and CPU (or requires multiple CPUs) may not be a good candidate for virtualization. Examples include video streaming, backup, database, and transaction processing systems; these are all physical boxes at my day job for this reason. Because a virtual machine runs in a “layer” on its host system, there will always be some performance sacrificed to the overhead involved, and that sacrifice likely tips the balance in favor of keeping such systems physical.

You might mitigate the issue by using a dedicated host running just the one program or server, but that detracts from the main advantage of virtualization: running many images on a single physical server.
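
If you’re on the fence about a particular box, measuring it is cheaper than guessing. Here is a rough Python sketch using the third-party psutil library; the utilization thresholds are arbitrary assumptions, and in practice you would sample over days of real workload rather than a few seconds.

    # Rough screening sketch: sample a machine's utilization and flag it if it
    # looks too hungry to be a comfortable virtualization candidate.
    # The thresholds are arbitrary assumptions; tune them for your environment.
    import psutil  # third-party package: pip install psutil

    CPU_LIMIT_PCT = 70   # sustained CPU above this suggests keeping it physical
    RAM_LIMIT_PCT = 75   # same idea for memory pressure

    def looks_like_a_vm_candidate(sample_seconds=5):
        cpu = psutil.cpu_percent(interval=sample_seconds)
        ram = psutil.virtual_memory().percent
        print(f"CPU {cpu:.0f}%, RAM {ram:.0f}% over a {sample_seconds}s sample")
        return cpu < CPU_LIMIT_PCT and ram < RAM_LIMIT_PCT

    if __name__ == "__main__":
        if looks_like_a_vm_candidate():
            print("Looks like a reasonable candidate for virtualization")
        else:
            print("Probably keep this one physical (or test carefully first)")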

3: Applications/operating systems with license/support agreements that don’t permit virtualization

This one is fairly self-explanatory. Check the license and support contract for anything before you virtualize it. You may find that you can’t do that per the agreement, or if you proceed you’ll be out of luck when it comes time to call support.

If it’s a minor program that just prints out cubicle nameplates and the support agreement doesn’t cover (or mention) virtualized versions, you might weigh the risk and proceed. If it’s something mission critical, however, pay heed and leave it physical.

Which brings me to my next item…

4: Anything mission critical that hasn’t been tested

You probably wouldn’t take your mortgage payment to Las Vegas, put it down at the roulette table, and bet on black. For that matter, you definitely wouldn’t gamble it all on number 7. The same goes for systems or services your company needs to stay afloat that you haven’t tested on a virtual platform. Test first, even if it takes time. Get a copy of the source (use Symantec Ghost or Acronis True Image to clone it if you can). Then, develop a testing plan and ensure that all aspects of the program or server work as expected. Do this during off-hours if needed. Believe me, finding problems at 11 PM on a Wednesday night is far preferable to finding them at 9 AM Thursday. Always leave the original source as is (merely shut it off, but don’t disconnect, remove, or uninstall it) until you’re sure the new destination works as you and your company anticipate. There’s never a hurry when it comes to tying up loose ends.

5: Anything on which your physical environment depends

There are two points of failure for any virtual machine — itself and its host. If you have software running on a VM that unlocks your office door when employees swipe their badges against a reader, that’s going to allow them in only if both the VM and its parent system are healthy.

Picture arriving at work at 8 AM Monday to find a cluster of people outside the office door. “The badge reader isn’t accepting our IDs!” they tell you. You deduce that a system somewhere in the chain is down. Now what? Hope your master key isn’t stored in a lockbox inside the data center, or you’ll have to call your security software vendor. Meanwhile, as employees depart for Dunkin’ Donuts to let you sort out the mess, the lost labor quickly piles up.

It may not just be security software and devices at stake here. I have a client with a highly evolved VMware environment utilizing clustering and SAN storage. And yet if they clone four virtual machines simultaneously, their virtualized Exchange 2010 Client Access Server starts jittering, even though it runs on another host with a separate datastore. That server is being converted to a physical system to resolve the issue. Yes, further tweaking and analysis could probably fix this, but in my client’s view, solid Exchange connectivity is too valuable to experiment with behind the scenes and hope for the best.

6: Anything on which your virtual environment depends

As I mentioned in the introduction, a circular dependency (such as a virtual domain controller being required to log into the virtual environment) puts you at great risk once the inevitable downtime arrives; yes, even in clustered, redundant environments that day will come. Power is the big wildcard here, and if you live in the Northeast like me, I bet you’ve seen more than your share of outages over the past five years.

I grouped this separately from the previous item because it requires a different way of thinking. Whereas you need to figure out the layout of your physical environment to keep the video cameras up and running, you need to map out your virtual environment, including the host systems, virtual images, authentication, network, storage, and even electrical connectivity. Take each item out of the mix and then figure out what the impact will be. Set up physically redundant systems (another domain controller, for instance) to cover your bases.
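
That paper exercise can also be scripted. The Python sketch below walks a made-up inventory and reports what else goes down when each component fails; the component names and dependencies are illustrative assumptions, not a real environment map.

    # Illustrative what-if sketch: for each component, assume it fails and list
    # every service that fails along with it because of direct or indirect
    # dependencies. The inventory below is a made-up example.
    dependencies = {
        "Exchange": ["ESXi host A", "SAN", "Active Directory"],
        "Active Directory": ["ESXi host B", "SAN"],
        "Badge system": ["Active Directory", "ESXi host A"],
        "ESXi host A": ["Power", "Network"],
        "ESXi host B": ["Power", "Network"],
        "SAN": ["Power", "Network"],
    }

    def impacted_by(failed, graph):
        """Return the set of services that go down if `failed` goes down."""
        down = {failed}
        changed = True
        while changed:
            changed = False
            for service, deps in graph.items():
                if service not in down and any(d in down for d in deps):
                    down.add(service)
                    changed = True
        return down - {failed}

    for component in ("Power", "SAN", "ESXi host A", "Active Directory"):
        hit = ", ".join(sorted(impacted_by(component, dependencies))) or "nothing else"
        print(f"If {component} fails, you also lose: {hit}")

Anything that shows up on nearly every line, such as power or storage, deserves physical redundancy of its own.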

7: Anything that must be secured

This is slightly different from rule #5. Any system containing sensitive information that you do not want other staff to access may be a security risk if virtualized. You can set permissions on virtual machines to restrict others from controlling them, but if those staff members can control the host systems, your controls might be circumvented; they might still be able to copy the VMware files elsewhere, shut down the server, and so on.

The point of this is not to say you should be suspicious of your IT staff, but there may be compliance guidelines or regulations that prohibit anyone other than your group from maintaining control of the programs/data/operating system involved.

8: Anything for which time sync is critical

Time synchronization works in a virtual environment. VMware, for instance, can sync time on a virtual machine with the host ESX server via the VMware Tools application, and of course the guest operating systems themselves can be configured for time sync. But what if those settings are lost or the host ESX server’s time is wrong? I saw the latter issue just a few weeks back: a set of virtual images had to run on GMT for their processing software to work, but the ESX host time was incorrect, leading to a frustrating ordeal trying to figure out why the time on the virtual systems wouldn’t stick.

This problem can be reined in by ensuring all physical hosts use NTP to standardize their clocks, but mistakes can still occur, and settings can be lost or forgotten after a reboot. I’ve seen this happen on several other occasions in the VMware ESX realm, such as after patching. If a system absolutely has to have the correct time, it may be better to keep it off the virtual stage.
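
A simple drift check can at least tell you when clocks start wandering. Here is a small Python sketch using the third-party ntplib package; the server name and the tolerance are assumptions you would replace with your own, and in practice you would run something like this from monitoring against every host and time-sensitive guest.

    # Small drift-check sketch: compare this machine's clock against an NTP
    # server and warn if the offset exceeds a tolerance. The server and the
    # tolerance below are assumptions, not recommendations.
    import ntplib  # third-party package: pip install ntplib

    NTP_SERVER = "pool.ntp.org"   # example server; use your internal NTP source
    MAX_DRIFT_SECONDS = 1.0       # arbitrary tolerance; tighten for picky apps

    def check_drift():
        client = ntplib.NTPClient()
        response = client.request(NTP_SERVER, version=3, timeout=5)
        drift = response.offset   # local clock minus server clock, in seconds
        status = "OK" if abs(drift) <= MAX_DRIFT_SECONDS else "WARNING"
        print(f"{status}: local clock is off by {drift:+.3f}s versus {NTP_SERVER}")
        return drift

    if __name__ == "__main__":
        check_drift()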

9: Desktops that are running just fine

In the push for VDI (virtualized desktop infrastructure), some companies may get a bit overzealous in defining “what should be virtualized” as “anything that CAN be virtualized.”  If you’ve got a fleet of PCs two or three years old, don’t waste time converting them into VDI systems and replacing them with thin clients. There’s no benefit or cost savings to that plan, and in fact it’s a misuse of the benefits of virtualization.

It’s a different story with older PCs that are sputtering along, or systems that are maxed out and need more juice under the hood. But otherwise, if it ain’t broke, don’t fix it.

10: Anything that is already a mess… or something sentimental

On more than one occasion I’ve seen a physical box transformed into a virtual machine so it could then be duplicated and preserved. In some situations this has been helpful, but in others it has actually led to keeping an old, cluttered operating system around far longer than it should have been. For example, a Windows XP machine already several years old was turned into a virtual image. As is, it had gone through numerous software updates, removals, re-additions, and so on. Fast forward a few more years (and more OS changes), and it’s no surprise that this XP system is now experiencing strange CPU overload issues and horrible response times. A new system is being built from scratch to replace it entirely. The better bet would have been to create a brand new image from the start and install the necessary software in an orderly fashion, rather than bringing that banged-up OS online as a virtual system with all of its warts and blemishes.

The same goes for what I call “sentimental” systems. That label printing software that sits on an NT server and has been in your company for 15 years? Put it on an ice floe and wave good-bye. Don’t be tempted to turn it into a virtual machine to keep it around just in case (I’ve found “just in case” can be the three most helpful and most detrimental words in IT) unless there is absolutely 0% chance of replacing it. However, if this is the case, don’t forget to check rule #3!

Bonus: The physical machines hosting the virtual systems

I added this one in tongue-in-cheek fashion, of course. It’s intended as a reminder that you must still plan to buy physical hardware and know your server specs, performance and storage needs, network connectivity, and other details to keep the hosts, and consequently the virtual systems, in tip-top shape. Make sure you understand the differences between what the hosts need and what the images need, and keep researching and reviewing the latest updates from your virtualization providers.
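
Even a back-of-the-envelope calculation helps here. The Python sketch below totals up what a set of planned guests would need and compares it with a candidate host; the VM list, host specs, and overcommit ratios are purely illustrative assumptions.

    # Back-of-the-envelope capacity sketch: total up what the planned guests
    # need and compare it against a candidate host's specs. The VM list, host
    # specs, and overcommit ratios are illustrative assumptions only.
    planned_vms = [
        {"name": "dc01",   "vcpus": 2, "ram_gb": 8,  "disk_gb": 80},
        {"name": "file01", "vcpus": 4, "ram_gb": 16, "disk_gb": 500},
        {"name": "app01",  "vcpus": 8, "ram_gb": 32, "disk_gb": 200},
    ]

    host = {"cores": 16, "ram_gb": 64, "disk_gb": 2000}
    CPU_OVERCOMMIT = 3.0   # vCPUs per physical core you are willing to tolerate
    RAM_OVERCOMMIT = 1.0   # memory overcommit is riskier; 1.0 means none

    need_vcpus = sum(vm["vcpus"] for vm in planned_vms)
    need_ram = sum(vm["ram_gb"] for vm in planned_vms)
    need_disk = sum(vm["disk_gb"] for vm in planned_vms)

    print(f"vCPUs: need {need_vcpus}, host offers {host['cores'] * CPU_OVERCOMMIT:.0f} with overcommit")
    print(f"RAM:   need {need_ram} GB, host offers {host['ram_gb'] * RAM_OVERCOMMIT:.0f} GB")
    print(f"Disk:  need {need_disk} GB, host offers {host['disk_gb']} GB")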

Conclusion

As times change, these rules might change as well. Good documentation, training, and an in-depth understanding of your environment are crucial to planning the best balance of physical and virtual computing. Virtualization is a thing of beauty. But if a physical host goes down, the impact can be harsh — and might even make you long for the days of “one physical server per function.” As is always the case with any shiny new technology (cloud computing, for instance), figure out what makes sense for your company and its users and decide how you can best approach problems that can and will crop up.

Aug 132013
 
Microsoft is preparing to update Windows to version 8.1 and is offering us the chance to preview the changes before it is officially released.

 

This fact sheet will be continually updated with the latest details as we learn more about Windows 8.1 Preview. You can check back anytime and refresh this article to get the latest updates.

What we know

  • Prevalent caveat: Microsoft makes a point of offering this warning before you install Windows 8.1: This preview is mainly for experienced PC users, so if you’re not sure whether it’s right for you, read the FAQ.
  • Noteworthy caveat: You are required to have a personal Windows Live account or the enterprise equivalent in order to finish the installation.
  • Availability: You can download and install the Windows 8.1 Preview from the Windows Store for free. There is also a Windows 8.1 Preview for the Enterprise available for download.
  • Search integration: Windows 8.1 offers a single, unified search that returns results from your computer, your applications, and the web.
  • Updated basic apps: The standard Windows 8 apps have been updated and retooled in 8.1, including Mail, Photos, People, and Calendar.
  • Cloud storage: SkyDrive, rather than the local C: drive, is the default location for saving documents.
  • Internet Explorer upgrade: With Windows 8.1 you get the updated Internet Explorer 11.
  • Apps: One of the new apps included with 8.1 is Fresh Paint, an updated and modern interface version of the venerable Paint program.
  • Search improvements: Besides integrated search, Windows 8.1 also includes several new Bing apps, such as Bing Sports, Bing Travel, and Bing Health & Fitness.
  • Windows Store: The Windows Store has been redesigned to be simpler to use and to provide you with a better shopping experience. Apps should be easier to discover with 8.1.
  • Compatibility: Windows 8.1 is completely compatible with all Windows 7 apps, including Office 365.
  • Adaptable windows: In Windows 8.1, you can have up to four apps on the screen at the same time, and you can size and arrange those windows any way you choose.
  • Multi-monitor support: Windows 8.1 has a more coherent approach to support for multiple monitors, whether operating in the modern interface or on the traditional desktop.
  • Across devices: Personal settings for desktop backgrounds, favorites, documents etc. can synchronize across various Windows 8.1 devices.
  • Social connections: Windows 8.1 Preview expands on the concepts of social connections by integrating social features into Outlook, the Mail app, the People app, and Skype.
  • BYOD: Coupling 8.1 Preview with Windows Server allows more flexibility when managing personal devices in the enterprise.
  • Security: Enterprise-grade security is available through enhanced access control, data protection, and encryption.
  • Connectivity: Windows 8.1 Preview includes several improvements to connectivity, such as enhanced mobile broadband functionality, NFC-based tap-to-pair with enterprise printers, and native Miracast wireless display capabilities.
  • Annoyances: Many of the complaints about Windows 8 revolve around what I would classify as simply annoying. Windows 8.1 Preview fixes a few of these:
    • Shutdown/Restart/Sleep: A user can now shut down, restart, or put a PC to sleep from the Desktop by right-clicking the Windows button (where the old Start button used to be) and navigating to the appropriate menu item. The whole procedure takes two clicks.
    • All Apps: Users can see all the available apps by clicking the arrow on the Start Screen. (To see the apps in Windows 8 you had to right-click on an empty part of the Start Screen.)

 

Source: TechRepublic