Tuesday, January 31, 2012

Big Data is more than Hadoop

We recently published the results of our benchmark research on Big Data to complement the previously published benchmark research on Hadoop and Information Management. Ventana Research undertook this research to acquire real-world information about levels of maturity, trends and best practices in organizations’ use of the large-scale data management systems now commonly called Big Data. The results are illuminating.

Volume, velocity and variety of data (the so-called three V’s) are often cited as characteristics of big data. Our research offers insight into each of these three categories. Regarding volume, over half the participating organizations process more than 10 terabytes of data, and 10% process more than 1 petabyte of data. In terms of velocity, 30% are producing more than 100 gigabytes of data per day. In terms of the variety of data, the most common types of big data are structured, containing information about customers and transactions.

However, one-third (31%) of participants are working with large amounts of unstructured data. Even so, nine out of 10 participants rate scalability and performance as their most important evaluation criteria, suggesting that the volume and velocity of big data are more pressing concerns than its variety.

This research shows that big data is not a single thing with one uniform set of requirements. Hadoop, a well-publicized technology for dealing with big data, gets a lot of attention (including from me), but there are other technologies being used to store and analyze big data.

The research data shows an environment that is still evolving. The majority of organizations still use relational databases, but not exclusively: more than 90 percent of participants using relational databases also use at least one other technology for some of their big-data operations. One-third (34%) are using data warehouse appliances, which typically combine relational database technology with massively parallel processing. About as many (33%) are using in-memory databases. Each of these alternatives is more widely used than Hadoop. In addition, 15% use specialized databases such as columnar technologies, and one-quarter (26%) are using other technologies.

While these technologies enable organizations to do things they haven’t done before, there is no technological silver bullet that will solve all big-data challenges. Organizations struggle with people and process issues as well. In fact, our research shows that the most troublesome issues are not technical but people-related: staffing and training. Big data itself, and these new approaches to processing it, require additional resources and specialized skills. Hence we see high levels of interest in big-data industry events such as Hadoop World and the Strata Conference. Recognizing the dearth of trained resources, some academic institutions have launched degree programs in analyzing big data, and IBM has started Big Data University.

Research participants cited real-time capabilities and integration as their key technical challenges. The velocity with which they generate data, and the fact that more than half of the organizations analyze their data more than once a day, are forcing them to seek real-time capabilities; the pace of business today demands that they extract all useful information as soon as possible to support rapid decision-making.

With respect to integration, fewer than half of participants are satisfied with the integration of third-party products, and almost two-thirds cite lack of integration as an obstacle to analyzing big data. Three-quarters have integrated query and reporting with their big-data systems, but more advanced analytics such as data mining, visualization and what-if analysis are seldom available as integrated capabilities. Responding to such comments, vendors have been racing to integrate their business intelligence and information management products with big-data sources. As you consider big-data projects and technologies, make sure that the vendors you select can handle the big-data sources you must use.

Looking ahead, we expect more changes in this evolving landscape. In some ways, big-data challenges, and the presence of Hadoop in particular, have paved the way for technologies other than relational databases. NoSQL alternatives such as Cassandra, MongoDB and Couchbase are gaining notice in enterprise IT organizations after the success of Hadoop. In-memory databases, once considered a niche technology, are now positioned by SAP, with HANA, as its primary big-data analytical platform. There are differing opinions about whether these various big-data technologies will converge or diverge. We can look to the past for some indication of where the market might go: over the years a variety of alternatives to relational databases have emerged, including OLAP, data warehouse appliances and columnar databases, and each was eventually absorbed into relational databases.

We also see signs of the major relational vendors embracing big-data technologies. IBM acquired Netezza for its massively parallel data warehouse appliance technology and has also invested heavily in Hadoop. Oracle introduced its own line of data warehouse appliances and recently brought to market a big-data appliance that includes Hadoop and NoSQL technologies. Microsoft has invested in massively parallel processing and Hadoop. We also see independent vendors such as Hadapt combining relational database technology with Hadoop. The past is not necessarily an indication of the future, but our research and recent market dynamics suggest it may be premature to write off the relational database vendors as out of touch.

In light of this information, I recommend that your organization explore various alternatives for solving specific challenges. At a minimum you should be aware of the alternatives so when the need arises you will know what is available. Use our big-data research to guide your use of these technologies and to help avoid some of the obstacles they present so you can be more successful in applying big data to business decisions.

This blog originally appeared at Ventana Research. (David Menninger)

Tuesday, January 10, 2012

China Telecom agrees to UK MVNO deal

China Telecom’s European division is a step closer to launching a virtual mobile network in the region, agreeing a network sharing deal with UK carrier EverythingEverywhere.
The agreement clears China Telecom Europe to launch services during the first quarter, targeting Chinese nationals and businesses in the UK. EverythingEverywhere claims the network will be the first MVNO established by a Chinese operator outside its domestic market.
China Telecom Europe’s managing director Ou Yan first detailed plans for an MVNO in the UK in August. At the time he told Telecoms Europe.net that the network would pave the way for similar agreements in France and Germany, and he subsequently revealed that the firm is also eyeing the US.
Yan says EverythingEverywhere’s network coverage and experience in launching MVNOs were the deciding factors in the UK deal. “We are keen to launch the service in the UK as soon as possible as there is a real gap in the market for the provision of tailored mobile services…aimed at the growing Chinese population,” he comments.
The agreement is the 24th arranged by EverythingEverywhere’s mobile virtual network aggregator Transatel.

Five new challenges for APAC telecoms in 2012

The telecoms industry in the Asia-Pacific region will face considerable challenges in 2012 as overall growth in the mobile market slows down and competition for customers increases. As revenue growth slows, operators will be forced to improve efficiency and control costs within their businesses. They will need to do this in an environment driven by smart devices and fixed-service bundling.
While details vary from market to market, the overall picture is one of tightening margins. A key success factor for operators will be strong internal management to make operational changes that ensure continued profitability. Ovum believes that five major trends will drive the telecoms industry in Asia-Pacific in 2012.
The push for cost optimization and efficiency
Cost optimization will grow in importance as operators face increasing competition and margin pressures over the next 12 months. While early cost optimization initiatives will involve relatively simple measures such as passive network sharing and the outsourcing of non-core functions, more aggressive cost optimization strategies, such as backhaul sharing and access infrastructure sharing, will also begin to emerge. There is no single strategy that operators should adopt as different markets will require different approaches. However, organizing joint ventures with competitors will be challenging, especially in developed markets.
Opex reductions should be easier to realize than capex reductions, and opex reductions are expected to increase in 2012 through the outsourcing of network management, customer service, and other non-core business functions. Ovum expects outsourcing deals among emerging market operators to grow by approximately 50% in 2012.
The importance of customer service
Many operators are struggling to find sustainable, non-price advantages over their competitors. The current strategy is to stay ahead of the competition with a series of tactical moves including promotions, marketing, exclusive device relationships, better network coverage/reliability, and customer service.
Because poor customer service is expensive for operators as well as frustrating for customers, good customer service benefits both sides. Many operators around the world are spending heavily to improve their customer service systems. This expenditure encompasses areas such as 24/7 customer service and weekend fault repair calls. Some telcos are now also addressing customer problems over social networking services such as Twitter and Facebook. Customer service can be a significant differentiator for telcos, but any efforts must involve the entire organization and ultimately result in cultural change.
The future of smart devices and mobile app ecosystems
The ongoing shift away from feature phones towards smartphones and tablets running “light” operating systems will continue to shape operator strategy. It will have a significant impact on network investment and service pricing, and it will drive operators’ value-added service offerings. Application functionality and content will become increasingly reliant on the network and cloud services.
Consumers are no longer content to purchase a device based solely on hardware features and price. Successful devices will need to integrate applications, content, and services into the platform.
The emergence of cross-platform development based on web standards and/or proprietary Rich Internet application runtimes provides a potential route away from the current reliance on proprietary vendor-controlled app stores. The challenge for operators over the next two years lies in managing this transition, and using it to move up the value chain in delivering applications and content to users of smart devices on their networks.
Network data management is vitally important
With data traffic increasing exponentially, operators are being forced to implement a mix of technologies to alleviate network congestion. Advanced pricing schemes, such as quality of service and prioritization-based tariffs, have been hard for customers to understand and therefore difficult for operators to sell.
In some markets, operators have continued to embrace Wi-Fi offloading. While femtocells are gaining some traction in Asia-Pacific, the business case for them is very operator- and market-specific. Adding to operators’ dilemma is the debate surrounding picocell, macrocell, and microcell networks.
Operators will ultimately roll out a combination of solutions. We expect to see several more LTE networks launched, more extensive Wi-Fi offloading, and increased discussions of heterogeneous network solutions in Asia-Pacific in 2012.
A lack of sufficient backhaul will be a major component of capacity challenges in 2012. Mobile operators that have not already done so will look to move their backhaul to packet technologies (typically Ethernet) in conjunction with capacity upgrades.
Bundling for customer retention
Bundling strategies have begun to gain traction, and we expect this trend to accelerate in 2012. Telcos with bundling strategies maintain that the net outcome of bundling is revenue growth and reduced churn.
Ovum expects to see more bundling strategies emerge in 2012, particularly from second tier operators. There is also a significant bundling opportunity for mobile-only operators in countries where governments are deploying wholesale-only fiber NGA networks.
(David Kennedy, the practice leader for Ovum’s Asia-Pacific research group)

Technologies to watch: 2012 and beyond

A series of trends is emerging in the telecom landscape; some will blaze a trail, but others are likely to vanish without a trace.
The factors that will determine which technologies endure range from pure necessity to a coolness factor, and from innovativeness to cost. The following technologies look certain to be trailblazers in the years to come.
Software defined networking: SDN is based on the pathbreaking paradigm of separating the control of a network flow from the flow of data itself. It is the result of pioneering work at Stanford University and the University of California, Berkeley, and is based on the OpenFlow protocol. SDN decouples the routing and switching of data flows and moves control of each flow to a separate network element, namely the flow controller. This allows the flow of data packets through the network to be controlled programmatically.
The OpenFlow protocol has three components (the flow controller, the OpenFlow switch and the flow table) plus a secure connection between controller and switch. SDN also includes the ability to virtualize network resources; a set of virtualized network resources is known as a “network slice”. A slice can span several network elements, including the network backbone, routers and hosts. The ability to control multiple traffic flows programmatically puts enormous flexibility and power in the hands of users.
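To make the match/action idea concrete, here is a minimal conceptual sketch in Python of a flow table, a switch and a controller. It is not the real OpenFlow wire protocol or the API of any actual controller; all class names, field names and port labels are invented purely for illustration.

```python
# Toy illustration of OpenFlow-style separation of control and data planes.
# This models the concept only; it does not speak the real OpenFlow protocol.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class FlowRule:
    """A simplified flow-table entry: match fields plus a list of actions."""
    match: Dict[str, str]      # e.g. {"dst_ip": "10.0.0.5"}
    actions: List[str]         # e.g. ["forward:port2"] or ["drop"]
    priority: int = 0


class ToySwitch:
    """Data plane: forwards packets by consulting its flow table."""

    def __init__(self) -> None:
        self.flow_table: List[FlowRule] = []

    def handle_packet(self, packet: Dict[str, str]) -> List[str]:
        for rule in sorted(self.flow_table, key=lambda r: -r.priority):
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.actions
        return ["send_to_controller"]  # table miss: defer to the control plane


class ToyController:
    """Control plane: decides flows and programs switches over a (notional) secure channel."""

    def install_rule(self, switch: ToySwitch, rule: FlowRule) -> None:
        switch.flow_table.append(rule)


if __name__ == "__main__":
    sw, ctl = ToySwitch(), ToyController()
    ctl.install_rule(sw, FlowRule({"dst_ip": "10.0.0.5"}, ["forward:port2"], priority=10))
    print(sw.handle_packet({"dst_ip": "10.0.0.5", "tcp_port": "80"}))  # ['forward:port2']
    print(sw.handle_packet({"dst_ip": "10.0.0.9"}))                    # ['send_to_controller']
```

The point of the sketch is the division of labour: the switch only matches and forwards, while the controller holds the logic that decides which flows exist.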
Smart Grids: The energy industry is poised for a complete transformation with the evolution of the smart grid. There is now a pressing need for greater efficiency in power generation, transmission and distribution, coupled with a reduction in energy losses. In this context, many leading players in the energy industry are building a connected, end-to-end digital grid to manage energy transmission and distribution intelligently.
The digital grid will have smart meters, sensors and other devices distributed throughout, capable of sensing, collecting and analyzing data and passing it on to devices that can act on it.
The huge volume of collected data will be sent to intelligent devices, which will use wireless 3G networks to transmit it; appropriate actions, such as alternate routing and optimal energy distribution, would then follow. Smart grids are a certainty, given that this technology addresses the dire need for efficient energy management.
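As a rough illustration of that data flow, the Python sketch below has simulated meters report readings to a central function that analyzes them and decides whether to act. All names, units and thresholds here are invented for illustration and do not describe any particular utility’s system.

```python
# Toy sketch of the smart-grid loop described above: meters report load,
# a central system analyzes the readings, and an action may be triggered.

import random
import statistics
from typing import List


def read_meters(n: int) -> List[float]:
    """Stand-in for smart meters reporting household load in kW over a wireless link."""
    return [random.uniform(0.5, 3.0) for _ in range(n)]


def analyze_and_act(readings: List[float], overload_kw: float = 2.5) -> str:
    """Decide on an action (e.g. alternate routing) based on the collected data."""
    peak, avg = max(readings), statistics.mean(readings)
    if peak > overload_kw:
        return f"reroute feeder: peak {peak:.2f} kW exceeds {overload_kw} kW (avg {avg:.2f} kW)"
    return f"no action: peak {peak:.2f} kW within limits (avg {avg:.2f} kW)"


if __name__ == "__main__":
    print(analyze_and_act(read_meters(100)))
```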
NoSQL: In large web applications where performance and scalability are key concerns, a non-relational (NoSQL) database is often a better choice than a traditional relational database. There are several examples of such databases; the best known are Google’s BigTable, HBase, Amazon’s Dynamo, CouchDB and MongoDB.
These databases partition the data horizontally and distribute it among many regular commodity servers. Access to the data is through simple get(key) or set(key, value) style APIs. The ability to spread the data, and the queries against it, across several servers provides the key benefit of scalability. Applications that have to frequently access and manage petabytes of data will clearly have to move to the NoSQL paradigm.
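The sketch below shows the get/set style of access and the idea of hashing keys across commodity servers. It is a toy in-memory model, not the API of any of the products named above, which layer replication, consistent hashing and failure handling on top of this basic idea.

```python
# Toy key-value store that partitions data across "servers" by hashing the key.

import hashlib
from typing import Any, Dict, List


class ToyKeyValueStore:
    def __init__(self, num_servers: int = 4) -> None:
        # Each "server" is just an in-memory dict standing in for a commodity node.
        self.servers: List[Dict[str, Any]] = [{} for _ in range(num_servers)]

    def _server_for(self, key: str) -> Dict[str, Any]:
        # Hash the key to pick a partition (horizontal partitioning / sharding).
        digest = hashlib.md5(key.encode("utf-8")).hexdigest()
        return self.servers[int(digest, 16) % len(self.servers)]

    def set(self, key: str, value: Any) -> None:
        self._server_for(key)[key] = value

    def get(self, key: str) -> Any:
        return self._server_for(key).get(key)


if __name__ == "__main__":
    store = ToyKeyValueStore()
    store.set("user:42", {"name": "Alice", "plan": "prepaid"})
    print(store.get("user:42"))  # {'name': 'Alice', 'plan': 'prepaid'}
```

Because each key maps deterministically to one partition, adding servers spreads both the data and the query load, which is the scalability benefit described above.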
NFC: Near Field Communication is a technology whose time has come. Mobile phones enabled with NFC can be used for a variety of purposes, one of which is integrating credit card functionality into the phone itself. The major players in mobile, including Apple (iPhone), Google (Android) and Nokia, are already integrating NFC into newer versions of their phones. We will no longer have to carry a stack of credit cards in our wallets; the mobile phone will double as a Visa, MasterCard and so on.
NFC also allows retail stores to send promotional coupons to subscribers who are in the vicinity of a shopping mall, and posters or trailers of films showing at a theatre can be pushed as multimedia clips to people passing nearby. It can also be used to exchange contact lists with friends who are in close proximity.
Tinniam V Ganesh is a telecom expert with 25 years of experience in the software industry. He blogs at http://gigadom.blogspot.com

(Source: telecomasia.net, Sept 11, 2011)

SingTel aims high for cloud growth

SingTel’s cloud service portfolio comprises on-demand computing resources, on-demand connectivity, on-demand managed services, plus SaaS solutions, which include office productivity, finance and accounting, human resources, sales and marketing, and supply chain management.
Alvin Kok, SingTel Business’ head of infocomm services, said the carrier has more than 800 businesses and 150,000 business users subscribing to its cloud services today. "Demand for our cloud services has accelerated over the last three years. We aim to grow our cloud services with a CAGR of around 50-70% over the next three years," he said.
SaaS continues to gain traction among small and medium businesses (SMBs) in Singapore. "We are expecting this momentum to carry on in the area of Infrastructure-as-a-service for both SMB and MNCs," Kok said. "In the next three to five years, SingTel will focus and continue our thrust to become Asia’s best and largest one-stop ICT experience provider," he added.
In an excerpted interview with Asia Cloud Forum, Kok describes SingTel’s cloud service deployment for the Singapore 2010 Youth Olympic Games and advises businesses on the key questions to ask before they adopt cloud services.
Asia Cloud Forum: Describe one of your company’s most successful customer deployments of cloud services.
Alvin Kok: SingTel was the official data center infrastructure partner of the Singapore 2010 Youth Olympic Games (YOG), the world’s first Olympic Games for youth. We provided secure virtual data center services to run key applications such as Games and results management, internet applications, email services and web hosting for the successfully concluded Games.
SingTel’s virtual data center services allowed the Games organizers to deploy and scale up primary and secondary data centers rapidly, without the need to purchase, configure, and maintain physical infrastructure.
SingTel also provided a Web-based social networking application and a multi-language online chat platform to help athletes and officials keep in touch with one another, and to stay abreast of the latest results.
Finally, SingTel also provided disaster recovery and business continuity services -- including network and infrastructure diversity -- as well as 24/7 systems monitoring to ensure optimum performance and security.
“The Singapore 2010 Youth Olympic Games was the most complex IT project for a sporting event ever undertaken in Singapore to date,” said Lim Bee Kwan, director of technology, Singapore Youth Olympic Games Organising Committee. (Source: Telecomasia.net)

Android Google TV to be released by LG

With Google having confirmed LG as a major device partner for Google TV, the consumer electronics giant has revealed further details of the product it will be showcasing at CES in Las Vegas.

LG says the forthcoming LG Smart TV with Google TV combines the familiarity of Google’s Android OS with the convenience and comfort of LG’s 3D and Smart TV technologies, offering consumers a new and attractive home entertainment option.

“LG has constantly strived to provide consumers with wider choices in home entertainment that bring the highest level of sophistication and convenience,” said Havis Kwon, President and CEO of the LG Electronics Home Entertainment Company. “Through Google TV, LG has merged Google’s established Android operating system with LG’s proven 3D and Smart TV technologies, offering consumers a new and enthralling TV experience.”

LG says the LG Google TV’s most attractive feature is its ease of use, thanks to the combination of its Android-based user interface and the Magic Remote Qwerty designed by LG. LG Google TV’s user interface and main screen have been designed for convenient browsing and content selection. Multi-tasking is also possible, as the search, social networking and TV functions can be run simultaneously. The user interface can be accessed using the Magic Remote Qwerty which combines the user-friendly benefits of LG’s Magic Remote with a QWERTY keyboard.

Equipped with LG’s own CINEMA 3D technology, LG Google TV provides a home entertainment experience that is immersive, comfortable and convenient, says the company. Based on LG’s own Film Patterned Retarder (FPR) technology, CINEMA 3D glasses are battery-free and lightweight. “The glasses are also very affordable, making LG’s Google TV ideal for viewing by a large group of family and friends when used in 3D mode. And with a single click of the remote, any 2D content can be viewed in 3D, thanks to the built-in 2D to 3D conversion engine,” notes LG.

LG confirmed that alongside Google TV, the company will continue to advance its own Smart TV platform, based on NetCast, using open web technologies such as the WebKit browser and Linux. LG Smart TV with NetCast will be available globally in more than 85 countries at launch.

LG Smart TV with Google TV will be available in two series at launch in the US in 2012. The first demonstration of LG’s Google TV will take place at CES, January 10-13. (Source: Telecomasia.net)

Sharpening data center due diligence: Six questions for CIOs

Asking a board of directors for several hundred million dollars to obtain new data center capacity is one of the least popular requests a senior technology executive can make. As one CIO said, “I have to go to the executive committee and tell them that I need a billion dollars, and in return I’m going to give them exactly nothing in new functionality— I’m going to allow them to stay in business. I’m not looking forward to this.”

Investments in data center capacity are a fact of business life. Businesses require new applications to interact with customers, manage supply chains, process transactions, and analyze market trends. Those applications and the data they use must be hosted in secure, mission-critical facilities. To date, the largest enterprises have needed their own facilities for their most important applications and data.

How much data center capacity you need and when you need it, however, depends not only on the underlying growth of the business but also on a range of decisions about business projects, application architectures, and system designs spread out across many organizations that don’t always take data center capital into account. As a result, it’s easy to build too much or to build the wrong type of capacity. To avoid that, CIOs should ask a set of questions as part of their due diligence on data center investment programs before going to the executive committee and the board with a capital request.

1. How much impact do our facilities have on the availability of important business applications?

Resiliency is among the most common justifications for data center investments. Out-of-date, low-quality data centers are often an unacceptable business risk. Yet upgrading facilities isn’t always the most direct or even the most effective means of making applications more available. At the margin, investments in improved system designs and operations may yield better returns than investments in physical facilities.

Downtime overwhelmingly stems from application and system failures, not facility outages. An online service provider, for example, found that facility outages accounted for about 1 percent of total downtime. Even the most aggressive improvements in facility uptimes would have a marginal impact on application downtimes.

Organizations with high-performing problem-management capabilities can achieve measurably better quality levels by identifying and eliminating the root causes of incidents across the technology stack. Yet many infrastructure organizations do not have integrated problem-management teams.

2. How much more capacity could we get from existing facilities?

In many cases, older data centers are constrained by cooling capacity, even more than by power capacity: insufficient air-conditioning infrastructure limits the amount of server, storage, and network equipment that can be placed in these sites. The data center team can often free up capacity by improving their cooling efficiency, sometimes through inexpensive and quick-to-implement moves.

A large European insurance company, for example, wanted to consolidate part of its data center portfolio in its largest, most resilient data center, which was cooling constrained. The company freed up one to two critical megawatts of capacity in this facility, with approximately $40 million in capital cost savings, by replacing worn floor tiles, cable brushes, and blanking plates (all of which improved air flow) and increasing the operating-temperature range. As a result, the company consolidated facilities and provided capacity for business growth without having to build new capacity.

3. What does future demand for data center capacity look like and how can virtualization affect it?

World-class data center organizations deeply understand potential demand scenarios. Rather than make straight-line estimates based on historical growth, they use input from business and application-development groups to approximate the likely demand for different types of workloads. They then model potential variations from expected demand, factoring in uncertainties in business growth, application-development decisions, and infrastructure platform choices.

Without a business-driven demand forecast, IT organizations tend to build “just in case” capacity because few data center managers want to be caught short. A large European enterprise, for instance, cut its expansion plans to 15 critical megawatts, from 30, after the data center team conducted a deeper dive with business “owners” to better understand demand growth.

Even after years of rationalization, consolidation, and virtualization, many technology assets run at very low utilization rates, and every incremental server, storage frame, and router takes up space in a data center. Mandating that applications be migrated onto virtualized platforms in the facility, rather than moved onto similarly configured infrastructure, can be a powerful lever not only for reducing IT capital spending broadly but also for limiting new data center capacity requirements. A global bank, for example, cut its six-year demand from 57 megawatts to nearly 40 by leveraging its data center build program to accelerate the use of virtual machines; this translated into a reduction of more than 25 percent in new capacity to be built. That achievement helped create a political consensus for implementing virtualization technology more aggressively.
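The mechanics behind such a reduction are easy to sketch. The figures in the snippet below are entirely hypothetical (they are not the bank’s numbers); they only show how an assumed consolidation ratio and facility PUE turn server counts into megawatts of demand.

```python
# Hypothetical illustration of how virtualization feeds a data center capacity forecast.
# All inputs are assumptions for illustration, not figures from the article.

def required_megawatts(physical_servers: int, kw_per_server: float, pue: float) -> float:
    """Facility demand = IT load scaled by power usage effectiveness (PUE)."""
    it_load_mw = physical_servers * kw_per_server / 1000.0
    return it_load_mw * pue


workloads = 60_000             # assumed number of application workloads to host
kw_per_server, pue = 0.5, 1.6  # assumed draw per physical server and facility PUE

# Without virtualization: one workload per physical server.
baseline = required_megawatts(workloads, kw_per_server, pue)

# With virtualization: assume three workloads consolidated per physical server.
consolidated = required_megawatts(workloads // 3, kw_per_server, pue)

print(f"baseline demand:    {baseline:.1f} MW")
print(f"virtualized demand: {consolidated:.1f} MW")
print(f"capacity avoided:   {100 * (1 - consolidated / baseline):.0f}%")
```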

4. How can we improve capacity allocation by tier?

Owners of applications often argue that they must run in Tier III or Tier IV data centers to meet business expectations for resiliency. Businesses can, however, put large and increasing parts of their application environments in lower-tier facilities, saving as much as 10 to 20 percent on capital costs by moving from Tier IV to Tier III capacity. By moving from Tier IV to Tier II, they can cut capital costs by as much as 50 percent.

Many types of existing workloads, such as development-and-testing environments and less critical applications, can be placed in lower-tier facilities with negligible business impact. Lower-tier facilities can host even production environments for critical applications if they use virtualized failover, where redundant capacity kicks in automatically, and the loss of session data is acceptable, as it is for internal e-mail platforms.

With appropriate maintenance, downtime for lower-tier facilities can be much less common than conventional wisdom would have it. One major online service provider, for instance, has hosted all its critical applications in Tier III facilities for 20 years, without a single facility outage. This level of performance far exceeds the conventional Tier III standard, which assumes 1.6 hours of unplanned downtime a year. The company achieved its remarkable record through the quarterly testing and repair of mechanical and electrical equipment, the preemptive replacement of aging components, and well-defined maintenance procedures to minimize outages that result from human error.
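As a quick sanity check, the 1.6 hours cited here is simply the downtime implied by the Tier III availability target commonly quoted as about 99.982 percent:

```python
# Back-of-the-envelope check of the Tier III downtime figure cited above.

hours_per_year = 24 * 365
tier_iii_availability = 0.99982          # commonly quoted Tier III design target

downtime_hours = hours_per_year * (1 - tier_iii_availability)
print(f"{downtime_hours:.2f} hours of unplanned downtime per year")  # ~1.58 hours
```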

It is inherently more efficient and effective to provide resiliency at the application level than at the infrastructure or facility level. Many institutions are rearchitecting applications over time to be “geo-resilient,” so that they run seamlessly across data center locations. In this case, two Tier II facilities can provide a higher level of resiliency at lower cost than a single Tier IV facility. This would allow even an enterprise’s most critical applications to be hosted in lower-tier facilities.

5. How can we incorporate modular designs into our data center footprint?

There is a traditional model for data center expansion: enterprises build monolithic structures in a highly customized way to accommodate demand that is five or sometimes ten years out. In addition, they design facilities to meet the resiliency requirements of the most critical loads. New modular construction techniques, however, have these advantages:

  • shifting data center build programs from a craft process for custom-built capacity to an industrial process that allows companies to connect factory-built modules
  • building capacity in much smaller increments
  • making it easier to use lower-tier capacity
  • avoiding the construction of new facilities, by leveraging existing investments

6. What is the complete list of key design decisions and their financial impact?

Even after the company has established its capacity requirements, dozens of design choices could substantially affect the cost to build. They include the following:

  • redundancy level of electrical and mechanical equipment
  • floor structure (for instance, single- or multistory)
  • cooling technology (such as free-air cooling, evaporative chillers, and waterside economizers)
  • degree to which components are shared between or dedicated to modules
  • storm grade (for instance, the maximum wind speed a data center can withstand, as defined by regional or national standards, such as the Miami–Dade County building code and the Saffir–Simpson Hurricane Wind Scale)

Individual choices can have a disproportionate impact on costs per unit of capacity even after a company chooses its tier structure. A large global financial-services firm, for example, looked closely at its incident history and found that electrical failures—rather than mechanical ones—caused almost all of the issues at its facilities. Knowing this, the firm increased its electrical redundancy and decreased mechanical redundancy, shaving off several million dollars in construction costs per megawatt.

Given the scale of the investment required (billion-dollar data center programs are not unheard of), CIOs must undertake aggressive due diligence for their data center capital plans. They can often free up tens of millions of dollars from build programs by asking tough questions about resiliency, capacity, timing, tiering, and design. Perhaps more important, they can tell the executive committee and the board that they are using the company’s capital in the most judicious way possible.
(Source: James Kaplan - McKinsey)

How a grocery giant puts technology at the center of innovation

Cooperative Consumers Coop, better known as Coop, was Italy’s first retailer to embrace hypermarkets, in the 1980s, and then began opening even bigger superstore venues while expanding its offerings to include insurance and banking services, electricity, and prescription drugs. Throughout this expansion, Coop sought innovative ways to support its strategy with technology. Massimo Bongiovanni has played a central role in helping the company realize that goal as president of Coop Centrale, which manages purchasing and distribution for the retailer’s cooperative network of stores, as well as the IT and services that support marketing, pricing, and other elements of Coop’s commercial policies.

Earlier this year, McKinsey’s Brad Brown, Lorenzo Forina, and Johnson Sikes spoke with Massimo Bongiovanni about technology’s role in fostering growth and innovation.

A rising role for IT: McKinsey Global Survey results

Aspirations—and current expectations—for IT have never been higher. Executives continue to set exacting demands for IT support of business processes, and they see an even larger role for IT in a competitive environment increasingly shaken up by technology disruptions. These are among the results of our sixth annual business technology survey, in which we asked executives across all functions, industries, and regions about their companies’ use of, expectations for, and spending on IT. Looking ahead, executives expect IT to create new platforms to support innovation and growth, help guide strategy with data and advanced analytics, and stay on top of possible new roles for mobile devices. For IT leaders, the good news is that along with these higher expectations, most respondents also see a greater willingness to spend more on IT.

Google TV Tries Again

LAS VEGAS — Manufacturers of televisions have been searching for something — anything! — to reverse a years-long slide in profits.

This week, several manufacturers plan to unveil their effort at the huge International Consumer Electronics Show here. It’s called Google TV.

If that sounds familiar, it’s because Google has been trying to crack the television market for some time, with rather tepid results. Google’s first foray into television, a partnership with Sony and Logitech in the fall of 2010, didn’t catch on because the remote was so big and complicated and because its software was confusing. And so far, Google’s success has been limited by the restricted availability of on-demand television from the major networks.

Several announcements are expected at the show. LG Electronics says the latest iteration of Google TV has merged Google’s Android operating system with LG’s 3-D and Smart TV technologies, offering consumers “a new and enthralling TV experience.”

Whether consumers will be enthralled remains to be seen, but Google and television manufacturers are wagering that it will be a hit. Besides LG, Sony, Samsung and Vizio will introduce Google TV-powered products.

The idea of Google TV, and more broadly of Internet-connected televisions, is that viewers would be able to watch television in much the same way they browse the Internet, allowing them to watch movies, television shows, concerts and sporting events whenever they wanted.

In the case of Google TV, viewers would use Google’s Android operating system and apps specifically developed for the television. Google says it is a simpler interface than the first version of Google TV and offers more “TV-like” viewing of YouTube