
Orange Mobile News



Building a safe and stable medical cloud with VMware virtualization technology

Posted by Daniel J Su, [Dec 21, 2016]

Four information technology trends now affect every industry: Cloud (cloud computing), Mobile (mobile devices), Social (social networking), and Big Data.

Take Alibaba Group, listed in the US, as an example. It began by building an Internet shopping platform and has since spun off a variety of service sites. Its payment platform has also branched into banking, and the way its deposit products attract users away from traditional banks has drawn the attention of China's State Council. This is an important case of the Internet shaking up the financial industry.

Industries on the mainland all face competition from the BAT companies (Baidu, Alibaba, Tencent). Alibaba, for example, applies cloud computing technology on a large scale, so its compute resources can be extended almost without limit.

In the course of moving off solutions from IBM, Oracle, and EMC (the "de-IOE" initiative), Alibaba reached the milestone of taking its last IBM UNIX host offline. Its fully x86-virtualized cloud infrastructure now supports all of its service sites.

This shift to virtualization has also contributed to declining IBM Power Systems revenue, a sign that IT infrastructure in all walks of life is moving to the cloud; only on such a foundation can organizations support and develop more new applications.

Telemedicine rides the "cloud" wave

I recently attended a Software Association forum in Shanghai whose main themes were smart cities, health care, and the future development of the health industry, including how hospitals can use advances in information technology to strengthen their competitiveness.

Forbes has likewise reported that the health care industry will embrace cloud computing: 83% of medical institutions already use cloud-based apps.

Hospital IT supports a "cloud-to-client" architecture. The middle tier includes HIS, PACS, EMR/EHR, HR, and outpatient systems, along with derived medical, nursing, case, pharmaceutical, and health cloud applications.

Beneath these applications sits the cloud itself, including the hospital's private cloud (cloud data center) built on virtualization of x86 servers, networking, and storage. On top of it, mobile medical applications can be used from all kinds of endpoints (hospital Windows workstations, iPad and Android tablets, personal and home notebooks, mobile care carts, smartphones, and so on), with a range of apps providing the various medical services.

VMware offers advanced technology for constructing a complete end-to-end cloud infrastructure. At the cloud layer it provides VMware vSphere (server virtualization), VMware vSAN (storage virtualization), and VMware NSX (network virtualization); at the endpoint layer, VMware Horizon (virtual desktops) and AirWatch (mobile device management).

Cloud virtualization brings smart mobility to the medical information environment

The current challenges in mobile health care are that medical information systems (including PACS, EMR, and HIS) must support different devices, deliver mobile-oriented and fast services, ensure security and data protection for every access, and stand ready at all times. The first step toward mobility is therefore "desktop virtualization": desktop environments are virtualized and centrally managed on the server side, and a high-efficiency remote display protocol lets users access them from any device.

Medical units that have adopted virtual desktops include emergency and nursing-station operations, HR and information-planning departments, dialysis rooms, clinics, medical wards, and remote clinics outside the hospital.

Smart-medical scenarios for VMware Horizon on mobile tablets include: 1. doctors' rounds; 2. mobile care carts; 3. document sign-off; 4. radiologists dictating reports; 5. physicians viewing patient profiles; 6. on-call telemedicine services, all without interrupting other sessions.

IT management application scenarios include:

1. security alerts for medical data,

2. streamlined software licensing fees,

3. easy use and maintenance of all types of medical USB devices, including medical cards,

4. packaging legacy software for deployment on new operating systems,

5. XP-to-Win7 PC upgrades and physical asset management,

6. easy desktop management with uninterrupted desktop delivery.

VMware's virtual desktop infrastructure (VDI) supports phones, tablets, and laptops running a variety of operating systems. Once deployed to each mobile device, health care workers can use an iPad or Android tablet to log in remotely to the familiar Windows environment to query and enter data, run analyses, and sign off documents, all without changing existing habits and without being tied to a fixed seat in the hospital; even on a business trip they can connect back to their hospital desktop. On the security side, there is no fear of losing half-entered data to network instability and having to re-enter it, nor of leaking personal or other private information.

VMware's virtual desktop and mobile security management technology leads the industry in every comparison. Today VMware is helping medical customers construct low-cost, high-efficiency, green private hospital clouds that give any terminal device a zero-interruption, familiar user interface without compromising data security in the mobile health care environment.

Big Data and Hadoop MapReduce application development

Posted by Daniel J Su, [Dec 10, 2016]

Big Data may be an abundant treasure trove, but digging out the treasure requires the right tools, and the relational databases, SQL syntax, and ETL (Extract, Transform, Load) processes of the past are no longer enough. Beyond Hadoop, which has become almost synonymous with massive data, technologies within the same framework, such as MapReduce and HDFS, are things practitioners must actively learn.

As Big Data captured overwhelming IT media coverage and important forums, many people who previously knew little about it abandoned their wait-and-see attitude and began to spend time studying it, only to find that many well-known companies worldwide had already embraced Big Data enthusiastically and obtained fruitful results; so, without further ado, they set out to catch the boom.

But for IT staff already fluent in RDBMSs, SQL syntax, and schemas, being handed the topic of massive data by the boss means confronting a wall of unfamiliar terms. Hadoop, always pictured with its yellow elephant, appears more frequently than any other Big Data vocabulary; on further investigation, it turns out to be hailed as the platform best suited to processing, storing, and querying big data. With Hadoop so widely praised, even those with no past experience and no intention of earning a certification can no longer afford to remain ignorant of it.

The main reason: to open the door of hope to massive data, Hadoop is almost certainly the indispensable key!

Delving into the Hadoop software framework can make one's scalp tingle and tempt one to surrender, but whether the topic is massive data or even cloud computing and other hot subjects, everything connects back to this stuff; so however painful, one can only bite the bullet and keep studying.

So what, in the end, does Hadoop comprise? Its two most central projects are MapReduce, a programming model for distributed processing, and HDFS, a distributed file system. These two pillars, one for computation and one for storage, firmly hold up the Hadoop architecture.

Turning first to MapReduce. Recall the familiar business intelligence (BI) mode of operation: the data needed by an analytical model must be pooled from multiple systems, which inevitably means a step many IT people dread, ETL (Extract, Transform, Load). In the Big Data world, MapReduce plays a role much like ETL, responsible for processing the raw data.

MapReduce divides into two stages, "Map" and "Reduce". Map is a function from one list to a corresponding list: as the name suggests, it takes each piece of data, pairs it with a key, and emits intermediate key-value data. Reduce is a function over a list of values: it takes the many intermediate values for a key and converges them into a single result.
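The Map and Reduce stages described above can be sketched with a classic word count. This is a minimal single-machine illustration of the data flow, not Hadoop itself; the shuffle step that groups values by key is normally handled by the framework.

```python
from collections import defaultdict

def map_phase(text):
    # Map: emit a (key, value) pair for every word in the input
    return [(word, 1) for word in text.split()]

def shuffle(pairs):
    # Shuffle: group all intermediate values that share the same key
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: converge each key's list of values into a single value
    return {key: sum(values) for key, values in grouped.items()}

pairs = map_phase("big data big cloud data big")
counts = reduce_phase(shuffle(pairs))
# counts == {"big": 3, "data": 2, "cloud": 1}
```

In a real Hadoop job, the map and reduce functions run on many workers in parallel while the framework performs the shuffle between them.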

In the Hadoop system architecture, many "Worker" units accept dispatch from a "Master" (a job is assigned first by the JobTracker to TaskTrackers, and then on to workers) and separately execute Map and Reduce tasks; when the workers finish, they return their results to the TaskTracker.

Second, HDFS, the distributed file system, also involves the Master node. Besides the JobTracker and TaskTracker responsible for dispatching compute tasks, the Master side includes two other key roles, the NameNode and the DataNode, which govern how data is distributed.

The NameNode resembles a traditional file system in that it splits a file into many blocks; but where a traditional file system stores those blocks on drives in the same physical host, the NameNode disperses the blocks across different DataNodes. Readers who know Linux will feel a sense of déjà vu, because the NameNode looks much like the inode in a Linux file system: if someone asks where all the blocks of a given file are located, only a key player like the inode, or the NameNode, can give the answer.

Thus when a user needs to read a particular file, the NameNode can report, say, the 5 hosts that each store one of its blocks; the user reads the 5 blocks in parallel and then combines them into the complete file. This model is very efficient compared with reading blocks 1 through 5 sequentially from a single server, where frequent read locks alone would slow things down considerably.
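The NameNode's role can be pictured as a metadata table mapping each file to the locations of its blocks. The file name, block IDs, and node names below are all made up for illustration; a real NameNode also tracks replicas of each block.

```python
# Hypothetical NameNode-style metadata: file -> ordered (block, node) pairs
block_map = {
    "report.csv": [
        ("blk_1", "datanode-a"),
        ("blk_2", "datanode-b"),
        ("blk_3", "datanode-c"),
        ("blk_4", "datanode-a"),
        ("blk_5", "datanode-b"),
    ]
}

def locate_blocks(filename):
    # The client asks the NameNode where each block lives, then
    # fetches the blocks from those DataNodes in parallel.
    return block_map[filename]

locations = locate_blocks("report.csv")
nodes_used = {node for _, node in locations}
# 5 blocks, spread over 3 nodes
```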

That said, without much experience, writing MapReduce programs from scratch is not so simple; sometimes a seemingly insignificant little mistake can leave a developer stuck for a long time before getting back on track. Fortunately, as massive-data fever spread, some vendors began to design quite approachable Hadoop offerings. Microsoft, for example, packaged techniques accumulated from Bing search and SQL Server data mining into templates offered through the Azure Marketplace; with their help, developers can avoid many costly mistakes, finish MapReduce programs faster, and improve the correctness of the program's content.

As such aids mature, IT staff old and new can get up to speed quickly; Hadoop and MapReduce are not as frightening as imagined.

Beyond the "twin arrows" of MapReduce and HDFS, the Hadoop framework contains many other weapons worth mastering by companies investing in massive data. The difficulty of writing MapReduce programs mentioned above can be eased by the auxiliary tool "Mahout": Mahout is a library built on MapReduce, offering a number of ready-made templates that developers can simply call, significantly reducing the programming burden.

There is also "Pig", which further relieves the pressure of writing MapReduce programs. Pig's language, Pig Latin, is designed specifically for analyzing huge amounts of data; its commands such as GROUP, FILTER, and JOIN feel highly familiar and are easy to use, and Pig scripts are automatically converted into MapReduce Java programs, providing yet another shortcut.
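To see why commands like FILTER and GROUP feel so approachable, here is a hypothetical Python equivalent of a two-line Pig Latin script (FILTER rows BY cost > 50, then GROUP BY dept and sum). The rows and field names are invented for illustration; Pig would compile the same logic into MapReduce jobs.

```python
# Rows a Pig script might process; data invented for illustration
rows = [
    {"dept": "radiology", "cost": 120},
    {"dept": "pharmacy",  "cost": 40},
    {"dept": "radiology", "cost": 80},
    {"dept": "pharmacy",  "cost": 200},
]

# Equivalent of: FILTER rows BY cost > 50
filtered = [r for r in rows if r["cost"] > 50]

# Equivalent of: GROUP filtered BY dept, then SUM(cost) per group
totals = {}
for r in filtered:
    totals[r["dept"]] = totals.get(r["dept"], 0) + r["cost"]
# totals == {"radiology": 200, "pharmacy": 200}
```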

Likewise "HBase", a column-oriented database system built on Hadoop, and "Hive", with its SQL-like (but not SQL) language HiveQL, serve the same purpose as the aforementioned Mahout and Pig Latin: they deliver a considerable degree of efficiency and make it much simpler for the uninitiated to get a first glimpse into massive data.

Take the search-engine route and enter the Big Data world with ease.

For anyone who frequently attends massive-data seminars or occasionally reads the relevant literature, Splunk is a familiar name. Positioning itself as "the company that started time-series search engines" and "the Google of IT", it advertises a rather special proposition: businesses that do not fully understand Hadoop and have never deployed any BI tool can still enter the Big Data world quickly. That pitch indeed hits many niches.

Splunk's unique appeal comes mainly from the versatility of a single platform: data collection, computation, storage, query, indexing, analysis, monitoring, and display are all covered. For enterprises that find building a Hadoop environment too much trouble, or that would otherwise have to write programs and manually ship data into a Hadoop data warehouse for each new analysis topic, the all-embracing Splunk holds no small appeal.

But Splunk's strength lies in analyzing "machine data". Although machine data is the fastest-growing part of massive data, it is still only a part, not the whole picture (Splunk cannot, for example, analyze image data). Rather than overstating its reach, the better course is to integrate with Hadoop and maximize the synergy.

And Splunk has done exactly that, launching an integration package for Hadoop. Users can move Splunk data into Hadoop to facilitate deeper research, or bring Hadoop data into Splunk for visual analysis, report production, and other tasks. By joining forces, the two also help fill in immediacy, Hadoop's Achilles heel.

Big Data in Critical Business

Posted by Daniel Su, [Nov 18, 2016]

A new data analysis technology without a clear commercial application brings no value; conversely, for the massive-data analysis problems that could not be solved in the past, Big Data becomes the big hero as soon as a commercial target is found.

Big Data is a topic with a heavy technical component. Breakthroughs in data processing such as MapReduce, Hadoop, and other distributed processing technologies now on the scene give us more ways to meet future data processing and analysis challenges: rapidly increasing data volumes, faster data flows, and geometric growth in unstructured data.

However, these new data analysis techniques alone will not make companies smarter or more profitable. Without clear commercial applications, Big Data brings no value, and knowing where the business applications of Big Data lie is a big problem for IT departments.

Big Data vendors report no shortage of information executives asking: what can Big Data actually do? Foreign companies face the same problem today. At one Big Data seminar, many CIOs described this difficulty: the IT department struggles to spot opportunities for Big Data, while company management or business-unit heads are better placed to find them. Yet the boss or the business-unit executives often do not understand the technology or how far Big Data analysis has advanced, so they too cannot think of which business issues Big Data could improve, let alone the new opportunities it could create.

Many CIOs also mentioned that IT departments mainly handle structured data from relational databases, so coming up with new uses or ways to improve business processes there is not hard for IT staff; but IT personnel are unfamiliar with unstructured data, making new applications much harder to find.

So if an IT department implements Big Data technology first and only then looks at how to use it, the outcome may not be good. This is unlike cloud computing, where even if you start in the wrong direction, you always begin by virtualizing IT infrastructure; even without reaching the final goal, the virtualization alone produces tangible results.

EMC's CTO recently pointed out in an interview that he has seen Big Data development shift in the US: many companies have grown from exploring specific technologies to seeking Big Data application opportunities from a commercial point of view, a development he considers a good result for Big Data.

Big Data processing tools let you analyze huge amounts of data, and with a clear application purpose they can bring great benefits to a company or a community; but a weapon wielded without a clear goal ends up useless.

At a recent Big Data event, I was very impressed by one Japanese company's application: it collects on-board computer data from its cars and tallies unexpected events such as sudden braking and abrupt lane changes. Through statistical analysis it identifies the locations where drivers commonly brake hard, then observes actual traffic conditions on site and adjusts traffic signals or road rules accordingly. The result is a concrete reduction in the incidence of traffic accidents, a great benefit to the public.

Such a plan must collect data from a large number of vehicle computers to effectively identify all likely accident locations, so the challenge is necessarily large-scale data analysis.

In fact, once Big Data's technical capabilities are understood, good application cases appear in every industry: banks predicting the ever more fickle global financial markets to dispatch investments rapidly; a food company analyzing abnormal weather to adjust crop-planting strategies at farms around the world; even movie studios using Big Data to store an actor's every take, so the most touching clip of each scene can be found and cut quickly.

Problems of massive data analysis that were previously unsolvable can now be helped technically by Big Data; all that remains is to find the commercial application to aim at.

Understanding the five key concepts of NoSQL

Posted by Daniel J Su, [Oct 25, 2016]

The concept of NoSQL databases was suggested as far back as 1998, but the technology did not become mainstream at the time. In recent years, websites built on large volumes of user-contributed content, which keeps growing, have driven demand for distributed databases. Traditional commercial relational databases can keep up with expanding data through database clustering, but the hardware and software investment grows along with it.

To solve the storage and expansion problems of TB- or even PB-scale data, website operators began developing a variety of low-cost, open-source distributed databases; Google's in-house BigTable is one of the best examples. Others such as Amazon and Yahoo have also developed NoSQL databases in recent years, and even Microsoft's Azure cloud platform uses NoSQL technology for data access.

Likewise, social sites such as Facebook, Twitter, and Zynga make extensive use of NoSQL database technology to handle huge volumes of user-interaction data. Facebook, for example, developed the Cassandra database, running on a cluster of more than 600 cores and storing more than 120TB of inbox message data.


In 2009, the open source community revived the term NoSQL as a collective name for distributed, non-relational databases.

In fact, NoSQL covers more than a dozen database systems, and unlike relational databases they do not share a common theoretical foundation. There are, however, a few keys to understanding NoSQL that one must know; master these, and you have a basic understanding of NoSQL databases.

(1) NoSQL is Not Only SQL 

Because SQL is the standard query language of relational databases, the term NoSQL originally denoted database systems that could not be queried with SQL. Most are open-source distributed database systems, but a few are commercial NoSQL systems, such as the table-style storage features on Microsoft's Azure platform.

More recently the open source community has offered a new definition: NoSQL as "Not Only SQL", meaning not just SQL. The idea is to mix relational and NoSQL databases for the best storage results; for example, the front end uses NoSQL technology to store large volumes of user-state data, while other information stays in a relational database to retain the benefits of SQL syntax.

(2) Adding machines automatically expands storage capacity 

Another important feature of NoSQL databases is horizontal scalability: simply add a new server node to continue expanding the capacity of the database system. They can scale out on low-cost commodity computers, unlike relational database clusters, which often require larger servers to deliver the needed performance and capacity. NoSQL databases can therefore be used to build TB- or PB-scale systems at lower cost.

Some NoSQL databases can even expand system capacity online, without downtime and without affecting applications.

For example, Cassandra can add database nodes dynamically: start a new node, and the existing nodes automatically copy data to it and load-balance data access. There is no need for the common manual sharding routine of denormalizing the database, splitting tables, copying data, and repointing application connections.

In simple terms, horizontal scaling means database capacity grows automatically as new server equipment is added, which from a management point of view also reduces the long-term manpower needed to maintain the database.
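One common way distributed stores decide which node holds which key, so that adding a server extends capacity, is hash partitioning. This is a deliberately simplified sketch; production systems such as Cassandra use consistent hashing so that adding a node moves only a small fraction of the keys.

```python
import hashlib

def pick_node(key, nodes):
    # Hash the key and map it deterministically onto one of the nodes.
    digest = hashlib.md5(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

keys = ["user:1", "user:2", "user:3"]

nodes = ["node-1", "node-2"]
placement = {k: pick_node(k, nodes) for k in keys}

# Adding a third server spreads the same key space over more machines.
nodes.append("node-3")
placement_after = {k: pick_node(k, nodes) for k in keys}
```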

(3) Breaking free of the fixed field schema 

Relational database tables must establish relationships between fields through the database schema, which is usually designed in advance; changing fields after going live is very difficult, especially with a huge amount of data. When Twitter wanted to adjust its data fields, simply executing an ALTER TABLE command to change the table definition ran for a week.

NoSQL databases instead adopt a key-value data model to handle huge volumes of data. The key-value model simplifies the data structure to a single key corresponding to a single value; individual records are unrelated to one another, so the data can be partitioned or rearranged freely, and replicas can be distributed across different servers.

Some NoSQL databases add the concept of columns, so that several keys together map to one value. Cassandra, for example, provides a four- or five-layer key-value structure, letting you use three keys to reach one value, say the keys "user account", "profile", and "birthday" to fetch a particular user's date of birth. A column-based design is more flexible than a plain key-value architecture and reduces the difficulty of developing data-access programs.
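The three-key lookup described above can be pictured as nested dictionaries. The key names and values here are invented for illustration, not Cassandra's actual API:

```python
# Hypothetical column-family layout: three keys resolve to one value
users = {
    "user:alice": {                       # key 1: user account
        "profile": {                      # key 2: column family (profile data)
            "birthday": "1990-05-01",     # key 3: column -> value
            "email": "alice@example.com",
        }
    }
}

# Three keys, one value: account -> column family -> column
birthday = users["user:alice"]["profile"]["birthday"]
```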

Because NoSQL databases have no schema, they cannot support standard SQL query syntax. They typically expose a simple API to add, update, or delete database contents; some provide a SQL-like SELECT query mechanism, but usually cannot perform complex JOIN instructions. Google App Engine, for example, provides the GQL language for developers to query data in BigTable.
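The kind of simple add/update/delete API described above can be sketched as a tiny in-process store. The class and method names are hypothetical, chosen only to mirror the typical put/get/delete shape of NoSQL client APIs:

```python
class KeyValueStore:
    """Minimal sketch of the put/get/delete API a NoSQL store exposes."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        # Add or update: one key maps to one value, no schema enforced
        self._data[key] = value

    def get(self, key, default=None):
        # Point lookup by key; no JOINs, no ad-hoc SQL
        return self._data.get(key, default)

    def delete(self, key):
        self._data.pop(key, None)

store = KeyValueStore()
store.put("patient:42", {"name": "Lin", "ward": "B"})
record = store.get("patient:42")
store.delete("patient:42")
gone = store.get("patient:42")  # None after deletion
```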

(4) The data becomes consistent, sooner or later 

To ensure integrity, relational databases use transactions (Transaction) so that data access is not disturbed mid-operation. Database transactions have the ACID properties: each transaction executes as a minimal unit of operation (Atomicity); the whole transaction keeps the database consistent (Consistency); concurrently executing transactions are isolated so that one transaction's data is not affected by others (Isolation); and once committed, a transaction's changes to the data persist (Durability).
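Atomicity and consistency can be seen directly with SQLite's transactions from Python's standard library: a transfer that would violate a business rule is rolled back as a unit, leaving the data untouched. The account names and amounts are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('a', 100), ('b', 0)")
conn.commit()

try:
    with conn:  # one atomic transaction: both updates commit, or neither does
        conn.execute("UPDATE accounts SET balance = balance - 150 WHERE name = 'a'")
        conn.execute("UPDATE accounts SET balance = balance + 150 WHERE name = 'b'")
        # Consistency check: no account may go negative
        low = conn.execute("SELECT MIN(balance) FROM accounts").fetchone()[0]
        if low < 0:
            raise ValueError("overdraft")
except ValueError:
    pass  # the whole transaction was rolled back

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
# balances == {"a": 100, "b": 0}: unchanged, atomicity preserved
```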

But the ACID model makes a database hard to scale out, so most NoSQL databases forgo transactions and are designed instead around a different theory: CAP.

CAP theory has three key properties: data consistency (Consistent), availability (Availability), and partition tolerance (Partition Tolerance). In theory no system can provide all three at once, so NoSQL databases typically pick two to design around, usually CP or AP.

Most NoSQL databases choose a CP design, but their notion of data consistency differs from a relational database's. NoSQL databases adopt eventual consistency (Eventually Consistency, "the data agrees sooner or later"): because the data is replicated across distributed nodes, each node can process its own writes and then synchronize with the others. There is a time gap during synchronization, and reads issued to different nodes during that gap can return inconsistent data.

To preserve their scale-out architecture, NoSQL databases allow this situation and guarantee only that the data will eventually agree. Developers must resolve conflicting or missing data during the synchronization window themselves, or use NoSQL databases only for data with lower accuracy requirements, such as Facebook's Like button: being off by a few likes for a moment is hard for users to notice, which makes it well suited to NoSQL storage. When adopting a NoSQL database, developers must first assess the nature of the data and whether it can tolerate the risk of loss.
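Eventual consistency can be sketched as two replicas that accept writes independently and converge after a sync. This is a deliberately naive simulation: the merge just keeps the larger counter per key, whereas real stores resolve conflicts with timestamps or vector clocks.

```python
# Two replicas of a like-counter accept writes independently, then converge.
node_a = {"likes:post1": 10}
node_b = {"likes:post1": 10}

node_a["likes:post1"] = 11   # this write reaches only node A
node_b["likes:post2"] = 1    # this write reaches only node B

stale = node_b["likes:post1"]  # during the gap, node B still serves 10

def sync(a, b):
    # Naive merge keeping the larger counter per key;
    # real stores use timestamps or vector clocks to resolve conflicts.
    for key in set(a) | set(b):
        value = max(a.get(key, 0), b.get(key, 0))
        a[key] = value
        b[key] = value

sync(node_a, node_b)  # after synchronization, both replicas agree
```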

(5) Immature technology, high-risk version upgrades 

NoSQL databases appeared in recent years to handle the surge of user-contributed data on Web 2.0 and social networking sites. Many are only two or three years old, so their functionality is incomplete, mature and stable releases are scarce, and incompatibilities easily appear during version upgrades.

Moreover, these databases are mostly accessed through APIs; if a new version adds features, parameters or API calls may change, and developers effectively must modify their applications to read the database contents correctly. Even the database's own stored file format can change, so that after upgrading, the new version cannot read old files without a file-format conversion.

When choosing a NoSQL database, one approach is to pick one used by well-known sites, because those sites are usually major contributors to the database: they use it to solve their own problems and therefore improve it actively.

Also consider your team's technical capacity and ability to keep mastering how the technology develops. NoSQL offers a low-cost distributed database whose automatic node expansion saves database maintenance manpower, but you must also bear the risk of technology that is not yet mature.

Quickly recognizing four categories of mainstream NoSQL database

Long before the term NoSQL became popular, a variety of non-relational databases had already appeared. These databases have different characteristics and, unlike relational databases, cannot all be understood through one common set of ideas; each NoSQL database's features and applications must be understood individually.

Four categories of NoSQL database draw particular attention: key-value databases, in-memory databases (In-memory Database), graph databases (Graph Database), and document databases (Document Database).

Type 1: Key-Value databases 

Key-value databases are the largest category of NoSQL database. Their biggest feature is the key-value data architecture, which abandons the field schema (Schema) common in relational databases; each record is independent of the others, which is what gives these systems their distributed, highly scalable character.

Google's BigTable, Hadoop's HBase, Amazon's Dynamo, Cassandra, and Hypertable are all key-value databases.

Google built its own BigTable on the Google File System (GFS), originally for Google's own applications: data for Gmail, Google Reader, Google Maps, YouTube, and others is stored in BigTable. Google now also lets outside developers store data in BigTable through the Google App Engine service.

BigTable behaves like one giant table integrating the data of many machines; a single data table can store PB-scale content. Google App Engine provides the GQL query language, letting developers use SELECT syntax to query the data in BigTable, but unlike SQL, GQL cannot perform cross-table JOIN queries.

Because Google did not release BigTable or its related cloud computing platform, the Hadoop platform later emerged as an open implementation of Google's cloud computing reference architecture, and the HBase distributed database was developed on it. HBase stores the data used by Hadoop MapReduce parallel computation. Like Google's BigTable, it stores data in tables of many rows, each row having a primary key and any number of column fields.

Dynamo, the distributed database Amazon developed, underpins Amazon web services such as the S3 storage service, and likewise uses Key-Value storage to build a distributed, highly available environment. Amazon's shopping cart runs on Dynamo. Dynamo copies data into replicas on many servers, which periodically synchronize with each other. Because Dynamo cannot guarantee that every replica is synchronized instantly, Amazon developed additional conflict-resolution techniques to avoid conflicting and lost data and to ensure consistency.

Within the Key-Value category, one recently popular NoSQL database is Cassandra. Facebook released it in 2008 as a Java-based distributed database, and uses it to store up to 120TB of on-site inbox data. The Apache Foundation took over maintenance in March 2009, and Cassandra is now a top-level Apache project under focused development.

Unlike HBase's master-slave distributed architecture, every node in a Cassandra cluster is equal; there is no master-slave relationship. A distributed Cassandra database therefore needs as few as two server nodes, and the two nodes play almost exactly the same role; you only need to specify the IP addresses of the peer nodes in the configuration file. Once started, the nodes replicate data to each other, store it in distributed fashion, and balance the database access load.
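The masterless placement just described can be illustrated with a hash ring: every node owns a segment of the ring, and any node can locate a key's owners without consulting a master. This is only a sketch of the general technique (the class, node addresses and replica count are invented for the demo, not Cassandra's actual implementation):

```python
import bisect
import hashlib

def _h(s):
    # Map a string onto the ring's numeric key space.
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class HashRing:
    """Masterless data placement: nodes sit on a hash ring, and a key's
    replicas are the next N nodes clockwise from the key's position."""

    def __init__(self, nodes, replicas=2):
        self.replicas = replicas
        self.ring = sorted((_h(n), n) for n in nodes)

    def owners(self, key):
        positions = [pos for pos, _ in self.ring]
        i = bisect.bisect(positions, _h(key)) % len(self.ring)
        return [self.ring[(i + j) % len(self.ring)][1]
                for j in range(self.replicas)]

ring = HashRing(["10.0.0.1", "10.0.0.2"])
print(ring.owners("user:1001"))  # both nodes hold a copy of this key
```

Because every node computes the same ring, the cluster needs no coordinator, which matches the article's point that two identical nodes are enough to start.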

Type 2: In-memory databases, the caching tool of choice for well-known websites

In-memory databases (In-memory Database) are NoSQL databases that keep their data in memory; they include Memcached, Redis, Velocity, Tuple space and others. Memcached and Redis are in fact also Key-Value databases, but they hold the data in memory to improve read efficiency. They are most commonly used to cache web pages, speeding up page delivery and reducing hard-disk reads, though the data cannot survive a system shutdown.
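The typical page-caching usage is the cache-aside pattern: serve from memory when possible, fall back to the slow path otherwise. The sketch below uses a plain dict and an invented render function as stand-ins for Memcached and the real page pipeline; a production setup would use a Memcached client library instead:

```python
import time

page_cache = {}   # stand-in for Memcached; contents vanish on restart
CACHE_TTL = 60    # seconds; illustrative value

def render_page_from_db(page_id):
    # Placeholder for the slow path (database read + template rendering).
    return f"<html>page {page_id}</html>"

def get_page(page_id):
    """Cache-aside: return a fresh cached copy if one exists, otherwise
    do the expensive render and remember the result."""
    hit = page_cache.get(page_id)
    if hit is not None and time.time() - hit[1] < CACHE_TTL:
        return hit[0]                       # fast path: no disk read
    page = render_page_from_db(page_id)
    page_cache[page_id] = (page, time.time())
    return page
```

The first request for a page pays the full cost; every request within the TTL is served straight from memory, which is exactly the hard-disk-read reduction the article describes.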

Memcached appeared in 2003 and has become an important tool for improving page-serving efficiency at many well-known websites, including YouTube, Facebook, Zynga and Twitter. Google App Engine's application hosting service also offers Memcached.

FarmVille, one of the most popular games on Facebook, uses Memcached to keep gameplay smooth. Up to a million users log in to FarmVille every day. So that players never wait on reads and writes during play, FarmVille uses a two-layer architecture: user data is passed through Memcached on the site, then written in whole batches to the back-end MySQL database, which stores it on hard disk. The risk of this architecture is that if the system crashes, an entire batch of data still in memory is lost.
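That two-layer, write-behind pattern can be sketched as follows. The in-process queue stands in for Memcached and the list for the MySQL table; all names are illustrative, and the sketch deliberately reproduces the risk the article mentions, since anything still queued when the process dies is lost:

```python
import queue

write_queue = queue.Queue()   # in-memory buffer standing in for Memcached

def record_action(user_id, action):
    # Fast path: append in memory only; the player never waits on disk.
    write_queue.put((user_id, action))

def flush_batch(db_rows):
    """Periodically drain the whole buffer into the backing store
    (MySQL in the article). A crash before a flush loses the batch."""
    while not write_queue.empty():
        db_rows.append(write_queue.get())

rows = []                      # stand-in for the MySQL table
record_action(1, "plant_corn")
record_action(1, "harvest")
flush_batch(rows)
print(len(rows))  # -> 2
```

The design trades durability for latency: reads and writes during play touch only memory, and the disk sees one batched write per flush interval.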

Besides the veteran Memcached, a new open-source in-memory database, Redis, appeared in 2009. Beyond providing a distributed cache, the biggest difference between Redis and Memcached is that Redis offers data structures that automatically keep stored data sorted, so developers can retrieve data already in order.

Redis gained VMware's sponsorship in March of this year. Version 2.0, just released in September, adds designs such as virtual memory, letting developers keep more data than the physical memory capacity would otherwise allow. The US classified-ads site Craigslist and the code-hosting site GitHub both use Redis to speed up data access.
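The automatic sorting Redis is known for comes from its sorted-set structure: each member carries a score, and range queries return members in score order. The snippet below emulates that behavior with a plain dict rather than a real Redis client, just to show the semantics (the function names mimic Redis's ZADD/ZREVRANGE commands):

```python
scores = {}   # member -> score, emulating one Redis sorted set

def zadd(member, score):
    # Adding or updating a member; ordering is implicit in the scores.
    scores[member] = score

def zrevrange(start, stop):
    # Members ordered by score, highest first (stop is inclusive,
    # matching Redis's ZREVRANGE convention).
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered[start:stop + 1]

zadd("alice", 310)
zadd("bob", 120)
zadd("carol", 540)
print(zrevrange(0, 1))  # -> ['carol', 'alice']
```

This is what lets an application ask Redis directly for, say, a top-10 leaderboard, instead of fetching everything and sorting it itself as it would with Memcached.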

Type 3: Document databases, for storing unstructured data

Document databases are used primarily to store unstructured documents, and the most common unstructured data is the HTML page. An HTML page lacks the fixed fields of an ordinary form, where every field has a specific data type and size. A page has Head and Body structures, the Body may contain ten paragraphs, and the paragraphs contain text, links, images and so on. The data structure of a document database is therefore usually a loose tree.
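The loose-tree idea is easiest to see as data. In the sketch below (contents invented for the demo) two "documents" with completely different shapes live in the same store, something a fixed relational schema would not allow:

```python
import json

# One document: a loose tree. Fields can nest, repeat, or be missing
# entirely from other documents in the same store.
page_doc = {
    "_id": "homepage",
    "head": {"title": "Orange Mobile News"},
    "body": {
        "paragraphs": [
            {"text": "Welcome", "links": ["/about"]},
            {"text": "Latest posts", "images": ["banner.png"]},
        ]
    },
}

# A second document with a different shape -- no "head" at all.
other_doc = {"_id": "contact", "body": {"paragraphs": []}}

store = {d["_id"]: d for d in (page_doc, other_doc)}
print(json.dumps(store["homepage"]["head"]))
```

Real document databases such as CouchDB and MongoDB store essentially this kind of tree (as JSON or BSON) and add indexing and querying on top.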

Many document databases are commercial systems; the concept traces back to the way IBM's Lotus Notes stores documents, and XML databases are document databases too. Common open-source document databases include CouchDB, MongoDB and Riak.

As the need to store and index web content has grown sharply, the CouchDB and MongoDB document databases have drawn more and more attention. CouchDB appeared in 2005 and has just released version 1.0; it is also one of the top-level projects maintained by the Apache Foundation. CouchDB provides a RESTful API, so applications can access the database over HTTP, and JavaScript can be used as its query language.

MongoDB appeared in 2009 and soon released a stable version, 1.6.1, which can store both UTF-8 and non-UTF-8 documents, unlike CouchDB, which stores UTF-8 files only. MongoDB can also use JavaScript in query instructions. It takes a Master-Slave architecture: one Master server coordinates multiple data servers, with data replicated between servers for fault tolerance.

Type 4: Graph databases, which can record social relationships

The last category is the graph database. This is not a database for image processing; it stores data as graph structures describing the relationships between nodes, for example a tree to organize a reporting hierarchy or a mesh to record friendships. Geographic map systems typically use a graph database to store the relationship between each point on the map and its neighbors, or to compute the shortest distance between points; the same idea can compute the shortest "distance" between two people in a social network. The graph database's biggest strength is how it scales with relationship complexity: the more complex the data relationships, the better suited a graph database is.

There is no single standard for this kind of data, but a basic graph consists of three structures: nodes (Node), relations (Relation) and properties (Property). For example, a node could represent a Facebook account, a relation a friendship, and properties the account's personal details; a network graph can then show the friendship status between Facebook users. Common graph databases include Neo4j, InfoGrid and AllegroGraph.
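The "shortest distance between people" computation the article mentions is a straightforward breadth-first search over a friendship graph. This sketch uses an in-memory adjacency map with invented accounts, not a real graph database such as Neo4j, but the traversal is the same idea:

```python
from collections import deque

# Nodes are accounts, relations are friendships (properties omitted).
friends = {
    "alice": {"bob"},
    "bob": {"alice", "carol"},
    "carol": {"bob", "dave"},
    "dave": {"carol"},
}

def degrees_of_separation(graph, start, goal):
    """Breadth-first search: the number of friendship hops between two
    accounts, or None if they are not connected."""
    seen, q = {start}, deque([(start, 0)])
    while q:
        node, dist = q.popleft()
        if node == goal:
            return dist
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                q.append((nxt, dist + 1))
    return None

print(degrees_of_separation(friends, "alice", "dave"))  # -> 3
```

A dedicated graph database keeps exactly this kind of traversal fast even when the relationship mesh grows to millions of nodes, which is the scaling property the article highlights.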

These four categories are a streamlined way of distinguishing NoSQL databases, useful for quickly grasping their characteristics and differences. Wikipedia, by contrast, divides NoSQL into about ten categories from an application viewpoint, mostly by splitting the Key-Value type into sub-categories by application; that classification gives a fuller picture of NoSQL database features.

Is Big Data a hype?


Posted by Daniel J Su, [Oct-13th, 2016]

Looking at the trends, cloud computing can be called the great transformation of computing architecture, and big data the great leap forward of information technology. Computing and data are the two dimensions of IT architecture.

About five or six years ago cloud computing was itself dismissed as hype; yet every vendor talked about it enthusiastically, and now it seems no vendor can do without it.

A provider of web mailboxes claimed to be a long-time practitioner of the cloud model; an e-commerce company became a "cloud stock"; even a convenience-store pickup service was simply called the "cloud convenience store". Without the word "cloud" attached, a product seemed obsolete in this era.

Today, five years later, Big Data has taken its place on stage. Anything touching data (databases, data warehousing, storage systems, even file transfer) is now marketed as big data.

The situation is identical to cloud computing five years ago. Is big data, too, mostly hype?

The Economist Intelligence Unit, the research arm of The Economist, recently published a survey of executives on big data, titled "hype and hope". The results: most executives agree that big data helps their companies, even helping to improve revenue, but actual enterprise investment in big data falls far short of those expectations.

According to the survey, more than 90 percent of executives agree that big data helps them understand customers and can thus further lift revenue. Of these, nearly half (45 percent) have even higher expectations: that the revenue lift will exceed 25 percent.

In addition, over 70 percent of executives endorse the view that big data can improve enterprise productivity, profitability and innovation capability. Yet enterprises are adopting big data slowly, out of step with those high expectations: nearly 58 percent of large enterprises have made no concrete progress in the area.

Given how enthusiastically vendors talk up big data compared with the slow pace of enterprise adoption, some of the current buzz must be hype. Even so, executives' high regard for big data cannot be ignored.

Many obstacles stand in the way of big data, but most are internal problems that already existed, such as poor communication between departments and departmental self-interest. It will clearly take a long time for big data to land in most enterprises.

Technology, however, develops like a rocket taking off: it only flies higher. Cloud computing, once thought to be deliberate hype, has proven its value in the course of technical development; those who treated it as hype have been left aside, while those who worked on it carefully are gradually showing results.

Perhaps big data is over-hyped today, but technology does not wait. Big data is bound to follow the same trend as cloud computing and prove its worth in the next stage of technical development.

In fact, some advanced applications already demonstrate the value of big data. U-Air, the air-pollution forecasting system developed by Microsoft Research Asia, is a good example. The system can predict the air quality of any corner of a city in real time with an accuracy above 80 percent.

Air quality is usually forecast by analyzing the historical data of monitoring stations, but a station only reflects the air quality of its immediate area, while a city's traffic, construction and crowds greatly affect air quality. Even near a monitoring station, air quality can differ markedly with traffic flow. Forecasts based only on historical station data therefore always diverge greatly from reality, with accuracy below 60 percent.

The problem is not that meteorologists ignore the other factors affecting air quality, but that data-analysis techniques could not easily handle large amounts of heterogeneous data. Microsoft Research Asia's breakthrough was to use big data and machine learning techniques to analyze the relationships among heterogeneous data: historical weather data, traffic, crowd movement, urban locations (such as railway stations, buildings, hotels, parking lots and parks) and road construction, in order to find an accurate predictive model.

On the other hand, thanks to big data processing methods, the U-Air system can analyze the large volume of data for an entire city within five minutes, so it can provide real-time air-quality forecasts, making the technology practical.

Without big data techniques, the current bottleneck in air-pollution monitoring might never have been broken. Similar examples of big data surpassing past practice are appearing around the world.

Looking at the trends, cloud computing is the great transformation of computing architecture, and big data the great leap forward of information technology. Computing and data are the two dimensions of IT architecture; when cloud and big data are both in place, do they not proclaim a new IT era?

Hitachi Data Systems: CEO-led leadership is key to successful Big Data implementation

Posted by Daniel J Su, [Sep-22nd , 2016]


Because Big Data is associated with data processing and data analysis, it is easily treated as a purely technical issue. But Neville Vincent of Hitachi Data Systems (HDS) reminds CEOs not to lump big data in with other information technologies; doing so dooms it to fail.

Neville Vincent argues that today's CEO should definitely embrace big data, for three reasons. The first concerns the company's information assets. CEOs know that employees are their most important asset, but may not realize that data, which is everywhere, is also an important asset, second only to employees as the most important business asset.


Secondly, he believes that using this data asset helps improve revenue, profitability and productivity. Finally, when a company treats data as an asset, data that was originally scattered everywhere gradually comes under centralized management, and information-silo problems are resolved. Bringing the data into a single pool removes the barriers to inter-departmental information flow and improves the efficiency of cross-department collaboration and communication.


Although Neville Vincent's company, HDS, sells big-data-related technology and naturally wants CEOs' attention, his observation is that big data is not a passing craze: some companies have realized that if they do not act on big data now, competitors may overtake them within five years.


In Australian retail, the top two supermarket chains, Woolworths and Coles, both made big-data moves this year. Woolworths, the largest, not only expanded its customer analytics but also, unexpectedly, invested 20 million Australian dollars for a half stake in Quantium, Australia's largest data-analysis firm. Technology is one motive; Woolworths also plans to sell the insights from its data analysis to other companies in future.


CEOs today face pressure from many directions, including market competition and shareholder demands to improve profitability. When Neville Vincent asks CEOs how they obtain the information to cope with that pressure, the universal answer is newspapers, websites and the like. The CEO, he notes, is doing the data analysis personally; but the human brain, however powerful, lacks the computer's capacity to scale. If a CEO knew that analyzing rivals' social networks could surface the key improvements for lifting productivity and profit, he would adopt big data applications quickly.


However, a CEO who understands the importance of big data cannot simply hand it over to the IT department. Neville Vincent points out that people often ask which function should start big data: marketing, sales or R&D? Should the CIO or the CMO lead it? His answer always points at the chief executive.


Because setting company goals has long been the CEO's job, Neville Vincent says, a successful big-data effort must first determine the objectives to be achieved, and then use the relevant technology and professionals, such as data scientists, to carry out the big-data implementation that serves the goals the CEO has set.


If the CEO does not lead, big data appears piecemeal as marketing, sales and other departments each import the technology to solve their own problems; the company ends up with no shortage of small big-data projects but never realizes the overall value. Neville Vincent believes the CEO should prevent these siloed projects, focus resources on the business goals, and provide a common data platform so every department can use the data flexibly and collaborate more efficiently.

Of course, the technical problems of big data are still the responsibility of the professional IT department. Neville Vincent's point is that the CIO should be involved early, while the business plan is still brewing, rather than being handed a big-data mandate to build only after the decisions have been made.

Big data must be approached from a business viewpoint and become part of the company's business processes; only then can it succeed.

Demystifying Amazon Web Services

Posted by Daniel J Su, [2016]

Inventec plans cloud computing focus for 2014-2016

Posted by Daniel J Su, [18th, April 2014]
Inventec is planning to shift its focus to the cloud computing and solar industries for 2014-2016, according to its chairman, Richard Lee.

As most ODMs gradually lose their notebook orders to the top two makers, Inventec has decided to focus on non-notebook businesses for revenue growth.

For cloud computing, the company's server business will contribute about 25% of total revenues in 2013 and should achieve over 10% on-year growth in 2014. Handheld devices such as smartphones currently contribute about 10% of revenues, and that business is expected to grow 30% on-year in 2014.

In 2014, Inventec will push its notebook, all-in-one PC and workstation shipments to achieve 10% on-year growth.

Large-scale distributed storage

Posted by Daniel J Su, [14, March 2014]

The goal of distributed storage is to use the storage resources of multiple servers to meet requirements that a single server cannot. Distributed storage abstracts and unifies the management of storage resources, while ensuring that data read and write operations meet requirements for safety, reliability and performance.

Over the past few decades, as network technology developed, more and more network applications, such as search engines and video sites, have needed to store huge amounts of data. These needs gave birth to some excellent large-scale distributed storage technologies, such as the distributed file system.

A distributed file system lets users access a remote server's file system as if it were local, with data stored across multiple remote servers. Distributed file systems generally have redundant backup and fault-tolerance mechanisms to ensure that data is read and written correctly. Cloud storage services are built on distributed file systems, configured and improved for the characteristics of cloud storage. Several distributed file systems and cloud storage services are introduced below.

Frangipani is a scalable, high-performance distributed file system with a two-tier service architecture: the bottom tier is a distributed storage service that automatically manages highly scalable, highly available virtual disks; on top of it runs a distributed file system. JetFile is a distributed file system based on P2P broadcast technology that supports file sharing across heterogeneous network environments. Ceph is a high-performance, reliable distributed file system that separates metadata management from data as much as possible, so as to obtain maximum I/O performance.

GFS (Google File System) is a scalable distributed file system designed by Google. Starting from conventional distributed file system design criteria, its engineers identified several requirements that differ from tradition. First, PC servers are prone to failure: nodes fail for many reasons, including the machine itself, the network, administrator error and the external environment, so the system must monitor all its nodes, detect errors, and provide fault-tolerance and fault-recovery mechanisms.

Second, in a cloud computing environment, huge amounts of structured data are stored in very large files, so the usual design guideline of optimizing for small and medium files (KB or MB scale) must change to suit GB-scale large-file access. Third, the vast majority of writes are appends, adding data at the end of a file (overwriting data inside a file rarely happens), and once written the data is usually read sequentially and not modified; designing the system to optimize for appends therefore greatly improves its performance.

Fourth, the design should offer an open, standardized interface that hides the file system's underlying load balancing and replication details, so that a large number of upper-layer systems can use it easily. GFS can therefore effectively support applications that process massive amounts of data. Figure 4.7 shows the GFS system architecture.
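The append-optimized access pattern GFS is built around (points two and three above) can be sketched with an ordinary local file standing in for a GFS chunk. The class, file name and record format here are invented for the demo; real GFS adds chunking, replication and a master, but the write discipline is the same, records are only ever added at the end and read back in order:

```python
import os
import tempfile

class AppendOnlyLog:
    """Append-only file: records are added at the end and never
    rewritten in place, matching GFS's dominant workload."""

    def __init__(self, path):
        self.path = path
        open(self.path, "a").close()   # ensure the file exists

    def append(self, record):
        with open(self.path, "a") as f:
            f.write(record + "\n")     # append is the only write operation

    def scan(self):
        # Sequential read of everything written so far, in order.
        with open(self.path) as f:
            return [line.rstrip("\n") for line in f]

path = os.path.join(tempfile.mkdtemp(), "chunk-0001.log")
log = AppendOnlyLog(path)
log.append("event-a")
log.append("event-b")
print(log.scan())  # -> ['event-a', 'event-b']
```

Because appends never touch existing bytes, concurrent writers need far less coordination than random in-place updates would, which is precisely why optimizing for this case pays off at GFS's scale.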

Posted by Chien-Chung Chang, [28th, Feb 2014]

Substantial growth in stored data and computation demands higher computing performance

Posted by Daniel J Su, [21st, Jan 2014]

Thanks to the growth of storage technology, PC storage capacity is now measured in GB or TB, and some companies may even hold thousands of TB on disk, data volumes at the PB level. As data volumes grow so dramatically, the methods for analyzing the data must also change from those of the past.

Traditional analysis was usually performed periodically, with a specific purpose or direction, so large, complex data processing was only an occasional need. In today's era of massive data, we must analyze data in real time. At the same time, the data no longer comes only from traditional computers, but also from smartphones and all kinds of sensing devices: a car computer's GPS position, lane data gathered while driving, or road imaging systems. The data sources a system can collect therefore differ greatly from the past and often contain large amounts of unstructured content, whose processing clearly differs from traditional practice (such as databases and data warehousing).

To cope with such large computations, today's processors improve performance with a growing range of techniques, the most obvious being built-in multi-core designs.

In the past a processor had only a single core that executed all operations. In 2005 Intel launched its first dual-core Xeon processor, pushing processors from single-core to multi-core performance gains, much like adding manpower.

The original single-core architecture could perform only one operation at a time, starting the next only when the previous one finished, so the only way to accelerate the processor was to raise the clock speed. Under the same clock, however, if the processor can perform several operations simultaneously, performance can grow significantly.

Intel's new generation of server-class processors, the Xeon E5 series, has up to eight cores; combined with Intel's own Hyper-Threading technology, the operating system sees up to 16 cores in a single-socket server, and a 2-way server can have 32 computing cores.
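Software only benefits from those cores if work is split into pieces that can run simultaneously. The sketch below shows the idea by partitioning one large sum across a pool of workers; it uses threads for portability (CPU-bound Python code would use a process pool to actually occupy multiple cores), and the function names and chunking are illustrative:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    # One worker's share of the computation.
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_squares(n, workers=None):
    """Split one big computation into per-worker chunks, the way a
    multi-core processor runs several operations in the same cycle."""
    workers = workers or os.cpu_count() or 4   # logical cores the OS sees
    step = max(1, n // workers)
    chunks = [(lo, min(lo + step, n)) for lo in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum_squares(100_000) == sum(i * i for i in range(100_000)))  # -> True
```

Note that `os.cpu_count()` reports logical cores, so on an 8-core Hyper-Threaded Xeon it returns 16, matching the article's point that the operating system sees the doubled count.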

The processors' clock speeds are generally 2GHz to 3GHz; from a top base clock of 3.3GHz, Turbo Boost technology can automatically overclock up to 3.5GHz, speeding up clock-bound computation on the computer or server. The same technology not only raises the clock automatically but can also shut down idle processor cores until computing demand returns, reducing the processor's power consumption.

Moreover, even as the processor packs in more and more cores without lowering the clock, system power consumption has not risen, mainly thanks to improvements in the processors' manufacturing process.

With current technology, the Xeon series has evolved comprehensively from a 45-nanometer process to 32nm, and these processors' thermal design power ranges from just 60 watts up to 135 watts. Compared with past processors, which had fewer cores yet similar or even higher thermal design power, current processors deliver better computational efficiency while reducing overall power consumption considerably. In addition, the new generation continues to raise the speed and bandwidth of connections to peripheral devices, bringing performance improvements at another level.

By increasing the core count, adjusting each core's operating state automatically to conditions so that a single processor achieves what once required several, and adding clock and process improvements, today's processors are sufficient to face this growth in computing demand without increasing power consumption.

Posted by Chien-Chung Chang, [16, Dec 2013]

Open Data Center Alliance cloud development maturity model

Posted by Daniel J Su, [22nd, Nov 2013]

The Open Data Center Alliance (ODCA) expects that by about 2013 users and application developers will begin to shift from pure SaaS software services to composite services; by 2015 hybrid usage will begin, eventually becoming a federated operating model combining public and private clouds.

Higher processor clocks and more memory channels improve system memory access

Posted by Daniel J Su, [20th, Oct 2013]

Under the impact of Big Data and the trend toward cloud computing applications, many applications adopt in-memory practices, including In-Memory Databases and In-Memory Analytics, so demands on system memory speed and capacity keep rising.

Intel's next-generation processors have four built-in memory channels, up from the previous three. More memory channels mean more parallel pipelines between the processor and memory that can transmit data at the same time.

The latest Intel Xeon series processors support DDR3 memory clocks of 800, 1066, 1333 and 1600MHz. At the top clock of 1600MHz, memory bandwidth works out to roughly 12.8GB/s per channel; with four-channel support, four data-transfer pipelines run between processor and memory. This combination of wide bandwidth and multiple channels more effectively unlocks the potential of multi-core processors.
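The bandwidth figure follows from standard DDR3 arithmetic: transfers per second times the 8-byte (64-bit) width of one channel, times the channel count. A quick check:

```python
# Back-of-envelope DDR3 bandwidth, matching the figures in the text.
transfers_per_sec = 1600e6    # DDR3-1600: 1.6 billion transfers/s
bytes_per_transfer = 8        # one 64-bit channel moves 8 bytes per transfer
channels = 4                  # quad-channel Xeon E5

per_channel_gb = transfers_per_sec * bytes_per_transfer / 1e9
total_gb = per_channel_gb * channels
print(per_channel_gb, total_gb)  # -> 12.8 GB/s per channel, 51.2 GB/s total
```

So the step from three channels to four raises aggregate bandwidth from about 38.4GB/s to 51.2GB/s, which is what keeps many cores fed simultaneously.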

On the other hand, the memory capacity in support, the new generation of Xeon E5 series processors support memory capacity up to 768GB, is even larger than the previous generation support maximum 288GB Xeon E7 series processors even supports up to 2TB, which is the huge amount of data processing or the application of a large number of virtual machines are quite favorable

Posted by Chien-Chung Chang, [9th, July 2013]

Cloud computing and Big Data drive the evolution of computing technology

Posted by Daniel J Su, [20, June 2013]

Intel presented its coping strategies for three major themes, the cloud, Big Data and security, and put forward a cloud vision to be completed by 2015 that combines automation, federated operation and client awareness.

In recent years the rise of cloud computing (Cloud Computing), together with the huge data volumes (Big Data) generated by hand-held mobile devices and embedded systems, has been associated mostly with storage and network hardware makers and data-analysis software vendors; but in fact every computer, phone, server or storage device also needs enough execution performance to process that data concurrently.

As a processor manufacturer, Intel responded to these two trends at Intel Cloud Summit 2012, held in Bangkok, Thailand in August this year, presenting its views and using past statistics to analyze the bottlenecks future data centers will face.

Massive growth: facing PB-level computing

Jason Fedder, general manager of Intel's data center business for Asia-Pacific and China, said that worldwide there are more than a billion devices, including computers, mobile phones and tablet PCs; as these devices multiply, they bring explosive growth in data applications, such as sharp increases in network traffic and substantial growth in storage demand.

According to the company's statistics and forecasts, by 2016 processor demand, whether for network and storage devices, workstations or high-performance computing environments, will be double that of 2011, with higher requirements for stability and performance.

Car maker BMW is an example: in 2010 it managed its employees' computers, but by 2012 it faced connections from a million or even ten million vehicles' on-board computers, each with a variety of built-in sensors collecting data throughout the drive. The computing power and energy demands on the back-end data centers that analyze this data pose huge challenges.

Posted by Daniel J Su, [16, May 2013]

Posted by Daniel J Su, [5, Apr 2013]

Intel 2015 cloud vision

Posted by Daniel J Su, [25th, March 2013]

At Intel Cloud Summit 2012, Intel described its 2015 cloud vision, built on three major elements: automation (Automated), client awareness (Client Aware) and federation (Federated).

Jason Fedder said Intel's vision for cloud computing in 2015 includes automation (Automated) to reduce the IT staff's management burden, client awareness (Client Aware) to automatically recognize the client device, and federated operation (Federated) across public and private clouds.

Client awareness automatically identifies the user's device and data type, including phones, tablets, laptops and car computers. Federated operation compiles data first in the private cloud and then publishes it to the public cloud for access. Automation lets IT staff focus on innovative services instead of spending effort managing systems.

Posted by Daniel J Su, [27, Feb 2013]

Posted by Daniel J Su, [14, Jan 2013]

Posted by Daniel J Su, [22, Nov 2012]

Fujitsu develops a new 16-core processor with twice the performance of the K Computer supercomputer's processor

Posted by Daniel J Su, [20th, Oct 2012]

Fujitsu has developed the "SPARC64 IXfx", a microprocessor that packages 16 CPU cores on a single chip. The company plans to use it in the "PRIMEHPC FX10" high-performance computer (HPC), scheduled for launch in January 2012.

The new processor is the successor to the "SPARC64 VIIIfx" adopted in Fujitsu's "K Computer" supercomputer.

The K Computer ranks first in the world in the "TOP500" supercomputer ranking.

The new product does not change the microarchitecture; it simply doubles the number of cores from 8 to 16, while the operating frequency drops from approximately 2GHz to about 1.85GHz. Computing performance is therefore a little less than twice that of the VIIIfx, but performance per watt is improved.
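The "less than 2 times" figure follows from simple arithmetic: peak throughput scales at best with cores times clock frequency, so doubling the cores while lowering the clock gives:

```python
# Rough peak-throughput check for the IXfx vs. VIIIfx claim. Performance scales
# with cores x clock only in the ideal case, so this is an upper bound.
old_cores, old_ghz = 8, 2.00    # SPARC64 VIIIfx (approximate clock)
new_cores, new_ghz = 16, 1.85   # SPARC64 IXfx

speedup = (new_cores * new_ghz) / (old_cores * old_ghz)
print(round(speedup, 2))  # 1.85, a bit under the 2x from core doubling alone
```

Real application speedup would be lower still, since memory bandwidth and parallel efficiency rarely scale perfectly with core count.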

At the same time, the capacity of the secondary (L2) cache is doubled and its control circuit improved, because "as the number of cores increases, the L2 cache capacity available to each core would otherwise be reduced."

There are other differences from the original. The manufacturing process shrinks from 45nm to 40nm, and Fujitsu has shifted from in-house manufacturing to outsourcing production to Taiwan Semiconductor Manufacturing Company (TSMC); building a new in-house 40nm production line would have required an investment of around 1 trillion yen.

Fujitsu is not alone in pushing multicore processor technology: other vendors will also mass-produce processors with well over 10 cores in 2012.

For example, IBM is scheduled to supply 18-core products, and Intel products with more than 50 cores.

The companies' products, Fujitsu's included, share many similarities, such as support for SIMD* vector instructions. The biggest difference is system structure: IBM and Intel use SMP*, as in UNIX servers, while Fujitsu retains MPP*.

Fujitsu uses MPP because its supercomputer "is positioned as an extension of PC cluster* products," according to Hiroki Ito of Fujitsu's next-generation technical computing development division. By building the HPC system on the same MPP basis as a PC cluster, "PC cluster users can easily carry their software assets over to HPC" (Ito). It can be described as a low-cost strategy for winning over users who have outgrown the performance limits of PC clusters.

* PC cluster = combining multiple PCs to obtain HPC-class computing capacity. However, it is not easy to improve its effective performance.

Moreover, Fujitsu moves even closer to the PC: the file system is a cluster file system commonly used in PC clusters. Because it can share data across the system as a whole, it compensates for MPP's shortcoming of dispersed data.

Added value focuses on the processor once again

As many-core technology progresses, the processor will again become important.

As processors became commoditized, the added value in HPC and server systems shifted from the individual processor to systems integration (SI). As a result, many manufacturers gave up processor development.

But in the future, integrating multiple CPU cores on a chip means that the role of SI will also be "integrated" into the chip. Many-core processor vendors are even considering a "data center on a chip," performing data-center-style processing on a single die. "If you give up self-developed processors, it means exiting the PC business."

At the Fujitsu Forum held in Japan, Fujitsu released what was then the world's fastest supercomputer processor, the SPARC64 VIIIfx (codename: Venus). Built on a 45nm process with the SPARC architecture, it doubles the core count of the previous-generation SPARC64 VII from 4 to 8 and adds a built-in memory controller, in a very large package. Its floating-point capacity of 128 GFLOPS is three times that of the SPARC64 VII introduced the year before.

With the SPARC64 VIIIfx's switch to 45nm (from the previous 65nm), Fujitsu finally reclaims from Intel and IBM the title of fastest supercomputer processor manufacturer. Fujitsu has yet to reveal the new processor's time-to-market and price.

The processor is reportedly to be used in R&D for a next-generation supercomputer, with potential applications in medical, meteorological, environmental, space and other scientific research, positioning Fujitsu for a confrontation with Intel in the supercomputer market.

* SIMD (Single Instruction, Multiple Data) = an instruction style in which one instruction processes a large amount of data at once.

* SMP (Symmetric Multiprocessing) = a system configuration in which multiple processors share the same memory and data, reducing inter-processor data exchange. Also known as shared-memory multiprocessing.

* MPP (Massively Parallel Processing) = a supercomputer configuration in which each processor keeps its own data.
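The SIMD idea defined above can be illustrated in Python with NumPy, whose vectorized operations are dispatched to SIMD hardware instructions where available (an illustrative sketch, unrelated to Fujitsu's actual instruction set):

```python
import numpy as np

# Scalar style: conceptually one instruction per element.
a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
scalar_sum = [x + y for x, y in zip(a, b)]

# SIMD style: one vectorized operation applied to all elements at once.
va = np.array(a)
vb = np.array(b)
vector_sum = va + vb  # NumPy dispatches this to SIMD hardware where available

print(scalar_sum)           # [11.0, 22.0, 33.0, 44.0]
print(vector_sum.tolist())  # [11.0, 22.0, 33.0, 44.0]
```

Both forms compute the same result; the difference is that the vectorized form expresses the whole operation as one data-parallel step, which is what SIMD hardware accelerates.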


Facebook's first data center outside the US powered by ABB

Posted by Daniel J Su, [12, Oct 2012]


ABB, the leading power and automation technology group, has won an order worth about $11 million from Facebook Inc.'s subsidiary Pinnacle Sweden, to power its first server facility outside the United States.

ABB will build two high- and medium-voltage air- and gas-insulated switchgear substations that will supply power to a data center being built in Luleå, a coastal town in northern Sweden. The data center will be the largest of its kind in Europe.

"The substations are designed to handle the high electricity demand of such facilities" said Peter Leupp, head of ABB's Power Systems division. "They will provide reliable and quality power supply to the server buildings."

The construction of Facebook's data center will be carried out in three phases.

The facility will consist of three server buildings with a total area of 84,000 square meters, equivalent to 11 full-sized soccer fields. The first building is scheduled to become operational in December 2012, and will have a substantial need for electrical energy to power and cool its servers.

Located near the Arctic Circle, Luleå's cold climate is well suited for the natural cooling of server buildings. This, along with a stable supply of clean energy from renewable sources as well as reliable communications and electricity networks, paved the way for the choice of Luleå as a location for its new center, making it a European node for Facebook's data traffic.

ABB will also install substation automation systems compliant with the global IEC 61850 standard and equipped with the latest protection and control products. The installed capacity of the substations will exceed the city's normal consumption on cold winter days.

ABB is the world's leading supplier of turnkey air-insulated, gas-insulated and hybrid substations with voltage levels up to 1,100 kilovolts (kV). These substations facilitate the efficient and reliable transmission and distribution of electricity with minimum environmental impact, serving utility, industry and commercial customers as well as sectors like railways, urban transportation and renewables.



"World's most energy-efficient" data center uses power distribution units (PDUs) provided by ABB


Posted by Daniel J Su, [19, Sep 2012]


The data center, owned by Finnish IT service company Academica, also uses power distribution units (PDUs) from ABB. These units enable effective management of the servers' energy consumption and improve the reliability of electrical distribution in the center.

Data centers worldwide used 80 terawatt-hours of electricity in 2010, according to the US Environmental Protection Agency, equivalent to 1½ times the annual power consumption of New York City. A data center on average uses 100 times more power than an office building of the same size, making energy efficiency an important factor in data center profitability.

"ABB's PDU solutions help to more effectively manage the data center's capacity, and the energy consumption of the servers and of the cooling systems," said Academica's Vanninen.

The ABB power distribution units report when they require maintenance, which helps to reduce the number of interventions made. They can also be kept in operation while maintenance is performed, a feature that significantly reduces operational costs. Energy consumption is monitored in real time so that this can also be optimized.
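The real-time monitoring described here can be sketched in a few lines; the thresholds and class names below are invented for illustration, and real ABB PDUs expose vendor-specific interfaces:

```python
# Minimal sketch of real-time PDU energy monitoring with a simple over-limit
# alert. Names and thresholds are hypothetical, not an ABB API.
from dataclasses import dataclass, field

@dataclass
class PduMonitor:
    power_limit_w: float              # alert when draw exceeds this
    readings: list = field(default_factory=list)

    def record(self, watts: float) -> bool:
        """Store a reading; return True if it breaches the limit."""
        self.readings.append(watts)
        return watts > self.power_limit_w

    def average_power(self) -> float:
        return sum(self.readings) / len(self.readings)

monitor = PduMonitor(power_limit_w=4000.0)
alerts = [monitor.record(w) for w in (3200.0, 3500.0, 4200.0)]
print(alerts)                            # [False, False, True]
print(round(monitor.average_power(), 1))  # 3633.3
```

A production system would push these readings to a management console and correlate them with cooling load, but the core loop (sample, compare against a limit, aggregate) is the same.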

"Our international experience in critical power distribution is based on decades of active development related to products, systems and services around energy efficiency," said Timo Kontturi, ABB's Sales Manager for data center industry in Finland.

ABB's offering for data centers includes motors and drives for heating, ventilation and air conditioning systems, which account for about half of the energy consumed in data centers. In addition, ABB has recently acquired a controlling interest in US-based Validus DC Systems, a leading provider of direct current (DC) power infrastructure equipment for energy-intensive data centers, as well as a stake in Power Assure, a developer of power management and optimization software for data centers.

Few of the tourists and faithful who gather in Uspenski Cathedral in the Finnish capital, Helsinki, are aware that banks of servers are whirring away in an underground cavern just beneath them. Academica, the information technology service company that owns the data center, is particularly proud of this facility, though not because of its unusual location.

"The data center in Katajanokka is the most energy efficient in the world," said Marko Vanninen, the company's chief executive officer.

Using sea water and heat exchangers to cool the servers, Academica's data center uses 80 percent less energy for cooling than centers that rely on traditional methods. The heat that is produced by the servers is fed into the municipal heating network.



Intel new server platforms in 2012


Posted by Daniel J Su, [26, June 2012]


Intel will launch new-generation platforms for use in servers beginning March 2012:

The Xeon Romley-EP platform, made up of Xeon E5-2600 series processors plus C600 chipsets, will launch first, followed by 64-bit IA-64 Itanium processors (Poulson) matched with 7500 series chipsets.

Xeon Romley-EP 4S and Romley-EN platforms are also on the roadmap, according to industry sources.

Intel will launch 15 processors under the Xeon E5-2600 series, including the 8-core E5-2690 priced at US$2,057 and the 6-core E5-2640, as well as the E5-2609 and E5-2603, the last priced at US$202.

In Q2 2012, Intel will launch seven new processors under its Xeon E5-2400 series, priced between US$192 and US$1,440.


ARM & LSI form new strategic relationship to offer multicore processors for energy-efficient cloud networking applications


Posted by Daniel J Su, [17, June 2012]


LSI Corporation announced an expansion of its long-term strategic relationship with ARM, a leader in microprocessor intellectual property (IP). The agreement will lead to new product solutions designed to address critical customer needs for accelerated performance as applications such as mobile video and cloud computing dramatically increase network traffic.

LSI will gain access to:

*The broad family of ARM processors, including the ARM Cortex-A15 processor with virtualization support and future ARM processors

*ARM on-chip interconnect technology, including CoreLink cache coherent interconnect system IP, for use in multicore applications

"Customers need high-performance, power-efficient solutions to help effectively manage the unprecedented growth in network traffic being driven by smartphones, tablets and cloud-based services, without impacting user experience," said Jeff Richardson, EVP and COO, LSI.

"The integration of LSI's leadership in SoC solutions with ARM's strong ecosystem and leadership in power-efficient cores will lead to exciting new multicore-based solutions for our customers."

Power efficiency is a vital design imperative for next-generation networking product designs.

Multicore solutions based on the combination of ARM and LSI's extensive portfolio of IP will enable networking applications to satisfy the ever-increasing bandwidth demands in the most power-efficient manner.

"We are pleased to extend our long-standing relationship with LSI and help them develop next-generation, differentiated solutions based on ARM technology," said Mike Inglis, executive vice president, processor division, ARM.

"The powerful combination of ARM advanced IP and LSI's leadership in SoC design will enable networking and storage OEMs to deliver high-performance, energy-efficient products to their customers."


Quanta builds another new cloud computing R&D center for software engineers



Posted by Daniel J Su, [26, May 2012]


Quanta Computer will build up a new 10-story R&D center opposite its headquarters in Taipei, with construction to kick off in 2012 and take two years, according to industry sources.

The building will mainly house software engineers, the sources indicated, in line with Quanta's long-term business goal of developing cloud computing.

Quanta will recruit 4,000 software engineers; it currently has an R&D staff of 3,000-4,000, most of whom specialize in hardware development and production, the sources noted.

Quanta has been transforming its business by providing solutions for clients, and will therefore hike the proportion of capex invested into R&D, according to CEO Barry Lam.

In developing its cloud computing business, simply providing servers, storage and other hardware is less profitable than a total-solution approach; and since total solutions are mostly customized, it is necessary to develop both the hardware and matching software.

Quanta aims to hike the proportion of its total revenues from non-notebook products from 25% to 30% in 2012, and further to 50% in five years, the sources noted.



Google video business ranking by ComScore


Posted by Daniel J Su, [12, May 2012]



F5 Networks into Data Center

Posted by Daniel J Su, [22, April 2012]


Foxconn builds new cloud R&D buildings in Taiwan

Posted by Daniel J Su, [11 , Jan 2012]

Foxconn plans two buildings that will cost about US$66.4 million and eventually house 3,000 software engineers.

Foxconn Group on December 1 started construction of 5-story and 12-story buildings for cloud computing R&D and technological-innovation incubation at the Kaohsiung Software Park in southern Taiwan, with group chairman and CEO Terry Guo presiding over the groundbreaking ceremony.


Wyse acquires cloud computing infrastructure management player Trellia

Posted by Daniel J Su, [28, Nov 2011]

Cloud client computing solution player Wyse Technology's recent acquisition of cloud computing infrastructure management player Trellia is expected to benefit its major ODM partner Inventec in terms of shipment volume, according to sources from the cloud computing industry.

Wyse has been expanding its operations in the cloud computing industry and has just recently signed a contract with China-based IT system retailer Digital China for cooperation in China's cloud computing market, the sources noted.

The sources pointed out that Wyse and Hewlett-Packard (HP) are the top two players in the cloud computing client market, with a combined share of 50%. Given Wyse's aggressive moves in the industry, research firm IDC has forecast that the virtual client computing market will reach a value of US$3.2 billion by 2014.

As Wyse's thin client shipment volume enjoyed a 52% on-year growth in the first quarter plus its annual shipment volume in 2011 is also expected to see a growth of more than 50%, Inventec's shipments of thin clients are also expected to grow 50% on year to two million units in 2011.

In addition to Wyse, Inventec has also been aggressively striving for thin client orders from HP.




Asus cloud solutions for private cloud computing

Posted by Daniel J Su, [24, Nov 2011]

Asustek Computer, on November 23, launched a new solution, Asus Cloud, for the private cloud computing market, and will also rename its subsidiary Asus WebStorage, which primarily handles its cloud computing business, to Asus Cloud.

As for the current hard drive shortages, Asustek said the issue will not seriously impact its server business and expects supply to return to normal by the first quarter of 2012.


Asus targets the private cloud market

Posted by Daniel J Su, [21, Nov 2011]


Asus has offered cloud computing-based total solutions to compete for orders for private clouds and to promote its servers, according to the company.

Asustek has integrated cloud computing with its Internet-access devices to strengthen marketing of such hardware. Asustek has participated in open-bids for the procurement of servers, notebooks, desktops and all-in-one PCs in China, the company indicated.

In addition, Asustek has been promoting its cloud computing hardware/software total solutions under the brand Asus Private Cloud, with prices significantly lower than those quoted by large international suppliers, the company said.

In addition to cloud computing, Asustek has used its servers and technology to establish ESC4000, the largest GPU supercomputer in Taiwan, through cooperation with the National Center for High-performance Computing under the government-sponsored National Applied Research Laboratories.

Along with the promotion of private cloud solutions, Asustek expects sales of its servers to grow 20% each year.

Promise Storage launches new private cloud solution

Posted by Daniel J Su, [17, Nov 2011]

RAID storage solution supplier Promise Technology has cooperated with Taiwan's Institute for Information Industry to launch on November 16 its SmartApp cloud storage service system.

Although demand for private cloud systems has turned weak in the fourth quarter due to a shortage of hard drives, since demand is simply being postponed, the company expects demand will surge by the end of the first quarter in 2012.

SmartApp is a hybrid cloud system based on Intel processors; it can be equipped with as many as six SATA hard drives to provide 50-200 employees with 50-150GB of storage capacity each.

Since most first-tier network storage system providers are mainly focusing on public cloud solutions, SmartApp will target mainly small-to-medium enterprises, government and education units.


Dell to expand cloud computing business via Taiwan Design Center

Posted by Daniel J Su, [8, Nov 2011]

Dell on November 3 announced additional investment in Taiwan to mainly expand staff at its Taiwan Design Center from 600 members currently to 700 in an attempt to strengthen its ability to provide cloud computing solutions for business users in the Taiwan market.

Currently in Taiwan, Dell's clients for cloud computing solutions are in various industries including semiconductor, electronic components, PCBs, pharmaceuticals and cosmetics, telecom services, banking, insurance and financial services, Dell Taiwan indicated.

The Taiwan Design Center was established in September 2002 to develop notebooks and coordinate production with Taiwan-based makers. The additional investment will focus on offering cloud computing solutions tailored to different clients including servers and storage devices, according to Dell senior vice president for Enterprise Solutions Group, Brad Anderson.

While there is concern that the global economic downturn may shrink enterprise budgets for cloud computing, Anderson pointed out that it will in fact increase adoption of cloud computing solutions as enterprises look to improve efficiency and reduce operating costs.


Kingston co-founder says cloud computing demand is enough to offset declining PC DRAM demand

Posted by Daniel J Su, [25, Oct 2011]

Although demand for DRAM used in PCs has been decreasing, demand for DRAM from cloud computing applications is sufficient to cover the decrease, and server-use DRAM, SSDs (solid-state drives) and eMMC will be the three memory product lines with the largest growth potential in 2012, according to Kingston Technology co-founder John Tu.

Cloud computing applications require large volumes of servers for data centers and workstations, resulting in large demand for server-use DRAM, Tu indicated.

While marketing server-use DRAM to business users, Kingston has also offered a variety of memory products for terminal devices related to cloud computing applications, such as eMMC used in smartphones and SSDs built into tablet PCs, e-book readers and GPS devices, Tu pointed out. For eMMC products in particular, Kingston partners with Taiwan-based IC design house Phison Electronics for control ICs and related technology.

Kingston expects its 2011 shipments of DRAM modules, NAND flash products and SSDs to the China market to increase over 2010 by 50%, 15% and 133% respectively, Tu noted. Of the Kingston DRAM modules shipped to the China market during the first three quarters of 2011, 84% were for use in PCs, 11% in notebook PCs and 5% in servers, Tu indicated.


Wistron to launch a tablet PC that supports cloud computing


Posted by Daniel J Su, [10, Oct 2011]

Wistron is set to launch a tablet PC that will support cloud computing and the company will push the device in the education and medical markets, according to Wistron president Robert Huang.

As for 2012, Wistron estimates it will ship three million tablet PCs.

Because Taiwan lacks software R&D talent, Wistron will build up its software business in China.

Wistron currently has about 150 software engineers in Shanghai, China and 300 in Wuhan, China.

Huang believes Amazon's Kindle Fire will be the strongest competitor to Apple's iPad, not because of the device's price but because of Amazon's strong services.

Wistron will therefore work on strengthening its software services and will launch a B2B tablet PC with the company's complete software services as well as cloud computing support.


AMD aggressive about cloud computing business in China.

Posted by Daniel J Su, [30, September 2011]


Meanwhile, AMD's server processor codenamed Interlagos will have difficulty shipping on schedule and is expected to be delayed to November.


AMD, seeing the cloud computing market starting to take off, has been aggressive about gaining share in the server market and has already cooperated with the government of Beijing, China and Taiwan's cloud computing industry association to develop and design cloud computing-based products and technologies, according to David Tang, senior vice president, AMD, and president of AMD Greater China.

AMD has already signed an MOU with the Beijing government to establish a laboratory for R&D and designing cloud computing technology and will push their work to fill the gap in the industry chain, Tang noted.

Tang pointed out that since China's "125" plan is treating information technology (IT) as a strategic emerging industry with cloud computing being the most important base structure, AMD's cooperation with the Beijing government will assist the integration of the cloud computing industry, while boosting AMD's server platform performance.

As for ultrabooks and tablet PCs, Tang noted that AMD will continue to push new low-power-consumption APU products to satisfy market demand for thin and light products. AMD is currently cooperating with partners such as Acer and Micro-Star International (MSI) for tablet PCs and expects more clients will join soon.

For the support of operating systems, AMD currently will mainly focus on the Windows platform and has no plans to cut into the smartphone market.

AMD, on September 29, also announced it was reducing its third-quarter forecast: it now expects sequential revenue growth of 4-6% for the quarter, down from the 10% originally projected, while gross margin will drop from 47% to 44-45%.

The reduced forecast was mainly due to Globalfoundries' weak 32nm yield rates for manufacturing AMD's Llano APUs, but with the supply of Llano APUs recently starting to recover, motherboard makers are still optimistic about the APU's future performance.


MIT & Quanta team up to develop cloud computing technology

Posted by Daniel J Su, [25,  September 2011]

Massachusetts Institute of Technology (MIT) & Quanta Computer will team up to develop cloud computing technologies for health care applications.

The notebook EMS provider is also in talks with some hospitals in Taiwan about providing cloud computing services through Taiwan-based mobile carriers such as Chunghwa Telecom (CHT) and Far EasTone Telecommunications (FET).

Quanta is providing a full lineup of cloud computing services, from upstream data centers with servers, storage and switches down to end-side electronic devices. The company is adopting a new business model of renting software and equipment to users, which is expected to provide long-term, stable profits.

Wistron is currently developing IT products such as industrial controllers, data collectors and bridges, aiming to bring down the US$48,000 ASP of its cloud computing-enabled hospital bed.

Meanwhile, the cloud computing medical equipment developed jointly by Wistron and Microsoft has already been adopted by some Taiwan hospitals.


Hybrid Cloud Computing Service in review

Posted by Daniel J Su, [22 September 2011]

Public Cloud Computing services are growing fast despite the fact that a lot of people do not fully trust them. 

Just look at the number of Amazon EC2 instances launched in the US East 1 datacenter region: in 2008 the peak reached 20,000 instances, while by the end of 2010 customers were launching up to 140,000 instances, a sevenfold increase.

The vCloud Connector 1.0 release was a virtual appliance that was rather slow and unreliable when moving VMs around. Just a few days ago, VMware announced version 1.5, which appears to be quite a bit faster, more reliable (checkpoint and restart) and agent-based.

Citrix is also on the Hybrid cloud bandwagon with the Netscaler Cloud Bridge.

We asked a few hosting providers how they felt about the VMware version of hybrid cloud and the reactions were mixed. Several people told us that this would make offering a Service Level Agreement quite complex or even impossible. It is after all quite hard to offer a good SLA when your uptime is also dependent on the internet connection between the customer's datacenter and the hosting provider's datacenter. Your thoughts?

And Amazon is not the only dog in town. According to the same measurements, Rackspace Cloud Servers serve up just as many instances per day. Translation for us hardware nuts: many people are renting a virtual server instead of buying a physical one.

But if you are reading this, you are probably working at a company that has already invested quite a bit of money and time in deploying its own infrastructure, and that company is probably paying you for your server expertise. Making use of Infrastructure as a Service (IaaS) is a lot cheaper than buying and administering too many servers just to be able to handle any bursty peak of traffic. But once you run 24/7 services on IaaS, the Amazon prices go up significantly (paying a reservation fee, etc.), and it remains to be seen whether using a public cloud is really cheaper than running your applications in your own dataroom. So combining the best of both worlds seems like a very good idea.
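A back-of-the-envelope cost comparison makes the trade-off concrete. The hourly rates and reservation fee below are hypothetical placeholders, not actual Amazon pricing:

```python
# Hedged sketch of the IaaS cost trade-off: bursty workloads favor on-demand,
# 24/7 workloads favor reserved capacity. All rates are invented examples.
HOURS_PER_YEAR = 24 * 365  # 8,760

def on_demand_cost(rate_per_hour: float, hours: float) -> float:
    return rate_per_hour * hours

def reserved_cost(upfront_fee: float, rate_per_hour: float, hours: float) -> float:
    return upfront_fee + rate_per_hour * hours

# A bursty workload runs 2,000 hours/year; a 24/7 service runs 8,760 hours/year.
bursty = on_demand_cost(0.10, 2_000)
always_on_od = on_demand_cost(0.10, HOURS_PER_YEAR)
always_on_rsv = reserved_cost(300.0, 0.04, HOURS_PER_YEAR)

print(round(bursty, 2), round(always_on_od, 2), round(always_on_rsv, 2))
# 200.0 876.0 650.4
```

Under these assumed numbers the bursty workload is clearly cheapest on demand, while the always-on service saves about a quarter of its cost by reserving, which is exactly why a hybrid of owned and rented capacity is attractive.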

Even back when I visited VMworld Europe in Cannes in 2008, VMware promised us that "hybrid datacenters" or "hybrid clouds" were just around the corner. The hybrid cloud would ideally let you move cloud workloads between your own datacenter and public clouds like Amazon EC2 and the Terremark (now part of Verizon) Enterprise Cloud.

In 2010, the best you could get was a download/upload option.

The excellent concept of the hybrid cloud started to materialize when VMware launched its vCloud Connector back in February 2011. The VMware vCloud Connector is a free plug-in that lets you deploy and transfer virtual machines (VMs) from your own vSphere-based datacenter to a datacenter running vCloud Director.
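The workload placement that a hybrid cloud enables can be sketched as simple burst logic; the threshold and function names below are illustrative assumptions, not part of vCloud Connector's actual interface:

```python
# Illustrative burst-to-public-cloud placement logic for a hybrid cloud.
# Threshold and names are hypothetical, not a VMware API.
def place_workload(private_util: float, burst_threshold: float = 0.8) -> str:
    """Return where to run the next VM given private-cloud utilization (0..1)."""
    if private_util < burst_threshold:
        return "private"   # keep work on-premises while capacity remains
    return "public"        # burst overflow capacity to the public cloud

print([place_workload(u) for u in (0.5, 0.79, 0.8, 0.95)])
# ['private', 'private', 'public', 'public']
```

The hosting providers' SLA concern quoted below fits this model: once workloads cross the threshold and move off-premises, uptime also depends on the link between the two datacenters.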

Google support against Apple will play out in Asia

Posted by Daniel J Su, [18 September 2011]

When Apple sued HTC last year claiming the Taiwan smartphone vendor had violated twenty of its patents, Google noted that it would stand behind its Android OS and its hardware partners. 

With HTC now amending its own suit against Apple to include additional violations of patents it recently obtained from Google, it appears that Google can also walk the walk and is willing to show its loyalty to its partners.

According to media reports, HTC has added nine more patents to the list of patents it claims Apple has violated, with the additional patents acquired from Google only last week. According to Reuters, HTC has also filed an additional suit against Apple alleging patent violations in iOS devices and Mac computers.

Commenting on the news, Digitimes Research analyst James Wang noted that this was not simply an action in a proxy war against Apple but that a show of loyalty was perhaps needed. Wang explained that after its recent purchase of Motorola, Google needed to show its partners could trust the company. In addition, with HTC launching the HTC Salsa and HTC Chacha with Google's rival, Facebook, Google was able to make a show of good faith by helping on the patent claim and perhaps bring the second largest Android smartphone maker back to the Google fold.

However, it was also about showing its loyal partners that Google has their back against Apple and that is something that vendors in China will be watching very closely as they move forward. In the China market, companies including Baidu and Alibaba have announced their own mobile operating systems, called Yi and Aliyun, respectively, that are based on Android but that do not have Google standing behind them in full support.

Baidu is especially likely to be utilized by vendors looking for a free open OS strategy in China, but worries remain that vendors will be under attack for patent infringement with Apple and Microsoft, Wang pointed out. Google is showing its Android licensees that they will be protected by Google if they stick to the official version of Android, but perhaps not by Baidu and Alibaba if they adopt their respective platforms.

AMD Plans to build $100 Million Data Center in Atlanta, GA


Posted by Daniel J Su, [22 August 2011]


CPU and GPU maker Advanced Micro Devices (AMD) is planning a $100 million data center project in Suwanee, Georgia, near Atlanta. The Sunnyvale, Calif.-based company cited affordable power and real estate as key factors in its decision.


“We will be consolidating the capacity from some (of) our smaller data centers in North America into a bigger state-of-the-art facility, which allows us to take advantage of the latest technology and achieve efficiencies of scale,”


AMD spokeswoman Pushpita Prasad told the Atlanta Business Chronicle. Suwanee is already home to at least two major data centers, including a 350,000-square-foot facility operated by QTS (Quality Technology Services) and a major data center hub for HP.


Future of Cloud Computing


Posted by Daniel J Su, [17 August 2011]


Thanks to cloud computing, the ranks of the most dominant names in IT may look very different 10 years from now.



We highlighted my thoughts on why the long-term future of the cloud, at least for consumers and small businesses, belongs to integrated "one-stop" cloud suites, and why Microsoft and Google are the two companies best positioned for this opportunity. However, I was also clear that success in this space is far from guaranteed for either company.

Several potential competitors could overtake either company, or both, if they fail to execute successfully. Here are some of my favorites, and a few that should be contenders but really aren't at this point:




First, it should be noted that VMware is not trying to be a cloud provider itself but rather an arms dealer to those that are.


The question is, will it help these providers be "one-stop shops," if they so choose?


There is no doubt that VMware has made significant progress in moving up the stack from virtual server infrastructure to cloud computing and even to development platforms.


The aforementioned PaaS offering with Salesforce.com, VMforce, is one example, but there have been many announcements around cloud application development and operations capabilities--and all signs point to there being much more to come.


The reason VMware is mentioned here, however, is its acquisition early in 2010 of Zimbra, the open-source online e-mail platform.


To me, that was a sign that VMware was looking at building a complete suite of cloud services, including IaaS, PaaS, and SaaS capabilities.


However, as far as I can tell, the SaaS-related investments have either gone underground or dried up completely.


Giving VMware the benefit of the doubt, I'll assume that it is still working its way up to the SaaS applications necessary to supply one-stop cloud services.


With the capabilities it has been working on over the last few years, being a contender in this space is not out of the question--or perhaps its ecosystem of partners will do it for the company.


However, it just doesn't have enough SaaS to be one today.


Amazon Web Services (AWS)

Amazon Web Services is, and will likely remain, the flagship cloud infrastructure company. It is also underrated as a PaaS offering, in that most people don't understand how much its services are geared toward the developer.

However, it is completely focused on selling basic services to enable people to develop, deploy, and operate applications at scale on its infrastructure. It does not appear to be interested today in adding SaaS services to serve small businesses.

That said, there are two things that may make AWS a major part of the one-stop ecosystem. The first is, if a start-up or existing SaaS provider chooses to build and operate its one-stop suite on top of AWS services. That is actually a very likely scenario.

The other would be if CEO Jeff Bezos sees an integrated suite of small-business applications as a perfect offering for the Amazon retail business. This would probably be the resale of other companies' software, but it would make AWS a one-stop shop worth paying attention to.




Oracle

"Not with Larry Ellison at the helm," some of you are probably thinking. However, you have to admit that when it comes to business software suites, Oracle certainly has the ammunition.


It has dabbled in SaaS already, and with the Sun acquisition, it has rounded out its possible offerings with Java, among other technologies.


Oracle's biggest problem is its business model, as well as its love of license and maintenance revenue. If it can figure out how to generate revenue from SaaS that meets or exceeds its existing model, I think it'll move quickly to establish itself as a major player in the space and will quickly rise toward the top of the heap. In fact, the recent announcement of Oracle Cloud Office might be a sign that it has already started.


Today Oracle does not seem to be focused on being a cloud provider.


It killed Sun's IaaS offering--which always confused me--and has only introduced PaaS for private-cloud deployments. To date, it seems that it can't pull itself away from equating an enterprise sale with an on-premise, up-front licensing sale.


I am watching Oracle's moves in the cloud with great interest, for that reason.


If I am right--if one-stop cloud services are what many small and midsize businesses turn to in order to avoid building an IT data center infrastructure of their own, and if integration is the key differentiator for cloud services across SaaS, PaaS, and IaaS--then I'm pretty comfortable with the observations I've made in this series.


If I am wrong, then integration of disparate cloud services will be a huge market opportunity. What it will come down to is what is easier to consume by small and medium-size businesses. For that reason, I'll place my bet on the one-stop model. What about you?

Salesforce.com

With a series of moves in the last year or two, Salesforce.com has moved from simply being a customer relationship management software-as-a-service vendor to being a true cloud platform contender. With the broadening of its platform-as-a-service environment to add Java (via VMforce, a partnership with VMware) and Ruby (via the acquisition of Heroku), Salesforce has made itself a very interesting option for developers.


However, given its SaaS roots, I'm convinced that CEO Marc Benioff has more up his sleeve than that.


Already, Salesforce has built up an impressive ecosystem with its App Exchange marketplace, but the real sign that it intends to be a broad business platform is the investment in Chatter, its enterprise social network and collaboration tool.


With a few more acquisitions and/or product offerings to expand its business applications suite (perhaps adding e-mail, a productivity suite, or even accounting applications), Salesforce will begin to look like a true "one-stop" leader. Frankly, with respect to the company, I'm already on the fence about whether the top two should become the top three.



IBM and HP

IBM and Hewlett-Packard are companies that can integrate a wide variety of infrastructure, platform, and professional-services products, so you can never count them out of a major IT market opportunity.


However, they seem to have shifted away from business software suites, with a focus more on IT operations and data management/analytics.


While IBM and HP will no doubt be players in enterprise IaaS and PaaS, I don't see them making the investments in building, acquiring, or partnering for the basic IT software services required to meet the one-stop vision. Again, perhaps their ecosystems get them there, but they are not promoting that vision themselves.


Hosting-turned-cloud companies

Companies such as Rackspace, Terremark, and other hosting companies that have embraced the cloud for IaaS services are important players in the overall cloud model, but I don't believe that they are ready to contend, when it comes to integrated cloud suites, at this time. Their focus right now is on how to generate as much revenue as possible per square foot of data center space, and their skill sets fit IaaS perfectly. However, if VMware or another cloud infrastructure software provider builds a suite of services that they can simply deploy and operate, that may change quickly. They just are not contenders based on today's business models.


Telecom companies and cable operators

One possible industry segment that may surprise us with respect to one-stop cloud services would be companies such as Comcast, AT&T, and Verizon Communications--the major telecommunications and cable providers.


They own the connectivity to the data center, the campus, mobile devices, and so on, and they have data center infrastructures perfect for a heavily distributed market like the small-business market (where each small business may be local, but the market itself exists in every town and city).


The problem is the same as it has been for decades: business models and regulatory requirements of these companies make it difficult for them to address software services effectively.


These companies have traditionally been late to new software market opportunities (with the possible exception of the mobile market). You don't see AT&T, for instance, competing with others in bidding for a platform-as-a-service opportunity. So until they show signs of understanding how to monetize business applications, they are not in the running.

In this, the first of two posts exploring the companies that can best exploit the cloud model, I'll identify those two companies and explain why they best fit the needs of a large percentage of IT service customers. Then, in the second part of this series, I will explore several companies that will challenge those two leaders, possibly taking a leadership spot for themselves.

But before I get into who these leaders are, I have to explain why success in cloud computing will be different in 10 years than it is today.


Why tomorrow's biggest opportunities don't represent today's clouds

Think about what cloud computing promises. Imagine being a company that relies on technology to deliver its business capability, but does not sell computing technology or services itself. Picture being able to deliver a complete IT capability to support your business, whatever it is, without needing to own a data center--or at least any more data center than absolutely necessary.


Imagine there being a widely available ecosystem to support that model. Every general purpose IT system (such as printing, file sharing or e-mail) has a wide variety of competing services to choose from. Common business applications, such as accounting/finance, collaboration/communications and human resources, have several options to choose from. Even industry specific systems, such as electronic health records exchanges and law enforcement data warehouses, have one or more mature options to choose from.


Need to write code to differentiate your information systems? There will be several options to choose from there, as well. Most new applications will be developed on platform as a service options, I believe, with vendors meeting a wide variety of potential markets, from Web applications to "big data" and business intelligence to business process automation. However, if you want (or need) to innovate the entire software stack, infrastructure services will also be readily available.

With such a rich environment to choose from, what becomes your biggest problem? I would argue that's an easy question to answer: integration. Your biggest problem by far is integrating all of those IT systems into a cohesive whole.


In fact, we see that today. Most cloud projects, even incredibly successful ones like Netflix's move to Amazon Web Services, focus efforts within one cloud provider or cloud ecosystem, and usually include applications and services that were developed to work together from the ground up. While there have been attempts to move and integrate disparate IT systems across multiple clouds, none of them stand out as big successes today.


While some may argue that's a sign of the nascent nature of the cloud, I would argue that it's also a sign that integrating systems across cloud services is just plain hard.


Why the most revenue will be driven by integrated services

Now imagine you are founding a small business like a consultancy or a new retail store. You need IT, you need it to "just work" with minimal effort and/or expertise, and you need it to be cost effective. What are you going to be looking for from "the cloud?"

There, again, I would argue the answer is easy: start-ups and small businesses will be seeking integrated services, either from one vendor, or a highly integrated vendor ecosystem. The ideal would be to sign up for one online account that provided pre-integrated financials, collaboration, communications, customer relationship management, human resources management, and so on.


In other words, "keep it simple, stupid." The cloud will someday deliver this for new businesses. But there are very few companies out there today that can achieve broad IT systems integration. I would argue the two most capable are Microsoft and Google.


"What?!?," you might be saying. "Both of those companies have been tagged as fading dinosaurs by the technorati in the last year. Why would anyone want to lock themselves into one vendor for IT services when the cloud offers such a broad marketplace--especially those two?"


To answer that, we need to look a little more closely at each vendor's current offerings, and stated vision.


Microsoft: it's all about the portfolio with Office 365 

Microsoft stands out for its breadth of offerings. While its infrastructure-as-a-service and platform-as-a-service offerings (both part of Azure) are central to its business model, it's the applications that will ultimately win it great market share.


Already, offerings such as Office 365 provide cloud-based versions of key collaboration and communications capabilities for a variety of business markets.


However, Microsoft CEO Steve Ballmer has also made it clear that every Microsoft product group is looking at how to either deliver its products in the cloud or leverage the cloud to increase the utility of its products.


As every product group within Microsoft pushes to "cloudify" its offerings, I am betting similar effort will be put into making sure the entire portfolio is integrated.

Combine the Dynamics portfolio with SharePoint and Lync and add "Oslo"-based tools to integrate across system or organizational boundaries, and you've got a heck of an IT platform to get started with. Add in Azure, and you have the development platform services to allow you to customize, extend, or innovate beyond the base capabilities of Microsoft's services.



Google: Bringing consumer successes to business

What impresses me most about Google's move toward the cloud has been its pure focus on the application. Google doesn't put forth offerings targeted at providing raw infrastructure. Even Google App Engine, one of the poster children of the platform-as-a-service model, is built to make a certain class of applications--perhaps not surprisingly, Web applications--as easy to develop as possible. Most of the integration of the underlying platform elements has been done for the developer already.


However, it's when you look at its consumer application portfolio, and how it's modifying those applications for business, that you can see its real strength. Google takes chances on new Web applications all the time, and those that succeed--either by building a large user base or by actually generating revenue--draw additional investment aimed at increasing the application's appeal to a broader marketplace. Google Mail is the most mature of these options, but Google Apps is not far behind.

What appears to be happening now, however, is a concerted effort by Google to build an ecosystem around its core application offerings.

The Google Apps Marketplace is a great example of the company trying to build a suite of applications that integrate with or extend its base Google Apps and mail offerings.

Add the company's nascent suite of communications and collaboration tools, such as Google Voice and Buzz, and signs of integration among all of its offerings, and you can see the basis of a new form of IT platform that will especially appeal to small businesses and ad hoc work efforts.


Why there are no guarantees in cloud computing

Both Microsoft and Google have the basic tools and expertise to deliver on the one-stop shop IT services model, and both have proven to me that they have the desire as well.


However, neither company is a shoo-in for success in this space.


There are two reasons for this, the most important of which is that neither company has what I would call a spotless execution record. In fact, both have struggled mightily to impose change on their core business models.


Both companies will have to align their various efforts to see this vision through, even as it disrupts current markets.


Each has plenty of applications that show great promise, but both are also a long way away from proving they can deliver on a one-stop shop vision.

The other reason is that there are a variety of worthy competitors vying for the "one-stop" throne. You may have been asking by now about Amazon, Salesforce.com, VMware, the hosting companies, or the telecoms.


In the second post of the series, I'll outline my favorites to displace the two leaders, including one that may surprise you.


In the meantime, I think cloud services targeting developers will still get most of the press for the next several years.


Achieving an integrated IT platform that serves multiple business markets is extremely difficult, and will take a true commitment and concerted effort by the company or companies that ultimately achieve that vision.



Cloud software-as-a-service (SaaS) and independent software vendors (ISVs)

Posted by Daniel J Su, [2 March 2011]

Moving to a managed cloud model for outsourced software-as-a-service delivery makes a lot of sense for independent software vendors (ISVs). However, I have seen far too many so-called “rookie mistakes” that could have been easily avoided had the ISV known what to look for in a service provider and what types of questions to ask.

Factors that cannot be overlooked in a service provider include the ability to not only deliver savings, security and breadth of choice, but also the capability to enable integration between technologies, applications and infrastructure on a global scale, both in the cloud and with legacy systems. Security, privacy and performance also cannot be overlooked.

Build Cloud Solutions vs. Buy Cloud Solutions


When you’re talking cloud, the first decision to make often involves whether to outsource managed cloud or build it yourself. Remember that by building your own cloud, your team will be responsible for administering and managing security and firewalls as well as other staffing and expertise. Also, it takes a long time to build your own cloud, so you’ll likely be slower to market and your technology may become outdated.

And don’t forget the technological expertise needed to swap out your legacy, dedicated systems with more versatile, cost-effective virtualization solutions. Operationally, your organization will need to implement a new set of policies and procedures to administer and govern the automated systems and quickly respond to end-user support issues.

What’s more, you’ll have additional personnel to hire and issues to address. The combination of technological and operational requirements requires staff with a new set of skills, as well as a different idea about the role of the IT department within the larger organization. In essence, the IT department must become a highly efficient internal service provider.

Are these issues ones you can tackle internally? It’s a lot to think about.

Cloud Security


When it comes to cloud, most of the questions I receive from ISVs are around security. “Will the cloud be secure?” “What do I need to do to protect my application?”

In short, cloud can be as safe as any other form of IT infrastructure. In other words, it’s only as safe as the security measures you have in place.

The technologies behind best-in-class security are both expensive and constantly changing. If you think you’re going to keep your equipment and software current, you’re likely going to quickly burn through your IT budget.

Outsourcing can help you protect your business more effectively. Ask potential service providers whether they can filter out threats at the network level – it’s a much more powerful method of protecting your IT infrastructure than doing it on site.
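To make the network-level point concrete, here is a hypothetical sketch of edge filtering: traffic from known-bad sources is dropped at the provider's network edge before it ever reaches your servers. The blocklist and addresses are illustrative (they use reserved documentation ranges), not any real provider's API.

```python
import ipaddress

# Hypothetical blocklist maintained at the provider's network edge.
# 203.0.113.0/24 is a reserved documentation range, used here as a stand-in.
BLOCKED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]

def allowed_at_edge(source_ip: str) -> bool:
    """Return True if traffic from source_ip passes the edge filter."""
    addr = ipaddress.ip_address(source_ip)
    return not any(addr in net for net in BLOCKED_NETWORKS)

print(allowed_at_edge("198.51.100.7"))   # True: forwarded on to your servers
print(allowed_at_edge("203.0.113.9"))    # False: dropped before reaching you
```

The design point is that the filtering happens upstream of your infrastructure, so hostile traffic never consumes your bandwidth or CPU--something an on-site firewall cannot offer.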

Ask service providers how they will minimize your exposure to common threats and identify and assess your system and application vulnerabilities. Do they offer 24/7 monitoring, management and response? They should.

Service-Levels of Cloud


ISVs have a wide range of requirements, and single service-level clouds may not fit all applications.

If you’re an ISV, you either offer a standard service level to customers or you have varying service levels based on software tiers and other factors. Be sure to review your cloud provider’s capabilities carefully. This may sound elementary, but it’s worth pointing out and remembering: You cannot offer more than a provider is capable of providing.

Ask for your provider’s maintenance windows. A good practice is to align your own maintenance windows with your service provider’s.

Another area to explore is your services provider’s emergency changes. Things do go wrong from time to time, and how your service provider responds to those issues will affect your SLAs to your customers.

Lastly, how redundant is your cloud environment? It doesn’t start and stop at the hardware, network and storage layers but also continues into the facilities (i.e., power, battery backup, redundant and varied paths for network into the building). There is nothing wrong with asking for a data center tour.



Hybrid Cloud

ISVs often want to link between private and public clouds, or they may want to use the cloud by tapping into their legacy IT environment to get to market faster.

The availability of hybrid cloud solutions – which allow you to tie private and public clouds to each other and to legacy IT systems – is important to solve IT issues related to temporary capacity needs (i.e., bursting) and to address periodic, seasonal or unpredicted spikes in demand.

Check with potential vendors to see if their assets work together to fully embrace the cloud model and deliver a combination of colocation, managed services and network that best suits your immediate and future needs.
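The bursting idea above can be sketched in a few lines: fill private capacity first, and overflow only the excess demand into the public cloud. The function name, units (VM counts), and numbers are illustrative assumptions, not any vendor's interface.

```python
# Hypothetical sketch of a hybrid-cloud "bursting" decision:
# private capacity is consumed first, and only the overflow from a
# demand spike runs in the public cloud.

def place_workload(demand_vms, private_capacity_vms, private_in_use_vms):
    """Return (private_vms, public_vms) for a new batch of demand."""
    private_free = max(private_capacity_vms - private_in_use_vms, 0)
    private_vms = min(demand_vms, private_free)   # fill private capacity first
    public_vms = demand_vms - private_vms          # overflow bursts to public
    return private_vms, public_vms

# A seasonal spike of 120 VMs against 100 VMs of private capacity,
# 60 of which are already in use: 40 stay private, 80 burst out.
print(place_workload(120, 100, 60))
```

When demand falls back below private capacity, the same logic places everything privately, which is why hybrid setups suit periodic or unpredicted spikes.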

Cloud Pricing


Have you ever tried comparing costs of clouds by vendor? It’s not easy to do, that’s for sure. For the most part, clouds are priced differently. To get the full picture, you need to contrast solution pricing versus individual element pricing.

If you’re doing research, here are a few questions to ask:

  • Does the cloud cost include data center services such as IP addresses, load balancers, VPN and monitoring?
  • Are there any hidden fees involved with storage?
  • Are backup services included or is there an additional charge?
  • What levels of security does the cloud include and what options are there for enhanced protection?
  • What are the network connectivity options and related costs?
  • What type of support is offered? For example, is 24/7 phone support included?

Cloud enables ISVs to implement their offerings in any market in record time. However, true cloud computing for ISVs needs to go beyond just an array of flexible storage and processing capacity. Be sure to conduct research, ask questions and find a solution that works for your needs.
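The pricing questions above boil down to one habit: roll every vendor's line items, including the easily-overlooked add-ons, into a single monthly total before comparing. The vendors, line items, and prices below are entirely made up to illustrate the roll-up.

```python
# Illustrative only: the line items and dollar amounts are invented
# to show why element-by-element quotes must be totaled before
# comparing them against a bundled "solution" price.

def total_monthly_cost(line_items):
    """Sum a vendor's quote, add-ons included."""
    return sum(line_items.values())

vendor_a = {"compute": 400.0, "storage": 120.0, "backup": 80.0,
            "load_balancer": 50.0, "support_24x7": 150.0}
vendor_b = {"all_inclusive_solution": 750.0}   # bundled pricing

for name, quote in (("A", vendor_a), ("B", vendor_b)):
    print(f"Vendor {name}: ${total_monthly_cost(quote):.2f}/month")
```

In this made-up example the bundled vendor is cheaper once backup, load balancing, and support are counted, which is exactly the comparison that element pricing obscures.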

Citrix Unveils NetScaler Cloud Bridge, Bringing Infinite Capacity to Enterprise Datacenters

Posted by Daniel J Su, [25 May 2011] 

At Citrix Synergy™, where virtual computing takes center stage, Citrix announced NetScaler® Cloud Bridge, a solution that transparently connects enterprise datacenters to any off-premise cloud,

making the cloud a seamless extension of the enterprise network. As the industry transitions from the “PC Era” to the “Cloud Era,” companies want to leverage low-cost cloud computing, but concerns about cloud lock-in, compliance and data security limit adoption.

Cloud Bridge addresses these fears by delivering a transparent, secure and optimized tunnel between on-premise enterprise datacenters and off-premise cloud datacenters, enabling IT to transparently shift web and application servers to multiple clouds while keeping data and other sensitive information safely within the enterprise datacenter.

Today, security and compliance concerns have largely limited enterprise use of cloud computing to non-mission critical workloads like development and test projects. Organizations are more comfortable maintaining the traditional level of control they are familiar with when critical data is kept onsite in the enterprise datacenter.

Cloud Bridge is the first offering to bring together into a single solution all the L2-7 traffic management, security and network acceleration functionality needed to integrate cloud networks with the enterprise. By making cloud provider networks look like a natural extension of the enterprise datacenter network, Cloud Bridge accelerates the rate at which enterprise can safely and securely use the cloud for production workloads. Cloud Bridge reduces the latency and optimizes the bandwidth of the enterprise-to-cloud network connection, while also encrypting this connection so that all data in transit is secure. The end result is that IT can keep sensitive data in their datacenters while moving compute to the cloud, and a consistent and seamless experience for the people actually accessing the applications is maintained.

What's New

Cloud Bridge combines the industry leading core L4-7 traffic management capabilities of NetScaler with the four key network services that make the cloud appear as a native extension of the enterprise datacenter:

  • Seamless Network: L2 network bridging makes the cloud network a natural extension of the enterprise’s existing L2 network, making it easy to shift resources to the cloud without having to re-architect the application.
  • Secured Tunnel: Encryption capabilities native to NetScaler ensure that data remains secure as it traverses the network links between the enterprise and the cloud.
  • Optimized Access: Network acceleration capabilities alleviate latency and speed issues by intelligently transferring data between the enterprise datacenter and the cloud datacenter.
  • User Transparency: Global server load balancing reduces server appliance costs by intelligently choosing either the enterprise datacenter or the cloud datacenter to host the application, and making that process seamless to people accessing the applications.
  • App Flexibility: Cloud Bridge also makes it easy to keep sensitive app components – like directories and data – safely inside the datacenter, while moving the compute parts of the app out into the cloud.
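The "User Transparency" bullet describes a global-server-load-balancing decision: send each request to the enterprise datacenter until it nears capacity, then overflow to the cloud site. Here is a minimal sketch of that idea; the function, threshold, and site names are illustrative assumptions, not Citrix APIs.

```python
# Hypothetical sketch of a GSLB-style site choice: prefer the
# enterprise datacenter while its utilization is comfortable, and
# overflow to the cloud datacenter as it approaches capacity.

def choose_datacenter(enterprise_load, enterprise_capacity, threshold=0.8):
    """Pick the site for the next request based on enterprise utilization."""
    if enterprise_load / enterprise_capacity < threshold:
        return "enterprise"
    return "cloud"

print(choose_datacenter(50, 100))   # utilization 0.5: stay on-premise
print(choose_datacenter(90, 100))   # utilization 0.9: overflow to the cloud
```

A real GSLB appliance would also weigh health checks, latency, and client geography; the point here is only that the routing decision, not the user, absorbs the complexity of running in two places.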

Gigabyte X58A-OC: world's first overclocking motherboard

Posted by Daniel J Su, [4 March 2011]

Gigabyte Technology has announced the launch of the world's first motherboard designed from the ground up for extreme overclockers, the GA-X58A-OC. Its overclocking specific performance design defines a whole new category of enthusiast focused motherboards that introduces never-seen-before tweaking and insulating features.

Based on the Intel X58 chipset (Tylersburg) and supporting LGA 1366 socket CPUs such as Intel's new top of the range Core i7 990X Extreme Edition CPU, the X58A-OC takes advantage of all the performance benefits that the X58 platform has to offer including triple channel DDR3 memory support, 6 core 12 thread CPU support, and enough PCIe Gen 2.0 bandwidth to support 4-way CrossFireX and 3-way SLI for the best graphics performance in the world.

Gigabyte has listened closely to the overclocking community to make sure the X58A-OC has all of the overclocking features enthusiasts have been asking for, without some of the features that are unnecessary while overclocking, or can negatively impact performance - similar to a stripped down sports car where the aircon, radio, passenger seats, etc have been removed to reduce weight. Layout was a critical aspect of the design, not only in choice of components used, but also spacing of the components so that insulation of the board is as easy as possible, while still maintaining the most efficient data pathways between the various components. Gigabyte is also introducing five new main overclocking features on the X58A-OC that help to push the performance envelope of the X58A-OC even further including OC-VRM, OC-Touch, OC-PEG, OC-Cool and OC-DualBIOS.


Cloud Computing for Consumers at Storage Visions 2011

Posted by Daniel J Su, [9 January 2011]

This article continues our discussion of cloud computing at this year's Storage Visions conference, focusing on the consumer market. Consumers can benefit from cloud technologies to access entertainment, share documents, back up their data, and play sophisticated video games, all with relatively low processor requirements on their local systems. While these nascent technologies will ultimately be widely used across the consumer market, many are still in their early stages of development. One of the most compelling solutions available to consumers right now is the SugarSync "personal cloud" backup solution. SugarSync had a booth at Storage Visions, and Drew Garcia, Vice President of Product Management, participated in the panel "Saving, Sharing and Protecting Family Content" led by Liz Connor of IDC. SugarSync's solution allows a user to store and access their data across PCs, Macs, tablets, netbooks, smartphones, and more. They have apps available for iPhone/iPad, BlackBerry, Android, and even Symbian. It's a true cross-platform technology for access to documents and media. They are currently offering 5GB of free storage via their website.


Enterprise Cloud Computing Solutions at Storage Visions 2011


Posted by Daniel J Su, [7 January 2011]




Cloud computing was a hot topic of discussion at this year's Storage Visions conference. Solutions were discussed for public and private clouds in both the enterprise and consumer markets. This article focuses on the enterprise solutions. Mike Alvarado from the Product and Business Development Company led a panel entitled "Opportunities and Challenges for Consumer and Enterprise Cloud Storage" that touched on both of these markets. The main focus of the panel was how to deal with the ever-increasing storage needs for data stored in public and private clouds while maintaining security.

Chad Thibodeau of Cleversafe raised the issue that "RAID is failing on the Petabyte" scale and posed a new solution to the problem of data loss that does not require the expensive method of simple data redundancy. By using a method called Information Dispersal Algorithms (IDA), slices of data are distributed to separate storage devices across a cloud. This results in reduced power needs and increased reliability.

Ingo Fuchs from NetApp discussed methods already available to manage a "Global Content Repository" for studios, post houses, and broadcasters to access content anywhere at any time. Tracey Doyle of Hitachi Data Systems explained their "Storage 3.0" strategy, which allows for pay-per-use consumption of cloud storage for the enterprise. This allows a company to scale up its usage at minimal initial outlay and risk. Chris Hamlin from BlackRidge Technology explained that most of the security concerns with storing data in a networked environment can be alleviated using his company's "Transport Access Method" of first-packet authentication. This drastically improves security at a reasonable cost.
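The dispersal idea can be illustrated with a toy scheme: split the data into slices plus one XOR parity slice, so any single lost slice can be rebuilt from the survivors without keeping a full second copy. This is only a sketch of the principle; Cleversafe's actual IDA is far more general and tolerates multiple simultaneous losses.

```python
# Toy information dispersal: k data slices plus one XOR parity slice.
# Losing any one slice wastes no sleep: the survivors XOR back to it.
from functools import reduce

def _xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def disperse(data: bytes, k: int):
    """Split data into k equal slices plus an XOR parity slice."""
    if len(data) % k:
        data += b"\x00" * (k - len(data) % k)   # pad to a slice boundary
    size = len(data) // k
    slices = [data[i * size:(i + 1) * size] for i in range(k)]
    return slices, reduce(_xor, slices)

def recover(surviving_slices, parity):
    """Rebuild the single missing slice from the survivors plus parity."""
    return reduce(_xor, surviving_slices + [parity])

slices, parity = disperse(b"PETABYTE", 4)        # four 2-byte slices
rebuilt = recover(slices[:2] + slices[3:], parity)  # pretend slice 2 was lost
print(rebuilt == slices[2])                       # True: slice recovered
```

Compared with mirroring, the storage overhead here is one slice in k+1 rather than a full copy, which is the power and cost argument made on the panel.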