


Shiny New Thing: Microsoft Surface Pro 3 256GB

I am in possession of a new Microsoft Surface Pro 3 for a short review period.

Prior to the announcement of Surface Pro 3, I had obtained a Surface Pro 2 512GB for review here as a counterpoint to the HP EliteBook Folio 1040 G1. However, I returned that device when the Pro 3 was announced, for obvious reasons.

I had thought we (me and Logikworx EVP Rod K.) wouldn’t be getting Pro 3 devices until the Core i7-based tablets ship, but the confluence of two factors made me get this unit: 1) Wifey was out of the country, and 2) I just had to try it.

I will be passing this unit off to Rod for his own tests when I’m done with it.

This unit is a mid-spec device, sporting the following:

  • Intel Core i5
  • 8GB RAM
  • 256 GB storage
  • A red Surface Pro 3 Keyboard.

Having to pay extra for a keyboard on a device Microsoft is touting as a MacBook replacement is beyond annoying: it is rather stupid, and small-minded. Microsoft should make only apples-to-apples comparisons.

The thing is, I have been using the aforementioned HP EliteBook, and to great success, I might add. Moreover, once Microsoft starts shipping the 512 GB Core i7 devices, I will be getting one for myself.

Will Surface Pro 3 be good enough to wean me off the EliteBook?

We have to wait and see. As of today though, Surface Pro 3 finds itself playing second-fiddle to the EliteBook Folio 1040 G1.

Meanwhile, let’s do this!

© 2002 – 2014, John Obeto for Blackground Media Unlimited



According to VMware, OS X is coming, and will conquer the desktop.

I have to ask, just how many VMware certified professionals are there on Planet Earth?

40,000? 50,000? 100,000?

Actually, to narrow it down, how many VMware professionals can you expect to find at any decent VMUG gathering?

400? 500?

Why, then, would VMware – the company – commission a survey of 376 random vPeople in order to come to the quite insane conclusion that Macs are taking over?

I am not a statistician, but I can play one here, and declare, without fear of contradiction, that this is definitely NOT a representative sample.

Not of VMware professionals.

Not of the stakeholder end users.

And most definitely, it is NOT a representative sample of the most important constituency involved in the decision making: the owners or corporate leaders who would make the actual buying decision.
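For what it’s worth, the sample-size arithmetic is easy to check. Here is a quick sketch in Python (the standard normal-approximation formula for a proportion; `margin_of_error` is my own illustrative helper). Note that even this ±5% figure assumes the 376 respondents were drawn at random from the population being described – which is precisely what is in dispute here:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a proportion at 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# 376 respondents: roughly +/-5 percentage points -- and that is only
# valid if they were randomly sampled from the claimed population.
print(round(margin_of_error(376) * 100, 1))  # ~5.1
```

Sampling error shrinks with the square root of the sample size, so a bigger survey helps only modestly; a biased selection pool, however, invalidates the number entirely.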

Making matters even more absurd, why would VMware, the afore-mentioned company, not its eponymous product, have a marketing shill ‘report’ the findings as something sentient IT managers should give a flying funk about?

Seriously, why?

Are you telling me that they couldn’t conduct a survey of those same VMware professionals while they – the professionals – are on their eternal hamster-wheel-like certification courses, and get a more representative and infinitely more knowledgeable and relevant sample?

Let’s put aside the easiest answer: hubris, because they – VMware – are the current Kings of the [hypervisor] family.


It then leaves us with the next, and in this case, the only obvious answer: fear.

Yes, people: fear.

Whenever a dominant category-definer and leader is suddenly faced with formidable competition for the first time, their instinctive initial reaction is to sow seeds of fear, uncertainty, and doubt, aka FUD.

This inanity smacks of a survey conducted to validate a previously decided upon conclusion.

That said, I am saddened that VMware, a company I hold in high regard, would resort to something as infra dig as this silly survey.

Scarily, this, and the smack talk I hear from several VMware professionals, reminds me of the same kind of hubristic banter Novell Netware ‘professionals’ used to console themselves with back in the day.

I mean, we all knew that Netware was invincible, and would win out, right?

In case you’re asking, OS X market share is currently about 5%. Which is where it has been since, well, forever.

© 2002 – 2014, John Obeto for Blackground Media Unlimited


The SmallBizWindows HP Proliant ML310e Gen8 v2 Review

The HP Proliant ML310e represents the absolute best value you can get in an entry-level server.

For the past 8 months, I have been in possession of an HP Proliant ML310e Gen8 v2 server.

This Proliant model replaced the venerable Proliant ML110 as the entry-level server offering in HP’s tower server inventory.

We exclusively use HP Proliant servers: in test, in production internally, and for the client firms that require them. We had always found the entry-level ML110 to be very capable, reliable, and extremely worthy of the faith we put into it. And, of course, of our dollars.

The Proliant ML310e is HP’s entry-level tower server.

I know that seems like a misnomer, since the HP Proliant MicroServer is available at the very bottom end. However, the Proliant MicroServer is not a true tower unit. This is.

Test unit specs
The Proliant ML310e as delivered came with the following specs:

  • Intel Xeon E3-1220 v3 CPU
  • 16 GB RAM
  • (2) 1TB hard drives
  • Dual NICs
  • HP SmartArray controller with a non-hot-plug 4-drive cage.

I upgraded the RAM on the ML310e to 32GB, and added two more internal 1 TB hard drives to bring the total to 4TB of local storage.

As with other HP Proliant servers, this model is solidly built, exposed wires kept to a minimum, and the interior accessible to users.

The Tests

I created two test environments for this server:

a) As a member server in the ongoing test environment at The Orbiting O’Odua.
b) As a departmental file and EMR server for an outpatient clinic in rural Colorado.

Other participating servers for this review
The following units participated in the review:

i) HP Proliant ML350 G7; two units: one at The Orbiting O’Odua, the other at MedikLabs.
ii) HP Proliant ML110 G6; three units, one at the O’Odua and two at MedikLabs.
iii) HP Proliant DL380 G7; the Orbiting O’Odua.
iv) HP Proliant DL385 G7; the Orbiting O’Odua.
v) HP Proliant MicroServer; MedikLabs.
vi) HP Proliant MicroServer Gen8; The Orbiting O’Odua

For comparison, we also used the following Dell servers as baselines:

vii) Dell PowerEdge T410
viii) Dell PowerEdge T110

We used an HP PS1810-8G managed switch and a Cisco Catalyst switch at the Orbiting O’Odua, and Cisco Catalyst switches at MedikLabs.

Installing the ML310e
In a word, iLO.

HP’s Integrated Lights-Out – iLO, as HP’s server provisioning and management package is generally known – rocks.

I took the base server, logged on to iLO, selected the options I wanted, connected to the SmartArray controller and carved out the storage as I wanted it, pointed it at a network share, and that was it.

iLO downloaded available upgrades, installed Windows Server 2012 R2, set configuration parameters as I wanted them, and I was set.

That really was easy!

Believe me, if you don’t use iLO, well…it’s your fault.
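For readers who prefer scripting over the web UI, newer iLO firmware also exposes a REST interface (later aligned with the DMTF Redfish standard). As a sketch only – the endpoint path and property names follow the Redfish schema and may differ on your firmware, and `one_time_boot_request` is a hypothetical helper that merely composes the request without sending it:

```python
import json

# Hypothetical helper: compose a Redfish-style one-time boot override
# for an iLO management processor. The URL and property names follow
# the DMTF Redfish schema; verify them against your iLO's API docs.
def one_time_boot_request(ilo_host, target="Cd"):
    url = f"https://{ilo_host}/redfish/v1/Systems/1/"
    payload = {
        "Boot": {
            "BootSourceOverrideEnabled": "Once",
            "BootSourceOverrideTarget": target,  # e.g. "Pxe", "Cd", "Hdd"
        }
    }
    return url, json.dumps(payload)

url, body = one_time_boot_request("ilo.example.local", target="Pxe")
print(url)  # https://ilo.example.local/redfish/v1/Systems/1/
```

In practice you would PATCH that payload to the URL with your HTTP client of choice, authenticated against the iLO; the point is that everything I did interactively above is also automatable.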

Orbiting O’Odua

I inserted the ML310e into the test environment here at the O’Odua, as the fourth physical server in the Server 2012 Lab, joining a Proliant ML350 G7, Proliant DL380, Proliant DL385, and both the Proliant MicroServer and the Proliant MicroServer Gen8.

The constantly changing virtualization lab at the Orbiting O’Odua is used by me to personally validate the Hyper-V scenarios my staff and/or our outside consultants architect and deliver on behalf of our clients.

For this test, the ML310e was subjected to constant high loads over several days at a time. We then changed the configuration several times, all the while keeping server utilization very high, because I wanted to see not only whether the tested servers would be up to the task, but also, by testing them to the point of failure, whether the units would perform reliably over several reconfiguration events.

The Proliant ML310e Gen8 v2 handled the tasks with aplomb, no doubt helped by the RAM increase to 32GB.
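For anyone wanting to reproduce the sustained-load portion of this test, a crude CPU-burn harness is trivial to sketch in Python. This is an illustrative stand-in, not the actual tooling used in the lab:

```python
import time
from multiprocessing import Pool, cpu_count

def burn(seconds):
    """Spin the CPU for roughly `seconds`; return loop iterations done."""
    end = time.monotonic() + seconds
    n = 0
    while time.monotonic() < end:
        n += 1
    return n

def sustained_load(seconds=1.0, workers=None):
    """Keep `workers` cores busy for `seconds`; one iteration count each."""
    workers = workers or cpu_count()
    with Pool(workers) as pool:
        return pool.map(burn, [seconds] * workers)

if __name__ == "__main__":
    # Run one short burst per core; real soak testing would loop this
    # for hours while monitoring temperatures and error counters.
    print(sustained_load(seconds=0.5))
```

A real soak test would also exercise disk and network, but even a simple all-core burn like this, left running for days, will flush out thermal and stability problems.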


MedikLabs

At MedikLabs, I used the ML310e as the general AD, file services, and EMR server for the attached outpatient clinic that MedikLabs is associated with.

We brought down the ML110 G6 file server that ran the lab, and replaced it with the ML310e. We also decided to replace the server hosting the in-house electronic medical records (EMR) used at the active physician’s clinic at MedikLabs, another Proliant ML110 G6, with the very same ML310e, utilizing a single ML310e for the tasks we had previously assigned to two separate servers.

For a four-month period, the ML310e was the workhorse of the clinic and lab techs, running day-to-day operations, including the massive, unwieldy Generation One EMR, with its severely un-optimized Oracle database.

The Dell PowerEdge T110

We used a Dell PowerEdge T110 server with a similar hardware configuration as a reference server to see if Dell would be in the mix.

However, I have to note that Dell is not.

While the hardware was similar, the device was not.

In build, the PowerEdge felt like a slapped-together device, nothing more than the sum of its parts. While that was good a decade ago, it is not today. Servers need to be, and do more. Dell needs to do more.

I was not impressed.

What HP has done with the new entry-level Proliant ML310e is raise the bar in performance, storage and storage options (with the hot-pluggable hard drive bays, for example), reliability, implementation, and management.

The ML310e Gen8 v2’s performance is several levels above that of the ML110 it replaces. Storage and RAM are expanded, and manageability, with iLO4, is quite smooth and easy.



The Absolute Best Entry Server

As a result, the HP Proliant ML310e is the recipient of the SmallBizWindows Absolute Best Award.

This server is the very best value you can get for an entry-level server for a small business.

It is the very best server in its category you can buy. Period.

I customized the front bezel of the ML310e thusly


© 2002 – 2014, John Obeto for Blackground Media Unlimited


The day I found out that deleting an Xbox Music playlist requires an Xbox console

Yesterday, I wrote the post below, detailing my annoyance at the inability to delete playlists from the Xbox Music Windows 8x Metro app, and excoriating Microsoft for that inability.

I was wrong.

My #1 Son (chronologically counting) showed me my error as narrated below.

Crapware: apps and applications of ambiguous or little value preinstalled by either software or hardware OEMs with their product.

With any default installation of Microsoft Windows 8x, and with Windows Phone 8x, for that matter, Microsoft installs a few apps that aim to make those platforms immediately useful.

The music app for Windows 8.1, while simply titled Music, is in reality the Metro-interface Xbox Music app, for which there is a browser-delivered version and, inexplicably, no native Windows desktop version.

I have used it since it was the default application for music. However, I have not been enamored of it mainly because the user interface, while heavy on graphics, was not as friendly and intuitive as it could be.

Earlier today while getting ready to initiate a call, I decided to listen to three versions of that beautiful song, “Santa Lucia” as sung by Andrea Bocelli, Dean Martin, and Enrico Caruso using the Xbox Music Metro app.

It was then I noticed multiple copies of playlists and a few defunct playlists on my account.

I tried to delete them, to no avail. So I went online to search for a solution.

What I found was this: deleting a playlist on Xbox Music requires users to do so from their Xbox 360 gaming console.

That is effing incredible!

I swore to myself that that nonsense could not possibly be true.

I then went back to the app, and tried again. Several right-clicks in, the app gave up the ghost and froze.

I restarted and retried, and it did the same.

The severity of the issue then really hit me: the app is useless, and shows a lack of understanding of consumers.

For goodness sakes, the developers of the app figured that it was okay to require users to own an Xbox console in order to perform basic housekeeping functions!

That is beyond stupid.

Moreover, you can be sure it is not the way to world domination.

I have uninstalled the Xbox Music app from all desktops I work from.

I will also avoid it until sanity returns to Microsoft, or they acquire consumer smarts, and improve the entire user experience.

For goodness sakes, the late, and for me, lamented Zune, wasn’t this addled!

This entire nonsense reminds me of the default setting in Windows Media Player that physically deletes your media files from your computer when you remove them from your Windows Media Player library.

I will wait while you settle down from that little bit of madness. And remember, that has been the default setting over several generations of Windows and Windows Media Player!

UPDATE June 25, 2014: I was wrong.

Completely wrong.

As part of his rudimentary programming training with his Windows Phone Developer Account and Windows Phone AppStudio on his Nokia Lumia 928, #1 Son created an app that pulls RSS feeds from some of his favorite sites. One of them happens to be this blog.

Upon reading my post yesterday, he emailed me – from about 15’ away! – to let me know that he was sure I was wrong, and he could delete Xbox Music playlists on his HP Envy 15t Touch.

Really? I called him into my office and told him he was wrong.

He offered to show me, and earlier today, he did.

Screenshot (120)

This is the screen I had looked at, while right-clicking on the playlist name in the left-hand column, as is the convention in Windows, in order to bring up a context menu. Which wasn’t there. So it didn’t work.

Screenshot (121)

What #1 Son did was different: he looked at the description in the main column, saw the icon with the horizontally-centered ellipsis, and determined that it contained more options. Which it does, as clicking on it brings up more options.

Which must be a Windows 8x Metro convention.

Anyways, he was right. I was wrong.

You can delete playlists from the Xbox Music app.

My compliments to Carmen C. and Brandon L. from Microsoft who both insisted that my assertion couldn’t be true. They were right, as well.

© 2002 – 2014, John Obeto for Blackground Media Unlimited



I was at HP Discover 2014

For most of last week, I was at HP Discover Las Vegas 2014.

HP Discover is HP’s premier IT professionals’ event, held annually in Las Vegas, Nevada for North America, and in a European city for the rest of the world.

As a result, over the next few days I will be posting a series of blogs, podcasts, and videos taken at the event, in order to bring my perspective on the needs of small, medium, and midmarket businesses into the conversation.

If you follow the links below, you will find different views on the event from the independent bloggers embedded into the event as well.

Calvin Zito’s Around the Storage Block blog has a complete list of the independent bloggers embedded into the event.

© 2002 – 2014, John Obeto for Blackground Media Unlimited



The SmallBizWindows HP EliteBook Folio 1040 G1 Review

The HP EliteBook Folio 1040 G1 has been my primary PC for the past month.

The EliteBook Folio 1040 G1 – really HP? Was that the shortest name you could foist on this device? – is an Ultrabook-format laptop offered by HP.

It is very thin, sports a solid-state drive for storage, and, on this review unit, comes with a 14” full HD 1080p display.

A list of the specs for this laptop is disclosed in my “Shiny New Things” post announcing the EliteBook Folio G1 here.

The HP EliteBook Folio 1040 G1
This is a very slick, thin, light, and in my use of it, very powerful laptop.

Based on that, I wanted to compare the EliteBook with the two devices I use as laptops daily: my trusty old HP EliteBook 2740p, and the device that supplanted it, the Microsoft Surface Pro 2 128 GB.

(I had ordered and received a Surface Pro 2 512 GB in anticipation of this EliteBook, but I returned it for two reasons: 1) it became obsolete the minute Microsoft announced Surface Pro 3, and 2) it was more spec’d out than this review EliteBook Folio G1.)


OOBE and Setup

The system came with Microsoft Windows 7 and a set of discs containing recovery, Windows 8.1 install, and driver discs for both operating systems.

I didn’t even let the EliteBook Folio boot up: I attached a portable optical drive to one of the USB ports and installed Windows 8.1 from it at the very first system boot.

I performed a complete wipe and total reformat of the Ultrabook, went through the install process in the very short period of time I have come to appreciate with a clean Windows install, and initiated the Windows Update process.

Everything went swimmingly.

So swimmingly that I assumed that devices hadn’t been autodiscovered and provisioned into Windows 8.1.

I was wrong.

ALL devices had been discovered and installed.

Nonplussed, I decided to perform another wipe, and use a fresh, non-HP Windows install disk to re-install Windows 8.1 on the device to see if drivers had been streamlined into that install disk, and/or whether a default disk would afford users the same level of driver discovery the HP install disc had delivered.

It did as well.

That, I like.

After Windows Update had fully run its course, I installed Microsoft Office 2013, Microsoft Expression, and a couple of proprietary applications I cannot do without in the course of my business day.


As you can see from the comparative photos below, this is a very svelte device, even compared to the Surface Pro 2 tablet. The difference in size from the EliteBook 2740p Tablet PC is even more pronounced.

However, the EliteBook Folio 1040 G1 comes standard with two USB 3.0 ports, a DisplayPort connector, a microSD slot, and a 720p webcam in that diminutive package.

Oh, and it looks good!


Performance

The EliteBook Folio, like the other devices here, is equipped with an Intel Core i5 processor, though both the Surface and the Folio 1040 have the very latest Core i5 CPUs.

The EliteBook Folio 1040 G1’s performance is very snappy.

The combination of a fast central processor, Intel’s latest HD graphics, and a solid-state disk makes this device worthy of comparison with the flag bearer in this space, the Apple MacBook Air.

I used Expression Encoder to transcode several videos under differing loads without any perceptible degradation in the user experience.

Battery life

While the brochures for the EliteBook Folio 1040 G1 tout an all-day battery, that assertion has no bearing in the real world. It delivers 5.5 – 6.5 hours of battery life in real-world situations where a mix of browsing, desktop productivity apps, and other ancillary tasks is being performed. Still, consistently getting up to six and a half hours from a laptop, without being insanely conservative with battery usage, is very good.



It is a sealed system, just like the Surface Pro 2.

Coming from the PC world, where there are infinite user-configurable upgrade possibilities, that was a shock. However, it looks like that is the way laptops will be in the future, as evidenced by the Microsoft Surface and MacBook Air devices, for it helps with reliability.


The EliteBook Folio 1040 G1 in use

As much as I loved the old EliteBook 2740p, there was never any doubt in my mind that I was using a laptop.

However, that has changed.

The EliteBook Folio 1040 G1 is the first laptop since my old Compaq Portable 386 that I have carried in my hands without the benefit of a laptop or messenger bag.

Yes, I know the Compaq Portable was in reality a luggable. But, it had a handle which was needed to carry that 20-pound plus behemoth.

Thankfully, this EliteBook Folio 1040 G1 weighs under 4 lbs. However, it is the very slim, folio-like form factor – I see what you did there with the name, HP – that made it such a breeze to carry about. I have carried it around that way since, sometimes with my laptop bag slung on my shoulder.

It is very unobtrusive, and I like it that way.

However, the EliteBook Folio 1040 G1 is more than looks.

It comes with a gorgeous 1080p full HD screen that positively pulsates with vibrant colors. There is a full complement of HP security gear, which, I confess, I replaced with the MDM solution we use at Logikworx. Despite the sealed-in nature of this Ultrabook, I appreciate the speedy SSD and the included Bluetooth.

It also came with an agnostic microSIM slot, and NFC, both of which I haven’t had the need to utilize.

What I wanted to utilize, but could not get to work despite installing drivers, was the fingerprint reader.


The following items, I did not like:

  1. Fingerprint reader
  2. Just a single hardware mute button, no hardware volume controls
  3. Track pad. It is still hit or miss for me, after about a month. That isn’t acceptable.


I went into this with one goal in mind: see if the EliteBook Folio 1040 G1 would be able to beat the Surface Pro 2 in all aspects. I brought along the EliteBook 2740p as a baseline reference to see if HP had done something I have publicly implored them to do: design aspirational devices.

For the first goal above, the Folio completely shone. While I was unfortunately unable to use touchscreen interactions on the EliteBook Folio – grrr, HP! Windows 8. Windows 8!!! – it came out ahead of Surface Pro 2 in performance. Usability was even, and the larger screen size of the EliteBook solved my one pet peeve about using Surface Pro 2 daily: the small screen size. The beautiful 14” screen made sure I did not have to scroll endlessly in Excel.



As to design, it seems HP has had a wakeup call. Apart from the HP Workstation team – and heck, even the Proliant team – HP’s desktops and laptops have had the kind of design you would only find desirable in Ye Olde Soviet Union. They brought fug to the word ugly. Really, they did. It is refreshing to note that that is no longer the case. I hope they stay the course.


This product is indicative of what HP can do when it removes the shackles of mediocrity from the creativeness of both its design and engineering corps.

I have no fear of contradiction when I declare that it is the best laptop computer I have ever used, Microsoft Surface Pro 2 notwithstanding.

It is sleek, powerful, light, full of desirable expansion options, and was able to impressively execute every task I assigned to it.

It did so with a beauty not generally seen in the PC space.

As a result, we are gracing it with the SmallBizWindows Superstar Award.

I would be remiss if I did not explain that, had this review laptop been equipped with both a touchscreen and 8 GB of RAM, there is no doubt in my mind that it would have received our highest accolade, the SmallBizWindows Absolute Best Award.

This laptop is that good.

I will revisit this review upon the general availability of the just-announced Microsoft Surface Pro 3.

Stay tuned.

The HP EliteBook Folio 1040 G1 homepage is here.

Other Reviews of this device

A diverse group is reviewing this device. Some of them are

  • GearDiary; Judie and I had a few conversations over this laptop, and her thoughts are here.
  • Hector Russo did the same here.
  • Ernesto Pellegrino reviewed it here.
  • Techazine reviewed it here.
  • Geekazine’s review is here.
  • Wahl Network’s is here, and
  • Jake Ludington’s video preview is here.

© 2002 – 2014, John Obeto for Blackground Media Unlimited



Parsing HP’s perspective on the Software-Defined Datacenter - Part 3

This is Part 3 of a three-part post based on my understanding of the basics of the definition of the software-defined datacenter as HP sees it, and as espoused by Helen Tang, HP Converged Infrastructure Worldwide Solution Lead in this blog post here.

Part 1 was about the basics of what HP sees the SDDC as, and in Part 2 were my thoughts on her list of requirements for the SDDC journey.

In the article by Helen Tang on the HP Converged Infrastructure blog, she delves into what HP is doing in order to make software-defined bear fruit with HP products. She says,

Not that I’m tooting my own horn here (and I have every reason to do so), but HP provides some very compelling offerings in each of these areas. For example:

  • At the core is: The continued leadership with the HP BladeSystem; we’re leading the Software defined networking revolution with the largest number of SDN enabled switches shipped; and, we’ve been shipping software defined storage for 6 years, yes, way before it was cool.

Very true.

Before it was ‘cool’, and a long time before the market leader in networking heard of it, HP had been evangelizing software-defined. If I remember correctly, I first heard of HP’s Converged Infrastructure initiatives back in 2010.

  • Our HP OneView Management solution is true software-defined management, simplifying the management of both physical and virtual environments. It’s simple, it’s fast, it’s extensible so you can automate the roll-out of hundreds of active/active highly available Virtual Connect configurations at the push of a button ... or deploy a vSphere cluster from VMware vCenter in just 5 easy steps. HP OneView is the only solution that integrates server, network, and storage management together within VMware vCenter.

With every iteration, HP OneView gains more functionality. This is a product that I believe has legs. At this time though, I do not have enough information on it to determine if it is suitable enough to manage software-defined assets in a heterogeneous hardware environment.

  • Moving from custom brick and mortar data centers to modular data centers and PODs, HP gives you the flexibility, efficiency, and intelligence needed for your facilities to plug into the same “matrix” that also allows your facility to be controlled by the admin, user and application.
  • HP is dedicated to driving an open, extensible SDDC architecture that’s able to run the world’s largest and most demanding data centers. From HP Labs down to each of our business units, HP is working feverishly on making this happen. We’re thinking big, a 20 million-node data center that’s all visualized and managed from one place – powered by photonics, memristors and uber efficient compute units. Yes, you bet, those are definitely in the cards.
  • And as thousands of customers can attest, HP has strong services credibility and a global force of some of the world’s pre-eminent data center consultants. We’re here to help drive your strategy, plan your roadmap, and to deploy SDDC solutions on-premise, cloud, and/or as a managed service.

The point of this introduction to SDDC is that the vision of SDDC is exciting, it’s real, but it’s not fully baked just yet (regardless of what you might hear from other vendors). You need to have the right source delivery model (traditional IT infrastructure - networks, servers, storage, management - and Cloud solutions) ... but it also requires the ‘right shaped’ organization (people, process, governance, operations) ... and the ‘right size’ in terms of facilities, buildings, power and cooling, etc. These are all areas HP has been, and is, investing in to bring the SDDC promise to life.

As you can see from the above, software-defined is not something HP woke up to last night, and decided to get on the bandwagon with.

In fact, my first interaction with HP with regards to software-defined was at the 2009 HP Tech Forum. Branded “Converged Infrastructure”, HP at the time told us of a future need for a highly-dynamic, needs-aware series of computing devices as the only route to satisfying rapidly-evolving datacenters.

I must confess to all that at the time, the briefings seemed somewhat esoteric to me.

However, over the years, the HP Converged Infrastructure vision has been clarified, and diced into simple chunks.

From servers to networking to storage to management, HP has in that interregnum imbued all of their infrastructure products with their Converged Infrastructure DNA.

For example:

i) In servers: HP Proliant servers are the global leaders in x86 servers, and for good reason: with every iteration, Proliants get better and more reliable, and the built-in management suite gains more functionality, no doubt due to the telemetry reported by the embedded ‘Sea of Sensors’ in every Proliant.

ii) In storage: HP 3PAR. HP IBRIX. HP LeftHand. Need I say more? One of the things I like about HP is the fact that when the company sees an available superstar that could fill a hole in their offerings, they initiate a purchase for it. In storage, HP has quietly put together a portfolio that covers all aspects of data storage from high-availability to cold storage, using tapes.*

iii) In networking: HP beefed up by purchasing 3Com. Now, HP Networking is taking market share and achieving great design wins, such as the complete conversion of DreamWorks Animation Studios to HP Networking devices.

From my perch, HP has been very proactive, not only in anticipating the direction the market is going, but also in putting together a very comprehensive portfolio that works together and is engineered to work just as well in a heterogeneous, multi-vendor hardware/software environment.

Apart from HP being very strategic to my firm and our clients, HP was the first company I know of to articulate a coherent software-defined roadmap that has remained relevant by being dynamic and evolving as things change, new business needs arise, and new functionalities for a software defined world are created.

I like that.

For it shows that they might be the vendor that helps me future-proof my datacenter.

* I forget the current product names of the HP storage technologies listed above.

Comment here or email me at

The source article, While Software-Defined holds the promise of changing everything – you need to do your homework, is on HP’s Converged Infrastructure Blog.

The AbsolutelyWindows Software-Defined Series

© 2002 – 2014, John Obeto for Blackground Media Unlimited



Security seems like an afterthought with some cloud services

Spotify today.

Last week, it was eBay.

It seems kind of nuts, but don’t these companies rushing to provide cloud-based services even take the time to consider security, both infrastructure and user security, in the architecting of their products?

It is quite tiring to continually read of some newfangled service that has exposed users and their PII to breaches that could have been avoided had the product been created from the ground up with security in mind.

And if security is frowned upon by a product’s owners because it isn’t their core competency, one wonders why they don’t, won’t, or haven’t availed themselves of available user-authentication platforms such as Microsoft Account, etc.

From my perch, one would think that doing so might actually help them speed their wares to market, since that is something else they don’t have to worry about.
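And if a service insists on holding credentials itself rather than delegating to an identity platform, the bare minimum is salted, iterated password hashing. A minimal sketch using Python’s standard library; the iteration count is illustrative, not a recommendation:

```python
import hashlib
import hmac
import os

# Minimal sketch: salted, iterated password hashing with PBKDF2
# (Python stdlib). Storing plaintext or unsalted hashes is what turns
# a breach into an instant credential dump.
def hash_password(password, salt=None, iterations=200_000):
    salt = salt or os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, expected):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, expected)

salt, rounds, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, rounds, digest))  # True
```

None of this is exotic; it has been standard practice for years, which is what makes these breaches so galling.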

That said, don’t get me started on the boneheaded response by eBay to the hack(s) that exposed 145 million users to potential attacks.

Source: Chicago Tribune

© 2002 – 2014, John Obeto for Blackground Media Unlimited



Parsing HP’s perspective on the Software-Defined Datacenter - Part 2

This is Part 2 of a three-part post based on my understanding of the basics of the definition of the software-defined datacenter as HP sees it, and as espoused by Helen Tang, HP Converged Infrastructure Worldwide Solution Lead in this blog post here.

Part 1 was about the basics of what HP sees the SDDC as; this, Part 2, contains my thoughts on her list of requirements for the SDDC journey; and Part 3 will cover what I think of what HP is publicly doing to enable all this.

The 3-part series may be interspersed by a post on my reasons for focusing on HP.

In the HP Converged Infrastructure blog post, While Software-Defined holds the promise of changing everything – you need to do your homework, the author, Helen Tang (@hpdatacenter), lists the following prerequisites for a successful journey to software-defined awesomeness.

What You Need to Do to Get There
As you can probably imagine, a lot of changes must be set in motion to pave the road to SDDC. For starters, you need to do these three things:

  1. Abstract and virtualize all workloads and activities

On the face of it, this part should be relatively easy. As long as your DC components are ready for virtualization, it should be swiftly accomplished. If they aren’t, you shouldn’t think of a software-defined datacenter in the near term.

  2. Automate all your underlying infrastructure to completely eliminate manual configurations

This is where work actually starts.

As I think of our little ‘datacenter’, my thoughts center on two questions: which products can’t be configured automatically, and which applications would need ISV input in order to work?
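To make the automation prerequisite concrete, here is a tiny, hypothetical sketch (mine, not HP’s) of the declarative, desired-state approach that eliminating manual configuration implies: you declare what the infrastructure should look like, and a reconciler computes the actions needed to get there.

```python
# Hypothetical desired-state sketch: declare what the infrastructure should
# look like, and let a reconciler compute the actions -- no manual steps.
desired = {"web01": {"cpus": 4, "ram_gb": 16}, "db01": {"cpus": 8, "ram_gb": 64}}

def reconcile(current, desired):
    """Return the actions needed to move `current` to `desired`."""
    actions = []
    for host, spec in desired.items():
        if host not in current:
            actions.append(("create", host, spec))   # missing: provision it
        elif current[host] != spec:
            actions.append(("update", host, spec))   # drifted: bring it back
    for host in current:
        if host not in desired:
            actions.append(("delete", host))         # orphaned: retire it
    return actions
```

Every product that can only be configured by hand is a hole in this loop, which is why the question above matters so much.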

  3. Migrate all applications to modern operating systems that support application aware APIs

Another hard one. However, if your LOB applications require a server operating system older than Windows Server 2008 R2, then you have a problem. You HAVE to bring your environment forward.

These three prerequisites, though small in number, require a lot of thought before the commencement of an SDDC upgrade/migration.

What SDDC pre-requisites do we need from our vendors?
But that’s just what you need to do yourself. To get there, you’ll also need a lot of help from your strategic vendors. If we’re talking about the complete picture, what you really need are:

    • Software Defined Infrastructure Solutions

Of course. If not…..

    • Converged Management Software (that spans physical and virtual)

This and the point above are the reasons why choosing the right SDDC solution is of paramount importance.

You seriously cannot go with the Flava-O-The-Month provider, and expect that solution to either fit your current needs, or have the resources to develop for your future needs.

    • Flexible Facilities – yes, don’t forget your physical data center facilities

Unlike cloud ISPs, most firms cannot acquire new physical plants to build out their datacenters, so flexibility and reuse must be the norm.

    • Reference Architecture that is open and extensible

AKA ‘every little bit helps’. Just like HP’s Cloud Maps, being able to refer to reference architectures and best practices would greatly ease, and enhance the architecting of bespoke solutions.

    • Open Source that is enterprise ready, hardened and fully supported

Open Standards, I agree with. I need some time on Open Source.

    • And very likely, consulting services to help your teams make each of the component transitions in moving to an SDDC approach

I look at the acquisition of consulting services as a must. Good consultants always reduce the amount of time and resources that companies have to deploy in order to adequately implement a solution.

Part 3 follows shortly.

The source article is While Software-Defined holds the promise of changing everything – you need to do your homework, and is on HP’s Converged Infrastructure Blog

The What is Software-Defined Series

· Someone please explain software-defined to me

· The Software-Defined Datacenter, a perspective from HP’s Chris Purcell

· Software-Defined Storage: A ChalkTalk by HP’s Calvin Zito

· Parsing HP’s perspective on the Software-Defined Datacenter Part 1



Parsing HP’s perspective on the Software-Defined Datacenter

This is Part 1 of a three-part post based on my understanding of the basics of the definition of the software-defined datacenter as HP sees it, and as espoused by Helen Tang, HP Converged Infrastructure Worldwide Solution Lead in this blog post here.

Part 2 follows almost immediately, and will be on my thoughts on her list of requirements for the SDDC journey, and Part 3 will be on what I think of what HP is publicly doing in order to enable this.

The 3-part series may be interspersed by a post on my reasons for focusing on HP.

In my post here, I asked about just what, software-defined is.

HP’s Calvin Zito (@HPStorageGuy), Social Media Strategist for HP Storage, responded with a link to his informative ChalkTalk video introduction to software-defined storage, and Chris Purcell (Chrispman01), Influencer Marketing Manager for HP Converged Systems chimed in with this very good breakdown.

For which I thank them both.

Last week, HP’s Converged Infrastructure Worldwide Solution Lead, Helen Tang, posted an excellent perspective on a software-defined datacenter. Titled While Software-Defined holds the promise of changing everything – you need to do your homework, her blog post is in simple, jargon-free English.

I found the article to be very interesting, since it deals not only with the software-defined concept as it relates to computing components, but also takes a holistic view of a software-defined datacenter, a goal I am pursuing not only internally, but for the client computing assets we manage.

I will be using aspects of the article as a guide, if you will, on my own software-defined journey.

In this blog series however, I will attempt to deconstruct her article, and add my thoughts, questions, and concerns to the mix.

While Software-Defined holds the promise of changing everything – you need to do your homework
The afore-mentioned article states that datacenters of the [near] future need a complete, full-lifecycle approach that is at once integrated, streamlined, automated, efficient, and simple.

That, concisely, is what I am sure everyone wants. The problem is getting there.

“In simple terms, the vision of a ‘Software Defined Data Center’ – or SDDC – is where ‘control-enabled’ business applications will influence their own infrastructure based on business conditions; in concert with an application control plane that prioritizes all resource requests. The software defined environment is policy-based and controls virtually all aspects of the data center.”

This is where issues start to crop up, in my opinion.

While I like the fact that we would be able to dynamically reconfigure datacenters based on changing business needs, this requires a level of automation and management across several datacenter components, from servers to networking to storage, and more.

I believe it requires that most dreaded of clichés, the single pane of glass management. Current SPOG management schemes are fraught with several issues, the most glaring of which is the constant need to drop from that supposedly all-seeing SPOG to a lower-level dedicated management tool for deep configuration of particular DC components.

It also requires levels of interoperability and interconnectedness within hardware, software, and management platforms and schemas that are currently unforeseen in the industry.

Would this be allowed to happen?

Stakeholders have traditionally not interoperated unless faced with marginalization or extinction. Or worse, irrelevance.

“….SDDC is a promising way to better align IT to the speed of your business with open [sic] choices regarding how best to consume and/or deliver IT to maximize business value and IT agility.”

This is the Holy Grail for datacenter and, indeed, computing management.

Helen then makes a very apropos segue into the rôle of hardware in this software-defined future. And in doing so, she lays down what I believe is a warning billboard:

“Be very, very careful when you hear anyone say that hardware is no longer relevant in our brave new software defined world.”

Caveat emptor, indeed.

In this seeming headlong rush into software-defined nirvana, I am quite leery of every Tom, Dick, and Harry startup that claims to have solved the issue. I am actually more nervous about the industry stalwarts that position their current wares as software-defined without being able to tell me why they are just that.

So, what is HP doing about the Software-Defined Datacenter?

Well, HP has been on the software-defined beat since HP Tech Forum 2010, when their initiatives, under the Converged Infrastructure umbrella, were announced.

As a result of their prescient planning, they seem to have quite a lot of coherency about their SDDC strategy. According to Helen, HP’s goal to enable IT to optimize the rapid creation and delivery of business services, and doing so reliably, has not changed.

She then makes the following three points:

#1 It’s about both Physical and Virtual
SDDC enables IT to optimize the rapid creation and delivery of business services, reliably, through policy-based automation, from the infrastructure up to the application using a unified view of physical and virtual resources – it is not wand waving and the creation of a magic realm. It is however, enabling application level controls for your entire data center, from infrastructure to operations and management, spanning physical and virtual.

The inference here is that the software-defined datacenter must be virtualized.

I am puzzled by this.

My assumption, prior to now, has been that while there would be an appreciable use of virtualization in the software-defined datacenter, or SDDC, such virtualization would be complementary to the use of ‘the physical’.

This point requires that I learn more.

#2 It’s about control at three different levels
We’re talking about control of all functions and resources in the data center from 3 perspectives: the application, the users, and the IT administrator. It’s going to be an integrated environment that weaves together control from these three angles, in a holistic, automated and streamlined fashion – this ensures dynamic, efficient control of IT services.


Management, and effective, efficient, granular management, at that, is required.

This cannot be shirked. If not, the goal of a software-defined datacenter would never be realized.

#3 It’s about Open choices
SDDC aligns business and IT like never before by providing open choices regarding how best to consume and/or deliver IT for maximum agility, security and business value. To be effective, this must be accomplished through an integrated abstraction layer, that is Open Source that does not lock you in to any single vendor’s vision. Now the industry has not completely defined this layer yet. HP and other IT industry key players are driving this – as a key piece to the SDDC puzzle. And we intend on making this SDDC abstraction layer rock solid and enterprise ready, even though it comes from open source.

I cannot disagree with this any more strenuously.

Although I agree completely that extremely granular and deep-reaching interoperability among all stakeholders in a datacenter will be required to make this goal a reality, I do not see why it has to be open source.

While I would like to see all parties in my SDDC future adhere to open standards, I fail to see the need to make that requirement an open source requirement.

One of the reasons I am against it is that I cannot shake the feeling that the use of open source components means the ISV is either not up to the task of developing the required components in-house to open standards, or has lazily decided to offload the component development onto the community.

For full disclosure, I am not a fan of open source. Not in the very least. The recent “Heartbleed” incident has not helped the open source cause for me, and has made me move more indelibly against it. It would take a momentous development to remove my steadfast distaste for it.

Moreover, I see it as absolving software developers of required responsibilities should issues arise, as the audit trail stops at the faceless community.

I have a problem basing my business, and any financially backed QoS, on this.

What am I not seeing?

Part 2 follows shortly.

In a follow-up blog post, I will divulge why HP seems to be hogging most of my attention on this topic. I invite you to comment here or email me at

The source article is While Software-Defined holds the promise of changing everything – you need to do your homework, and is on HP’s Converged Infrastructure Blog

© 2002 – 2014, John Obeto for Blackground Media Unlimited



Jon Peddie Research CAD survey

My friend Andy just linked me to this survey by Jon Peddie Research on multi-user CAD/CAM environments.

What they would like to know, is your own experience.

So, treat this like elections in Tammany Hall or Chicago: vote early, vote often.

Answers will remain confidential.

More information on this:

Jon Peddie Research is conducting a survey to better understand how people work in multi-CAD environments. We would like to know how you work with DWG files and the issues you may have encountered preserving data integrity.

We would appreciate if you could take about 10 minutes of your time to fill in this survey and tell us about your experience. Your comments will help us understand how people work with DWG data.

Your answers will remain entirely confidential and will only be viewed as compiled responses to the surveys.

We recognize that your time is valuable and so we are holding a drawing at the conclusion of the survey (May 23rd) with the following amazing 11 prizes:

  • Two Apple iPad Air tablets,
  • Two Microsoft Surface Pro2 tablets,
  • Two Samsung Galaxy Note 10 tablets,
  • Five one year subscriptions to our bi-weekly report, TechWatch.

Click on the following link to access the survey:

About Jon Peddie Research.

© 2002 – 2014, John Obeto for Blackground Media Unlimited



Shiny New Things: HP LaserJet M525f & LaserJet P3015x

We have received the multifunction HP LaserJet M525f and LaserJet P3015x devices at LogikLabs.

The M525f is a color MFC device, while the P3015x is a monochrome, print-only device.

As we are wont to, we will throw everything at them to see what they are made of.



HP LaserJet M525f


HP LaserJet P3015x

© 2002 – 2014, John Obeto for Blackground Media Unlimited



Software-Defined Storage: A ChalkTalk by HP’s Calvin Zito

In response to my call for more information on “software-defined”, Calvin Zito, Blogger and Social Media Strategist for HP, directed me to his ChalkTalk introducing software-defined storage.

Thank you, Calvin.

© 2002 – 2014, John Obeto for Blackground Media Unlimited



The Software-Defined Datacenter, a perspective from HP’s Chris Purcell

Last week, I asked for perspectives on what software-defined is.

I am pleased to post this guest post by Chris Purcell, who is the manager in charge of Influencer Marketing for HP Converged Systems.

I worry when people start to talk about anything software-defined, as it starts to sound like where we were around 5 years ago, when cloud started consuming headlines. Those headlines quickly became the subject of cloud washing, which quickly created more confusion, leaving IT with more questions than answers.

If I took my marketing hat off, I would state that a software-defined datacenter is more an aspirational goal of many companies than anything really tangible at the moment. I know HP is working very swiftly to come up with an industry solution that can be used in this space…which will no doubt appear and be grandstanded at a future Discover event, but that date is still TBD.

In the meantime, tools like HP OneView, as a converged management platform, start to reveal the power of what a software-defined datacenter of the future really could provide users (IT and consumers). Currently the datacenter is partitioned with many management tools…networking tools, storage tools, server tools…tools to monitor temperature; the list is endless. They are all specialized and need subject matter experts to use them, and they are typically siloed to where it is difficult to see everything happening across the entire datacenter infrastructure. Also think about an end user trying to get services out of the datacenter…it’s hard, and without any type of cloud interface it is typically slow and confusing.

Without a full industry definition around a software-defined datacenter…think about it this way. A common management tool across IT that allows the management of all infrastructure across server, storage and networking…not a manager of managers, but one software application across everything. And from a consumer perspective, a common software interface to use those services without hand-holding IT acting as an engagement manager. Okay, you could argue that consumers can already get these types of online services via the cloud, but there are still many manual steps under the water that IT has to go through to provision these services correctly.

The slide above might help illustrate the changes that I think will be happening pretty quickly to enable a software-defined datacenter approach. But as I mentioned at the top of this email, a software-defined datacenter is still an aspirational goal and not a tangible element within the datacenter at the moment.

Chris Purcell
Manager, Influencer Marketing
HP Converged Systems

We thank Chris for taking the time to provide some edification on this.

© 2002 – 2014, John Obeto for Blackground Media Unlimited



HP ConvergedSystem 300 for Microsoft Analytics Platform

Earlier today, HP announced availability of the new HP ConvergedSystem 300 for Microsoft Analytics Platform.

A member of HP’s award-winning ConvergedSystem family of cloud-ready and cloud-enabled converged systems, this device is a box – if you can call it that – for Microsoft’s implementation of Hadoop which, in typical Bad Microsoft fashion, is called HDInsight, a moniker that does nothing to inform the IT populace at large about what it means!

For those wondering, the current ‘Big Data’ landscape looks as depicted in the image below.


This is today.

Already, people are drowning in data being returned from the very limited number of sensors deployed today. And as we all know, data is useless, until you can put it to work.


Now go ahead and multiply, by several logarithmic orders, the number of sensors about to be deployed as part of the very heated Internet of Things initiatives sweeping the world.

Without meaningful sorting, slicing, dicing, analysis, and compartmentalizing of the data, it is quite worthless.

Into this, steps HP and Microsoft, trying to deliver an affordable, yet powerful solution.


HP brings market-leading hardware and services prowess into this, while Mighty Microsoft brings its unparalleled superiority in software into the fray.

Using HDInsight, HP and Microsoft have created an end-to-end product that combines dashboard, predictive modeling, and search into a scalable appliance.

This device is built to handle not only the velocity of incoming data, but also the variety of data being input into it, using its native data warehouse.



As you know, Microsoft HDInsight is engineered for querying both structured and unstructured data.


HP ConvergedSystem 300 for Microsoft Analytics Platform is built to be scalable, with the added benefit on onsite upgradeability.



Users will be able to leverage HP’s extensive service organization in the planning, deployment, and technical support of this product.



As I learn more about HP ConvergedSystem for Microsoft Analytics Platform, and possibly get a deep dive into it at the forthcoming HP Discover event in Las Vegas, I will add to this in a subsequent blog post.

© 2002 – 2014, John Obeto for Blackground Media Unlimited



Shiny New Thing: HP EliteBook Folio 1040 G1 Ultrabook

The HP EliteBook Folio 1040 G1 is the latest device to land here at AbsolutelyWindows for a long-term review.

This is one sweet device:

Numero uno: as an Ultrabook™, this unit is sexily svelte.

It is also very powerful, sporting the latest Intel Core i5 processor, Intel vPro, and a SanDisk SSD for storage. It has a backlit keyboard, a new-fangled largish touchpad, and a bright 1080p full HD screen.

It came with Microsoft Windows 7 <urgh/>, which I replaced immediately. Not even going past the EULA acceptance page before I attached a USB optical drive to the system and replaced that sucker with an install of Windows 8.1 Pro, which was included in the box.

As you’d imagine from a system based on the Intel Core i5, this unit is fast, light, and ready for business.

I installed Office 2013 on it, connected it to both my freebie OneDrive and my OneDrive for Business accounts, and it was on.

Unless a review scenario comes up, at this time, I think I will perform the review of this device personally, hopefully posting initial thoughts next week.

HP EliteBook Folio 1040 G1 Specs

  • Intel® Core™ i5
  • 256 GB mSATA SSD
  • Intel HD Graphics 4400
  • 14" diagonal LED-backlit 1080p
  • 2 USB 3.0, 1 of them w/ charging
  • 1 DisplayPort
  • 1 microSD slot
  • 720p HD webcam
  • Spill-proof backlit keyboard

A selected group is also reviewing this device, and I will update this post with links to their experiences as I get them.

© 2002 – 2014, John Obeto for Blackground Media Unlimited



Microsoft changes ad agencies. Not too soon!

About time, wouldn’t you say?

Seriously, in the WIIFM category, what had Microsoft’s old ad agencies done for Microsoft, well, you know….lately?

Actually, since The Rolling Stones started Windows 95 up?

Nothing, is correct.

Seriously: they botched the Surface launch, which featured a large number of ads starring a bunch of prancing twenty-somethings snapping the integrated kickstand on Surface tablets.

Interestingly, and incredibly, the ads NEVER told us why we should be interested enough in Surface, or even why we should buy it!

The ads failed to tell us anything about the product they were supposed to sell to us!

Did Microsoft learn?


They then came out with the generally ho-hum Xbox One ad campaign.

How did that work out?

Let’s see: if you can discount the muddled messaging, and the let’s-shoot-ourselves-in-the-foot-and-then-put-bloody-foot-in-our-mouth mishaps from the Xbox team regarding Xbox One, the subsequent ad campaign did absolutely nothing to help the fortunes of that device.

5+ million Xbox One units sold, you tell me?

Well, no thanks to the ads, I tell ya.

The purchases so far have been as a result of the pent up demand for a new console, and there’s absolutely no reason why the decidedly superior Xbox One should be trounced in the marketplace by the inferior PlayStation 4, the $100 Kinect markup notwithstanding!

Here’s hoping that these new agencies come up with interesting, intelligent, and most importantly, extremely aspirational ad campaigns.

Here’s the Windows 95 ad, for those young'uns too young to remember.

Microsoft’s Windows 95 “Start Me Up” ad

Source: Microsoft chooses new agencies for creative and media services

Related: Microsoft picks new advertising agencies, takes a bite out of Razorfish

© 2002 – 2014, John Obeto for Blackground Media Unlimited



Someone please explain software-defined to me

Everywhere you go these days in the information technology field, something software-defined hits you in the face.

You cannot escape it.

It is in networking, servers, storage, everywhere!

However, when asked for a succinct description of just what, exactly, software-defined <insert field or product here/> is, no vendor seems to be able to do so, or even to have a clue!

It is a bloody mess.

It is almost as if software-defined <anything> has devolved to the days when vendors who didn’t have suitable products ‘cloudified’ their offerings by attaching the moniker “cloud” as either a prefix or suffix to the names of their existing products in order to hoodwink unsuspecting end-users into buying their tired, warmed-over products.

Thankfully, the world-at-large was not fooled.

They quickly dubbed these misadventures as “cloud-washing”, a sobriquet which made the accused hasten to deliver actual cloud, or cloud-enabled products.

Which brings us to “software-defined”.

For companies with datacenters of any size, making the right choices with regards to technology trends is essential. These trends could be in performance, capacity, power and cooling, or manageability.

Therefore, the technology leader at said firms has to sort through the chaff and determine what is needed in order to not just pull ahead, but stay ahead of the category leaders. Or at least stay abreast of them. For managed services, it is a must!

As a result, I find myself devouring all information I can find on this topic, and asking the following questions:

  • What is software-defined?
  • Is it an add-on I can strap to the thousands of servers we currently have?
  • Would I have to perform an R&R (rip-and-replace) for our networking equipment?
  • Will our storage have to live in new boxes?
  • Are current management platforms and frameworks obsolete?
  • Do we have to junk our newly-developed and implemented custom managed services dashboard and platform?
  • What is the rôle of hardware, and indeed, hardware OEMs in this new software-defined universe?

Someone, help me here!



Microsoft finally prices Windows right

In the two weeks since BUILD 2014, there seems to be a renewed vigor among tablet, smartphone, and large-format smartphone/small-format tablet (NO, I won’t use that word!) hardware OEMs about the creation of new Windows and Windows Phone devices.

This is very good, and bodes well not just for the fortunes of Microsoft, but also for the entire Windows ecosystem of users, software ISVs, retailers, tech professionals, and partners, of which I am one at the day job.

This looming expansion of hardware offerings will be the rising tide that lifts all boats, as we will soon be able to place devices according to customer needs and budgets, and not be limited to a couple of hardware OEMs in tablets, and a handful of device manufacturers in mobiles.

Why the newfound interest?


Sensible pricing.

Yes, people of Terra: sensible pricing.

Incredibly, and in the face of great competition, Microsoft had clung to vestigial pricing from the days when its products ruled.

Think about it, for these two product segments.

In smartphones, when Microsoft was about to perform its mobile OS reset, it still priced the then-new Windows Phone OS relative to a) the cost of the old Windows Mobile operating system, and b) as if the new operating system would be such a gee-whiz thingy that all comers would gladly pay for it.

Erhh, not so.

If you thought that point of view was nuts, then the decision to charge would-be developers of apps for that mobile OS – then with a 0.00% share – the amount of $99 per annum was way beyond nutso; it was completely mental!

Traipsing over to Windows on tablets and such, I understand that Microsoft was still in the business of squeezing all the Latinum it could from the OEMs that were producing Windows products.

While I did discount companies like Acer, HP, and the like as complete whiners – a position that hasn’t changed, because their product offerings were truly shitty – I have to concede that Microsoft didn’t make any monetary concessions to them with respect to having them develop innovative products for Windows RT (or whatever it is called today) and Windows on small form factor – relative to laptops – devices.

In another instance of dereliction of duty, the honchos at Microsoft seemed stunned at the market failure of Windows, and were quite unable to make necessary course corrections with respect to product pricing in order to goose the market.

The quite silly end-user incentives they came up with were childish, and unsurprisingly not well received.

It was as if they had forgotten two things about selling: incentivize the OEMs into delivering desirable products at several price points, and as I blogged here, solving the last mile hurdle.

Seriously, outside of the inhabitants of #1, Microsoft Way, in Redmond, it was clear to every sack of 53% water on this planet that Windows pricing had to be normalized relative to a) market conditions, b) market perceptions, and c) hardware OEM profitability.

Satya Nadella Walks In
Satya’s anointing as Microsoft CEO was a surprise to me since prior to his appointment to that post, he didn’t come across to me as someone with, you know, the ‘vision thing’.

I voiced my concerns publicly, and I was talked down, being informed that Satya’s vision was well known internally to Microserfs, and that I’d be surprised, and impressed.

Well, he certainly impressed me with his declaration at BUILD 2014 that Microsoft had indeed freed Windows OEMs of the cost of obtaining Windows licenses for all devices under 10” screen size.

Smart. Very smart.

In the face of the fact that the primary competition for Microsoft’s affection in this space, Google’s Android, was free.

(If you notice, I didn’t mention iOS. iOS devices are the mindshare leader, and currently out of reach for Microsoft to vanquish.)

I mean, were Microsoft execs in a time warp set at 2001 when they owned all computing outside mainframes and could dictate to the world?

You really have to wonder: just what the funk took Microsoft so long?

© 2002 – 2014, John Obeto for Blackground Media Unlimited



Two theories on why Twitter is not growing as fast as it used to

For the second quarter in a row, Twitter has disappointed me, and The Street, with slowing growth, and worse yet, increasing user disengagement.

Personally, I was not that surprised at that.

I believe there are two distinct reasons for this:

1. The lack of a vibrant Twitter client ecosystem

This might surprise a lot of folks, but there has not been a vibrant 3rd-party Twitter client ecosystem for quite a while now.

In order to capture more eyeballs, Twitter bought up just about all the larger Twitter client ISVs, shuttered or ignored them, and tried driving everyone else to official Twitter apps.

As if that was not enough, Twitter came up with a convoluted app token scheme that had a 100,000 token limit, thereby limiting the upside for Twitter client app ISVs.

Meanwhile, the official Twitter apps are, well, shitty!

That does not help.

2. Twitter’s 140-character limit is now quite limiting

In plain English, I believe the strict 140-character limitation needs to be rethought, as it currently unnecessarily limits conversations.

There isn’t any conversation flow mode.

Well, except if you go to Twitter web.

There’s no way to add to a thought longer than 140 characters, and most Twitter clients do not have the APIs exposed to them that would allow them to show one.

That limitation stops, or at least stifles conversations.

For engaged users such as myself, this is frustrating. For others, it may have driven them away.
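As an illustration of the workaround burden this pushes onto users and client ISVs, here is a hypothetical sketch (not any real Twitter client’s code) of the tweet-splitting that clients end up doing by hand: chop a long thought into numbered chunks that each fit within the limit.

```python
import textwrap

def thread(text, limit=140):
    """Split a long thought into numbered tweets, each within the limit.

    A purely hypothetical client-side workaround, not a Twitter API call.
    """
    # Reserve room for a " (i/n)" suffix before wrapping the text.
    suffix_room = len(" (99/99)")
    chunks = textwrap.wrap(text, width=limit - suffix_room)
    n = len(chunks)
    return [f"{chunk} ({i}/{n})" for i, chunk in enumerate(chunks, 1)]
```

That every engaged user has to do this manually, 140 characters at a time, is exactly the friction I am talking about.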

While I, to quote that imbecile Herman Cain, “don’t have any [scientific] facts [or data] to back that up”, I know what stops me from engaging more on Twitter, and I have to believe that since I am not an outlier, others must feel the same way.

© 2002 – 2014, John Obeto for Blackground Media Unlimited