Zigurd Mednieks's Blog
March 10, 2020
The Direct-to-consumer (DTC) Lemmings

Currently, there is a shakeout in direct-to-consumer, also known as "DTC," startups. You may wonder what distinguishes the direct-to-consumer category from, simply, e-commerce, which is a venerable and already consolidated category. Why has a lot of supposedly smart money spawned a large number of doomed startups?
While lemmings periodically have population explosions, they don't actually leap into the sea. So let's take a closer look at the breeding grounds of DTC startups to see why there are many more of them than the e-commerce ecology can support. Winter is coming.
A superficial reading of this wave of e-commerce startups can seem to find justification for their existence in the apparent opportunity to disrupt dominant consumer brands that have the power to deny competitors access to retail distribution and mass media marketing. This is why you see startups bringing mundane items like shaving razors, mattresses, bicycles, eyeglasses, and even penis-stiffening pills to the direct-to-consumer e-commerce channel, with hip marketing on social media, podcasts, and other alternatives to broadcast media.
They keep breeding in part because the list of product categories that have potential for disruption is almost limitless. Consumer products are a much larger part of the US economy than technology. But they are also a brutal business. Barriers to entry are, in principle, low. Building barriers isn't a matter of technology.
In many consumer product categories, dominant brands have attained fat margins through near-monopoly power. They don't put a stranglehold on their retail channels just to be evil. Survival is at stake.
Through e-commerce efficiency, bypassing brick and mortar retail, and taking a little less margin, DTC brands look to be the next Gillette, or Sealy, etc. by being less evil. In addition to a price break, many DTC brands claim to provide better service, "sustainable" materials, or some other element of commercial wokeness.
They do this in a very hostile environment. "Winter" comes in the form of Amazon and Walmart. Both have house brands that perform much the same function as DTC brands: If their suppliers are too fat and happy, they will create a house brand product that takes price-sensitive customers away from brand-name products.
What's left for DTC, then? There is the wokeness factor, which you might already think is weak sauce. There is also the chance to become actual innovators: Once a DTC startup passes a certain size, they can justify a technology effort to "build a moat" around their business. If they can mine customer data, or innovate in their supply chain, or find some other justification for building a technological advantage specific to their product category, and if that advantage turns out to have significant value, they will attain at least three valuable goals:
Raising the cost of entry to their category
Increasing their competitive advantage over traditional consumer brands
Making themselves attractive as an acquisition target for Amazon, Walmart, or the traditional consumer brands they are disrupting
That's the story they tell their investors. Sometimes it works. Wayfair, an online furniture retailer, is racing to build a defensible advantage in a product category that hasn't been a focus for Amazon, and that poses some challenges for Amazon's physical and technological infrastructure, before Amazon turns its attention there.
Most DTC startups never cross that chasm.
But they keep coming toward that cliff. DTC startups are easy to start. They have also been bred into the supply of business graduates. Business schools run incubators and startup classes that are intended to give students a taste of the mechanics of creating a startup. These incubators and courses pump a lot of students through the startup experience in one semester. Time pressure discourages starting tech-based businesses because tech is hard, creating a liaison with a school's engineers is hard, and implementing tech can be slow. The result is a commonplace attitude in business: "Can't we do without all this engineering?" Never mind actual science.
Will the DTC winter change the circumstances that flooded the startup world with DTC? As long as DTC remains fashionable, and as long as venture funds' limiteds demand to be in the fashionable thing, there is money for DTC, and eager young business grads willing to take that money and run right over the precipice.
Published on March 10, 2020 06:11
January 9, 2020
Over-optimized

The Boeing 737MAX crisis is a crisis of over-optimization.
"Over-optimization" is a curious term: "Optimal" means the best possible state of being. So how can something be over-optimized? You wouldn't say someone is "over-healthy," and much less would you expect that person to be ill. Yet over-optimization killed two plane-loads of people and grounded a plane that cost billions to develop, idling billions of dollars more in finished goods sitting on aprons, runways, and parking lots.
Boeing made a safe and very efficient airliner. Then, in pursuit of even greater efficiency, they made it less safe. Boeing over-optimized.
There are other elements to this tragedy, especially in that Boeing could have chosen to mitigate some of the risks they created in over-optimizing by adding redundant sensors, better warnings of failures, etc.
Boeing could also have done, months earlier, things like what they announced in January of 2020: that 737 MAX pilots should receive simulator training before flying the plane. Boeing created a very complex problem for themselves in multiple dimensions of how to manage this crisis. But, at the center of this problem, is an over-optimization.
There are also a lot of things that did not contribute to this crisis: That the 737 MAX is an evolution of an old design did not contribute to the 737 MAX crisis. The 737 design evolution has kept the plane modern, otherwise it would not be a candidate for further development, and it would not be competitive with newer designs. The position of the new larger engines did not make the 737 MAX "unstable." The new engines change the flight characteristics of the plane, but not in a way that makes it more dangerous.
The way the 737 MAX evolved from previous designs forms the context of Boeing's fatal decisions. Boeing could have made different decisions. In theory, they could have designed a new plane. But the decisions they made to evolve the design and to put new engines on the plane did not themselves make the plane less safe.
Airliners have to be efficient. If a new airliner costs too much to develop, it will have to be priced higher to make back the development costs. If a new airliner burns too much fuel because the engines are outdated, it won't sell because fuel costs are a significant part of airlines' total costs. Had it not been for the crisis stemming from two crashes, and Boeing's subsequent response, the 737 MAX would be a case study in brilliant product development strategy.
But then Boeing took optimization one step farther: Instead of training pilots to fly a 737 with somewhat different flight characteristics, Boeing decided to eliminate the need for pilot training entirely.
By now, the acronym MCAS has been much in the news. It stands for Maneuvering Characteristics Augmentation System. It is the system Boeing used to make the 737 MAX fly like the previous generation 737 by automatically adjusting the angle of the horizontal stabilizer, also referred to as "trim." MCAS is the proximate cause of the 737 MAX crashes. If the sensor input to the MCAS system is faulty it will move the horizontal stabilizer to point the nose of the plane down when it should not do so. If this happens at low altitude, pilots have only seconds to intervene manually to prevent a crash.
Because the purpose of the MCAS system is to make the 737 MAX fly just like the previous generation of 737, it was not mentioned in the 737 flight manual. This may seem like a terrible omission, but previous generations of the 737 have automatic trim adjustment systems, too. Pilots are trained to use electrically assisted or manual trim adjustments if needed; the manual adjustments are literally a pair of cranks in the cockpit that operate the horizontal stabilizer trim by means of cables and pulleys.
Over the course of developing the 737 MAX, the extent to which the MCAS system adjusted trim was increased fourfold, making recovering from a failure more difficult. Unlike a similar system used on another Boeing aircraft, the 737 MAX MCAS system reads input from only one sensor, instead of a redundant pair of sensors, making a failure more likely. The 737 MAX cockpit controls also make recovery from an MCAS failure more difficult. Boeing made optional an indicator light that would inform pilots of a sensor failure and only 20% of the 737 MAX fleet was fitted with that option. But, when working correctly, MCAS did the trick: The 737 MAX flies like the plane it replaces. No pilot retraining needed.
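The difference a redundant sensor makes can be sketched in a few lines of code. This is a toy illustration, not Boeing's actual MCAS logic; the threshold, function names, and fail-safe behavior are all invented for the example:

```python
# Toy model of single-sensor vs. redundant-sensor trim logic.
# All values and names are invented for illustration.

AOA_LIMIT_DEG = 15.0  # hypothetical angle-of-attack threshold

def trim_command_single(aoa_deg):
    """Trim nose-down whenever the lone sensor exceeds the limit."""
    return "nose_down" if aoa_deg > AOA_LIMIT_DEG else "hold"

def trim_command_redundant(aoa_readings_deg, max_disagreement_deg=5.0):
    """With redundant sensors, a large disagreement disables the
    system instead of acting on a possibly faulty reading."""
    if max(aoa_readings_deg) - min(aoa_readings_deg) > max_disagreement_deg:
        return "disabled"  # sensors disagree: fail safe, alert the crew
    mean = sum(aoa_readings_deg) / len(aoa_readings_deg)
    return "nose_down" if mean > AOA_LIMIT_DEG else "hold"

# A sensor stuck at 74.5 degrees while the plane actually flies at 5:
print(trim_command_single(74.5))            # acts on the bad reading
print(trim_command_redundant([74.5, 5.0]))  # disagreement detected
```

The point of the sketch: with one sensor, a stuck reading is indistinguishable from a genuinely high angle of attack; with two, the disagreement itself is a detectable fault.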
The sequence of decisions leading to the crashes is easy to see in hindsight. But neither Boeing nor the FAA saw it at the time. Even after the crashes Boeing's response appeared to be to shift blame and avoid having to reengineer MCAS, add a redundant sensor, or retrain pilots. Billions of dollars were at stake. Hundreds of unsuspecting people, on new modern airliners, died.
Published on January 09, 2020 20:34
September 5, 2019
Whatever Happened to the Amazon "Ice" Phone?

Fire & Ice

The most important difference between Amazon's then-rumored "Ice" phone and the previous Fire Phone is that it was to incorporate the Google ecosystem: Google's package of proprietary applications on top of the open source Android OS. Almost all Android phones, other than those for Chinese and Russian domestic markets, are made this way.
Amazon has used a derivative of the Android OS for their Fire OS, without the Google Play Store and Google's application suite, and used Amazon's own ecosystem. Amazon's Fire OS is a very credible derivative of Android. But, unlike Amazon's tablets, which sell well to customers seeking an inexpensive device for videos, reading, and music, Amazon's attempt at a Fire Phone was a disaster of major proportions. Imagine getting in your car and finding that Google Maps is not only missing, but unavailable on your Amazon phone.
Amazon's "Ice" phone would have been a change welcomed by customers: Android phones are well-liked and widely bought because Google's ecosystem of communications, mapping, and personal information applications make for an excellent smartphone experience. Amazon's own-brand products are also well-liked for low prices and generally decent quality. Having Amazon's suite of shopping and media apps pre-loaded would be convenient for many users.
Detente

What this change would amount to is that Google and Amazon realize they are not fundamental strategic competitors, despite competition in some very important market segments. It would also amount to a realization that creating a first-tier smartphone experience has become such an elite pursuit that it would take a very expensive and risky effort, even for a company with a CEO who makes space rockets as a sideline, to create such an experience entirely on Amazon's own ecosystem.
Amazon is not alone in deciding not to bet the enormous resources it takes to make a phone ecosystem. Microsoft has made peace with Android, as well. Microsoft and Amazon have left the smartphone platform business to Google and Apple in order to concentrate on what makes Microsoft and Amazon distinctive. It is more important to them to maintain and extend their respective areas of dominance on Apple and Google's mobile platforms than it is to be fully vertically integrated competitors in mobile platforms, despite the obvious importance of mobile platforms in computing in general.
Amazon's move is important in other ways: It's good for Android. Amazon sells a lot of inexpensive Android phones. Some are quality products. But many cheap Android phones are stuffed with bloatware and, worse still, spyware for monetizing your data.
Amazon "Ice" phones, if the pricing of Amazon's tablets is any indication, would undercut all but the cheapest phones on price but be well engineered in both hardware and software. An Amazon Android phone with Google's ecosystem included would be an excellent choice of a low-priced Android phone, maybe the best Android phone for the price.
"Maybe the best Android phone for the money" was never on anyone's mind when they looked at the Fire Phone. It was "Huh, can I live with a phone that's only got Amazon's ecosystem?"
Amazon could extend this approach to its Android-based tablets. They could access many of the same benefits, even though the "killer apps" like mapping are less compelling on a tablet. But, for many customers, it would be a large amount of added value without diminishing the affinity to Amazon these customers have.
Why no product?

There are issues under the surface that make decisions like this harder than just "What's best for the customer?" For example, on an Android phone with Google's proprietary software components baked in, Google will get valuable information about Amazon customers. If Google decides to compete more directly and more effectively with Amazon, the strategic cost of giving Google that information goes up. This decision is not without risk. But the alternative is to continue to market Amazon apps for Android devices, where Google gets that same information anyway.
Why no further rumors, and no announcements? Either Google or Amazon could have gotten cold feet and deemed their "frenemies" too dangerous to cooperate with. Overlapping platform capabilities, such as OS update mechanisms, could have proved too difficult to resolve. Alexa and the Google Assistant might never be on speaking terms.
While you can now "cast" your Amazon Prime Video streams to your Google Chromecast devices, imagine all the touchpoints where Google's and Amazon's platforms would have to mesh, or at least avoid clashing, in a phone that incorporated both ecosystems. Mechanisms for trusting and vetting applications would have to be harmonized across two app stores, or Amazon might have to decide to phase out their app store.
A quick search confirms that there is no news of an "Ice" phone since 2017.
Published on September 05, 2019 14:18
August 15, 2019
5G: Hype vs Reality
Telecom companies, their suppliers, and politicians are putting 5G in the news

There have been a lot of news stories about 5G, a new mobile wireless standard. The theme of many of these suspiciously similar articles is that 5G is going to transform everything. I'll tell you what to expect in reality, what is wishful thinking on the part of the telecom industry, and why telecom service providers and equipment makers are hyping fantasies.
5G is a better radio

5G means better mobile devices and a better mobile network. There are three main reasons 5G is better:
5G introduces a new radio technology that makes more efficient use of radio spectrum
The network behind those radios will be faster and have lower latency
5G enables using more of the radio spectrum

There are many factors in the increased sophistication of 5G radios. These are the most important:
Encoding the digital data more densely into the radio signal
Transmitting and receiving signals simultaneously
A wider range of strategies for encoding data
Using multiple antennas for input and output (MIMO)
Forming and steering beams of radio energy
Using more radio spectrum, if it's available, for an individual user's data

The 5G radio is an impressive feat of technology. Sometimes you will see 5G referred to as "5G NR." The "NR" stands for New Radio.
In addition to increased sophistication in radio technology, 5G makes use of radio bands above 6 gigahertz, also referred to as millimeter wave bands. 5G relies on these high frequency bands to provide very high speed links, in the gigabits per second.
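The connection between wider channels and gigabit speeds follows from the Shannon capacity bound, C = B * log2(1 + SNR): capacity grows in direct proportion to bandwidth. A quick sketch, using round illustrative numbers rather than any specific operator's deployment:

```python
import math

def shannon_capacity_mbps(bandwidth_hz, snr_db):
    """Upper bound on error-free data rate for a channel of the
    given width and signal-to-noise ratio: C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e6

# A 20 MHz 4G-style channel vs. a 400 MHz millimeter wave channel,
# both at the same illustrative 20 dB signal-to-noise ratio:
print(round(shannon_capacity_mbps(20e6, 20)))   # 133 (Mbps)
print(round(shannon_capacity_mbps(400e6, 20)))  # 2663 (Mbps)
```

Twenty times the spectrum buys twenty times the capacity at the same signal quality, which is why the millimeter wave bands are so attractive despite their drawbacks.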
Gigabits per second sounds pretty great, but millimeter wave radio bands have limitations: radio waves in this range can't penetrate walls, or even some windows. They need a line-of-sight between the sending and receiving antennas. They go less far when it rains.
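A back-of-the-envelope free-space path loss calculation shows part of the range penalty; the blockage and rain effects just described come on top of this idealized figure:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Loss over 100 m at a 2 GHz mid-band frequency vs. a 28 GHz
# millimeter wave band: about 23 dB worse, roughly a 200x power gap.
print(round(fspl_db(100, 2e9), 1))   # 78.5 (dB)
print(round(fspl_db(100, 28e9), 1))  # 101.4 (dB)
```

Every doubling of frequency costs 6 dB of free-space loss, so high-band 5G needs many more, and much closer, radios to deliver the same signal strength.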
The ability to form and steer beams of radio energy, called "beamforming," can enhance the ability to use millimeter wave bands, but using this capability effectively is challenging.
5G capabilities require a lot of computing power. Simultaneous receiving and sending requires twice the radio hardware and computing power. Beamforming requires a lot of computing, too. So does fitting more bits into the same amount of spectrum, and using more spectrum.
The economics of putting increasingly complex digital radio technology into mobile devices is favorable because billions of people buy and use mobile devices. 5G radios will be significantly more expensive at first. This year, the first 5G phones will be more expensive by hundreds of dollars. They will be thicker and heavier. Their battery life will be poorer. But, as more people buy 5G devices, the cost of making 5G chips, and the incredibly complex 5G software and chip designs, is destined to decline. The chips will get more efficient and less expensive. But that will happen over the course of several years. There is no substitute for time when refining chip designs.
Over the next two or three product generations of 5G chips and the phones that use those chips, things will be back to normal and 5G phones will cost just a little more than 4G phones and battery life will also be back to normal.
If this were the whole story the result would be that data on your mobile device will be somewhere between the same and ten times faster, and occasionally, in some locations, 100 times faster. But 5G is not just a sophisticated radio.
5G is a better, faster, and more expensive network

The design of the network of radios and their connections to network nodes that run that radio network and connect it to the internet is profoundly ambitious, in multiple dimensions:
The physical infrastructure of the 5G radio network, when it operates in millimeter wave bands connecting to phones, requires a vast number of radios, referred to as "small cells"
The performance goals of the 5G network are difficult to attain and sustain
5G infrastructure will be very expensive and complex to build
Some of the technical wizardry of 5G will be hard to do reliably in real-world networks
A 5G network will cost about twice as much to run and maintain as a 4G network

The advantage of a large number of small cellular radios is that some difficult scenarios can be addressed. 5G networks in ports, stadiums, and conference halls, for example, will be able to offer enough capacity for attendees to get mobile wireless connections far better than they do now. These are the places where millimeter wave bands and lots of small cells really transform the ability to solve network engineering problems. The question is: Are there enough sensible use cases to make wide deployment of small cells viable?
5G network operators can build 5G networks gradually. 5G telecom industry standards provide more than ten different approaches to designing and building a 5G network, divided into two broad categories: Standalone (SA), where a pure purpose-built 5G network connects 5G devices, and Non-standalone (NSA) networks where the 5G New Radio is used with mostly 4G network infrastructure, alongside 4G radios. The availability of gradual paths to 5G means that, at first, on most networks, in most places, the 5G experience will not feel different from 4G.
The fully realized vision of a 5G SA network that has extremely low end-to-end latency, and that can do exotic things like "network slicing" requires an incredibly expensive reconstruction of the mobile network to fiber optic backhaul and high performance network nodes, plus extremely complex control software.
Network slicing refers to the ability to run multiple virtual networks with different performance characteristics on one set of network hardware. The use cases envisioned for network slicing include providing a "slice" for public safety users.
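A toy sketch of the slicing idea: one pool of capacity divided among virtual networks with different guarantees. The slice names and numbers are invented for illustration, and a real 5G scheduler works on radio resources and virtualized network functions, not a single bandwidth figure:

```python
# Illustrative model of network slicing: each slice gets a guaranteed
# share of a common link, honored in priority order. All names and
# values here are invented for this sketch.

SLICES = {
    "public_safety": {"guaranteed_mbps": 50,  "priority": 0},
    "broadband":     {"guaranteed_mbps": 400, "priority": 1},
    "iot":           {"guaranteed_mbps": 10,  "priority": 2},
}

def allocate(total_mbps):
    """Meet each slice's guarantee in priority order, then hand any
    leftover capacity to the best-effort broadband slice."""
    alloc = {}
    remaining = total_mbps
    for name, s in sorted(SLICES.items(), key=lambda kv: kv[1]["priority"]):
        grant = min(s["guaranteed_mbps"], remaining)
        alloc[name] = grant
        remaining -= grant
    alloc["broadband"] += remaining
    return alloc

print(allocate(1000))  # {'public_safety': 50, 'broadband': 940, 'iot': 10}
```

The interesting property is what happens under congestion: at low total capacity, the high-priority slice still gets its guarantee while best-effort traffic is squeezed, which is the whole commercial promise of selling slices.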
The risk for 5G network operators who take a more aggressive approach to building out a 5G network is that the new sources of revenue 5G is supposed to enable, like public safety agencies renting a network slice, 5G in factories, telemedicine, and IoT device makers using 5G extensively, never actually materialize.
There will be islands of 5G nirvana if only because most network operators want to showcase the highest performance capabilities of 5G and test whether a complete 5G implementation adds up to a qualitative difference in the mobile user experience and in the kinds of products they can sell. Most of the planet will be 4G for a long time to come.
5G has the potential to be a market trainwreck. For example, using millimeter wave bands for cars on the go is challenging because even car windows can block millimeter wave signals. That's right: 5G implies a mobile network that has trouble with signals penetrating cars. Solutions, like antennas embedded in car windows, have been proposed.
The capital and operating costs of 5G have driven both equipment makers and network operators to speculate on sometimes fanciful uses of 5G, and to reach for markets that may not materialize. In reality, many aspects of the 5G marketing vision will never happen.
Parts of 5G are a fantasy

When you specify the capabilities of telecom network equipment and write standards for those networks you create scenarios called "use cases." In 5G, this process of envisioning scenarios for how the network could be used includes things that are, to put it kindly, speculative.
Here are just some of the unlikely futures embodied in 5G use cases:
5G is a critical enabling technology for autonomous vehicles
5G is going to transform factories
5G IoT devices will be everywhere
Telemedicine is enabled by the 5G generational transition in mobile technology
Telcos will be in the business of providing network cache and computing resources
Many people will be walking around with VR or AR headgear on

Let's start with autonomous vehicles, or "self-driving" cars. This is a typical "use case" that looks good if you assume the roads can be blanketed with both 5G radios and very low-latency 5G networks and network nodes. The fact is that makers of autonomous cars cannot rely on any mobile network connectivity at all, and can even less rely on or wait for pervasive low-latency 5G networks that implement features like network slicing to isolate vehicle-to-vehicle traffic on a dedicated set of network resources. Autonomous cars, and vehicle-to-vehicle (V2V) communications, are going to happen with or without 5G.
Factories and telemedicine, similarly, could use 5G radio technology, but why would they? Why would a telemedicine application make use of wireless connections when a wired connection is available? Why would a factory use wireless to gain a small increment in flexibility when moving and reprogramming factory machines makes up the vast majority of the cost of reconfiguring a production line? If you sense a bit of déjà vu, it is because telemedicine was trotted out as a key use case for 3G and 4G, and it still hasn't happened.
One of the most speculative use cases in 5G hype is augmented reality (AR) and virtual reality (VR). Imagine tens of millions of AR users walking around with headgear on, relying on low latency to prevent the motion sickness that comes from the feeling that one's vision doesn't quite correspond to one's movement. This requires "edge computing."
AR and VR are both nascent technologies and are unlikely to be either enabled or blocked by the pace of 5G. It is also an open question whether they are going to break out of niche applications. We are unlikely to have to worry about people randomly vomiting due to lag, as they move through "mixed reality," on a regular basis.
The theory behind "edge computing" is that applications will have to be pushed out into the mobile network operators' networks to satisfy latency requirements, and that network operators will develop a huge business in assuring that applications like AR, and media like Netflix shows, get to users' phones in the optimal way.
Mobile network operators do not have the expertise that Google, Amazon, Apple, and Microsoft have in operating flexible computing resources. The implication that 5G will turn mobile network operators into the new platform for content delivery and software-as-a-service is unlikely to happen.
Seeing the world through telecom-colored glasses is not new. When 3G was launched, all sorts of fanciful scenarios were part of the public relations push to promote the idea that 3G capabilities were transformational. But Steve Jobs and the iPhone prevailed in shaping the role of 3G: The mobile experience is the internet experience, and the role of network operators is to deliver the bits and get out of the way. So it will be with 5G.
Parts of 5G are bad

5G is good. 5G is massively over-hyped. But is it actually harmful?
Unfortunately there are aspects of 5G that are outright bad:
Deep packet inspection is designed-in
"Edge computing" means "not neutral"
5G positions incumbent network operators as "too big to fail"
5G is portrayed as a strategic issue critical to national security

The ability of 5G networks to guarantee delivery of large critical flows in limited bandwidth, and to deliver low latency, depends on the ability to discriminate between kinds of traffic. Some of this will be preconfigured for, for example, emergency communications. But, in many cases, 5G features depend on deep packet inspection. This means the network is looking at the content of your traffic and deciding how to treat it. This compromises end-to-end security and, obviously, privacy.
Deep packet inspection is inimical to end-to-end security, and vice versa, which makes it hostile to network neutrality. In a neutral network, no applications are favored over others. Network neutrality is key to innovation: The low cost of launching a new internet application using on-demand computing and storage resources from providers like Amazon has created an explosion in innovation. Having to negotiate with a mobile network operator for transport priority, storage, and computing resources in a 5G "edge computing" architecture makes launching apps much slower and more expensive.
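The contrast can be made concrete with a toy classifier; the packet format and signatures below are invented for illustration. A neutral network forwards on addressing information alone, while DPI keys its treatment of traffic off the payload bytes, which is exactly what end-to-end encryption makes unreadable:

```python
# Toy sketch of deep packet inspection (DPI) vs. neutral, header-only
# handling. The "packet" dicts and signatures are invented.

SIGNATURES = {b"GET /video": "video", b"REGISTER sip:": "voice"}

def classify_headers_only(packet):
    """A neutral network sees only addressing information."""
    return ("port", packet["dst_port"])

def classify_dpi(packet):
    """DPI reads the payload itself to decide how to treat traffic."""
    for signature, kind in SIGNATURES.items():
        if packet["payload"].startswith(signature):
            return kind
    return "unknown"

plain = {"dst_port": 80, "payload": b"GET /video/stream.m3u8 HTTP/1.1"}
encrypted = {"dst_port": 443, "payload": b"\x17\x03\x03\x00\x45..."}

print(classify_dpi(plain))      # 'video': readable payload, DPI works
print(classify_dpi(encrypted))  # 'unknown': encryption defeats DPI
```

A network that needs DPI to deliver its differentiated service tiers therefore has a built-in incentive to resist end-to-end encryption, or to charge for the traffic classes it can still identify.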
Not the least harm is the impact of 5G capital requirements on a highly indebted industry. 5G's requirements for the number of radios, how they are connected, and the fiber network and new network nodes required to build the complete vision of 5G puts network operators on a collision course with property owners, governments, and bond markets.
The two largest US mobile network operators, AT&T and Verizon, roughly doubled their long-term debt from about $60 billion each in 2010 to about $120 billion each in 2017. In the same period, operating cash flow remained roughly flat for AT&T, and declined for Verizon. (Source: lightreading.com)
The strains are showing: Among leading industrial nations, although the US has excellent 4G availability, the US ranks poorly for network speed and latency. (Source: opensignal.com) Network performance is strongly linked to capital spending on networks. How can we expect carriers to deliver orders-of-magnitude better performance in 5G if they do so poorly building and operating their 4G networks?
This is one reason why part of the 5G hype is that 5G is a national strategic priority, and that China, the bogeyman of the day, will overtake the US if the US does not embark on a crash program of 5G upgrades, with government money and giveaways like free access to municipal resources.
In fact, China's own network operators are cautious. Yang Jie, chairman of China Mobile, has publicly struck a cautious note on 5G. His is not the voice of economic soldiers in a command economy marching in lockstep toward 5G domination. But that hasn't stopped analogies to a "missile gap," as in an opinion piece Newt Gingrich wrote for Newsweek.
5G will be mostly the same

When it comes to 5G, don't fear it. It will make your phone faster. It will make the mobile network better. More data will fit in the same spectrum. Eventually, it will do all that without making your phone hotter or draining its battery.
Don't buy into the hype. Guard your wallet from people claiming 5G is a strategic imperative. Be vigilant against network operators leveraging 5G to capture what are currently open and competitive elements of the internet infrastructure.
5G will be there, when and where it makes economic sense. So will WiFi 6, but that is another story.
5G is a better radio5G means better mobile devices and a better mobile network. There are three main reasons 5G is better:
5G introduces a new radio technology that makes more efficient use of radio spectrumThe network behind those radios will be faster and have lower latency5G enables using more of the radio spectrumThere are many factors in the increased sophistication in 5G radios. These are the most important:
Encoding the digital data more densely into the radio signalTransmitting and receiving signals simultaneouslyA wider range of strategies for encoding dataUsing multiple antennas for input and output (MIMO)Forming and steering beams of radio entergyUsing more radio spectrum, if it's available, for an individual user's dataThe 5G radio is an impressive feat of technology. Sometimes you will see 5G referred to as "5G NR." The "NR" stands for New Radio.
In addition to increased sophistication in radio technology, 5G makes use of radio bands above 6 gigahertz, also referred to as millimeter wave bands. 5G relies on these high frequency bands to provide very high speed links, in the gigabits per second.
Gigabits per second sounds pretty great, but millimeter wave radio bands have limitations: radio waves in this range can't penetrate walls, or even some windows. They need a line-of-sight between the sending and receiving antennas. They go less far when it rains.
The ability to form and steer beams of radio energy, called "beamforming," can enhance the ability to use millimeter wave bands, but using this capability effectively is challenging.
5G capabilities require a lot of computing power. Simultaneous receiving and sending requires twice the radio hardware and computing power. Beamforming requires a lot of computing, too. So does fitting more bits into the same amount of spectrum, and using more spectrum.
The economics of putting increasingly complex digital radio technology into mobile devices is favorable because billions of people buy and use mobile devices. 5G radios will be significantly more expensive at first. This year, the first 5G phones will be more expensive by hundreds of dollars. They will be thicker and heavier. Their battery life will be poorer. But, as more people buy 5G devices, he cost of making 5G chips, and the incredibly complex 5G software and chip designs, is destined to decline. The chips will get more efficient and less expensive. But that will happen over the course of several years. There is no substitute for time when refining chip designs.
Over the next two or three product generations of 5G chips and the phones that use those chips, things will be back to normal and 5G phones will cost just a little more than 4G phones and battery life will also be back to normal.
If this were the whole story the result would be that data on your mobile device will be somewhere between the same and ten times faster, and occasionally, in some locations, 100 times faster. But 5G is not just a sophisticated radio.
5G is a better, faster, and expensive networkThe design of the network of radios and their connections to network nodes that run that radio network and connect it to the internet is profoundly ambitious, in multiple dimensions:
The physical infrastructure of the 5G radio network, when it operates in millimeter wave bands connecting to phones, requires a vast number of radios, referred to as "small cells"
The performance goals of the 5G network are difficult to attain and sustain
5G infrastructure will be very expensive and complex to build
Some of the technical wizardry of 5G will be hard to do reliably in real-world networks
A 5G network will cost about twice as much to run and maintain as a 4G network
The advantage of a large number of small cellular radios is that some difficult scenarios can be addressed. 5G networks in ports, stadiums, and conference halls, for example, will be able to offer enough capacity for attendees to get mobile wireless connections far better than they do now. These are the places where millimeter wave bands and lots of small cells really transform the ability to solve network engineering problems. The question is: Are there enough sensible use cases to make wide deployment of small cells viable?
5G network operators can build 5G networks gradually. 5G telecom industry standards provide more than ten different approaches to designing and building a 5G network, divided into two broad categories: Standalone (SA), where a pure purpose-built 5G network connects 5G devices, and Non-standalone (NSA) networks where the 5G New Radio is used with mostly 4G network infrastructure, alongside 4G radios. The availability of gradual paths to 5G means that, at first, on most networks, in most places, the 5G experience will not feel different from 4G.
The fully realized vision of a 5G SA network that has extremely low end-to-end latency, and that can do exotic things like "network slicing" requires an incredibly expensive reconstruction of the mobile network to fiber optic backhaul and high performance network nodes, plus extremely complex control software.
Network slicing refers to the ability to run multiple virtual networks with different performance characteristics on one set of network hardware. The use cases envisioned for network slicing include providing a "slice" for public safety users.
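A network slice can be thought of as a bundle of performance guarantees carved out of shared hardware. Here is a toy model of the idea; the slice names and numbers are invented for illustration, and real 5G slicing is configured in the operator's core network, not in application code:

```python
from dataclasses import dataclass

@dataclass
class NetworkSlice:
    name: str
    max_latency_ms: float   # latency bound the slice promises
    guaranteed_mbps: float  # throughput reserved for the slice
    priority: int           # lower number preempts higher numbers

# Hypothetical slices sharing one physical network:
slices = [
    NetworkSlice("public-safety", max_latency_ms=10, guaranteed_mbps=50, priority=0),
    NetworkSlice("consumer-broadband", max_latency_ms=50, guaranteed_mbps=5, priority=2),
    NetworkSlice("iot-metering", max_latency_ms=500, guaranteed_mbps=0.1, priority=3),
]

def admit(slice_: NetworkSlice, available_mbps: float) -> bool:
    """Toy admission check: can the network still honor the guarantee?"""
    return available_mbps >= slice_.guaranteed_mbps
```

The hard part is not the bookkeeping above; it is honoring those guarantees across thousands of radios and network nodes under real load.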
The risk for 5G network operators who take a more aggressive approach to building out a 5G network is that the new sources of revenue 5G is supposed to enable, like public safety agencies renting a network slice, 5G in factories, telemedicine, and IoT device makers using 5G extensively, never actually materialize.
There will be islands of 5G nirvana if only because most network operators want to showcase the highest performance capabilities of 5G and test whether a complete 5G implementation adds up to a qualitative difference in the mobile user experience and in the kinds of products they can sell. Most of the planet will be 4G for a long time to come.
5G has the potential to be a market trainwreck. For example, using millimeter wave bands for cars on the go is challenging because even car windows can block millimeter wave signals. That's right: 5G implies a mobile network that has trouble with signals penetrating cars. Solutions, like antennas embedded in car windows, have been proposed.
The capital and operating costs of 5G have driven both equipment makers and network operators to speculate on sometimes fanciful uses of 5G, and to reach for markets that may not materialize. In reality, many aspects of the 5G marketing vision will never happen.
Parts of 5G are a fantasy
When you specify the capabilities of telecom network equipment and write standards for those networks, you create scenarios called "use cases." In 5G, this process of envisioning scenarios for how the network could be used includes things that are, to put it kindly, speculative.
Here are just some of the unlikely futures embodied in 5G use cases:
5G is a critical enabling technology for autonomous vehicles
5G is going to transform factories
5G IoT devices will be everywhere
Telemedicine is enabled by the 5G generational transition in mobile technology
Telcos will be in the business of providing network cache and computing resources
Many people will be walking around with VR or AR headgear on
Let's start with autonomous vehicles, or "self-driving" cars. This is a typical "use case" that looks good if you assume the roads can be blanketed with both 5G radios and very low-latency 5G networks and network nodes. The fact is that makers of autonomous cars cannot rely on any mobile network connectivity at all, much less rely on or wait for pervasive low-latency 5G networks that implement features like network slicing to isolate vehicle-to-vehicle traffic on a dedicated set of network resources. Autonomous cars, and vehicle-to-vehicle (V2V) communications, are going to happen with or without 5G.
Factories and telemedicine, similarly, could use 5G radio technology, but why would they? Why would a telemedicine application use wireless connections when a wired connection is available? Why would a factory use wireless to gain a small increment in flexibility when moving and reprogramming factory machines makes up the vast majority of the cost of reconfiguring a production line? If you sense a bit of déjà vu, it is because telemedicine was trotted out as a key use case for 3G and 4G, and it still isn't one.
One of the most speculative use cases in 5G hype is augmented reality (AR) and virtual reality (VR). Imagine tens of millions of AR users walking around with headgear on, relying on low latency to prevent the motion sickness that comes from the feeling that one's vision doesn't quite correspond to one's movement. This requires "edge computing."
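The latency argument for edge computing is simple physics: light in optical fiber covers roughly 200 km per millisecond, so distance alone eats into the budget before any processing happens. A back-of-the-envelope sketch, where the distances and the 20 ms comfort target are illustrative assumptions:

```python
# Light in optical fiber travels at roughly 200,000 km/s,
# i.e. about 200 km of one-way distance per millisecond.
FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay over fiber, ignoring processing time."""
    return 2 * distance_km / FIBER_KM_PER_MS

motion_to_photon_budget_ms = 20.0  # a commonly cited AR comfort target

print(round_trip_ms(1500))  # distant cloud region: 15 ms gone on propagation alone
print(round_trip_ms(30))    # metro edge site: 0.3 ms
```

If rendering must happen off the headset, only the nearby edge site leaves any budget for actual computing, which is the whole case for edge computing.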
AR and VR are both nascent technologies and are unlikely to be either enabled or blocked by the pace of 5G. It is also an open question whether they are going to break out of niche applications. We are unlikely to have to worry about people randomly vomiting due to lag, as they move through "mixed reality," on a regular basis.
The theory behind "edge computing" is that applications will have to be pushed out into the mobile network operators' networks to satisfy latency requirements, and that network operators will develop a huge business in assuring that applications like AR, and media like Netflix shows, get to users' phones in the optimal way.
Mobile network operators do not have the expertise that Google, Amazon, Apple, and Microsoft have in operating flexible computing resources. The implication that 5G will turn mobile network operators into the new platform for content delivery and software-as-a-service is unlikely to happen.
Seeing the world through telecom-colored glasses is not new. When 3G was launched, all sorts of fanciful scenarios were part of the public relations push to promote the idea that 3G capabilities were transformational. But Steve Jobs and the iPhone prevailed in shaping the role of 3G: The mobile experience is the internet experience, and the role of network operators is to deliver the bits and get out of the way. So it will be with 5G.
Parts of 5G are bad
5G is good. 5G is massively over-hyped. But is it actually harmful?
Unfortunately there are aspects of 5G that are outright bad:
Deep packet inspection is designed in
"Edge computing" means "not neutral"
5G positions incumbent network operators as "too big to fail"
5G is portrayed as a strategic issue critical to national security
The ability of 5G networks to guarantee delivery of large critical flows in limited bandwidth, and to deliver low latency, depends on the ability to discriminate among kinds of traffic. Some of this will be preconfigured for, for example, emergency communications. But in many cases, 5G features depend on deep packet inspection. This means the network is looking at the content of your traffic and deciding how to treat it. This compromises end-to-end security and, obviously, privacy.
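At its simplest, deep packet inspection means matching patterns in packet payloads. A toy sketch, with signatures invented for illustration rather than taken from any real DPI product, also shows why DPI collides with end-to-end encryption: an encrypted payload matches nothing, which pushes a DPI-dependent network to weaken or route around that encryption:

```python
# Toy payload signatures mapping byte prefixes to traffic classes.
SIGNATURES = {
    b"GET ": "http",
    b"\x16\x03": "tls-handshake",
}

def classify(payload: bytes) -> str:
    """Naive DPI: label a packet by inspecting its payload bytes."""
    for prefix, label in SIGNATURES.items():
        if payload.startswith(prefix):
            return label
    return "unknown"

print(classify(b"GET /video HTTP/1.1"))     # plaintext content is readable
print(classify(b"\x16\x03\x01\x00\xa5"))    # TLS exposes only handshake metadata
print(classify(b"\x9f\x2c\x11"))            # encrypted payload is opaque
```

Everything the network can classify, it can also prioritize, throttle, or log, which is exactly the tension with privacy and neutrality described above.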
Deep packet inspection is inimical to end-to-end security, and vice versa, which makes it hostile to network neutrality. In a neutral network, no applications are favored over others. Network neutrality is key to innovation: the low cost of launching a new internet application using on-demand computing and storage resources from providers like Amazon has created an explosion in innovation. Having to negotiate with a mobile network operator for transport priority, storage, and computing resources in a 5G "edge computing" architecture makes launching apps much slower and more expensive.
Not the least harm is the impact of 5G capital requirements on a highly indebted industry. 5G's requirements for the number of radios, how they are connected, and the fiber network and new network nodes required to build the complete vision of 5G puts network operators on a collision course with property owners, governments, and bond markets.
The two largest US mobile network operators, AT&T and Verizon, roughly doubled their long-term debt from about $60 billion, each, in 2010 to about $120 billion, each, in 2017. In the same period, operating cash flow remained roughly flat for AT&T, and declined for Verizon. (Source: lightreading.com)
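Doubling debt over those seven years works out to roughly ten percent compound annual growth, a pace that flat operating cash flow plainly cannot match:

```python
# Debt roughly doubled from 2010 to 2017; the implied compound
# annual growth rate is 2^(1/7) - 1.
years = 2017 - 2010
cagr = 2 ** (1 / years) - 1
print(f"{cagr:.1%}")  # about 10.4% per year
```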
The strains are showing: among leading industrial nations, the US has excellent 4G availability but ranks poorly for network speed and latency. (Source: opensignal.com) Network performance is strongly linked to capital spending on networks. How can we expect carriers to deliver orders-of-magnitude better performance in 5G if they do so poorly building and operating their 4G networks?
This is one reason why part of the 5G hype is that 5G is a national strategic priority, and that China, the bogeyman of the day, will overtake the US if the US does not embark on a crash program of 5G upgrades, with government money and giveaways like free access to municipal resources.
In fact, China's own network operators are cautious. Yang Jie, chairman of China Mobile, recently stated:
"Capital expenditure, including 5G, will be lower than last year's total amount."
China Unicom will "tighten the purse strings on 5G as it requires huge amount of investment," said Chairman and CEO Wang Xiaochu.
These are not the words of economic soldiers in a command economy marching in lockstep toward 5G domination. But that hasn't stopped analogies to a "missile gap." In an opinion piece in Newsweek, Newt Gingrich writes:
“Go” is an ancient Chinese board game based on encirclement and territorial control. It is the most ancient and complicated board game in the world. Beijing is engaged in a concerted strategy of encirclement and control of wireless. But too many nations in the West are content to “let the chips fall where they may.”
To be fair, it's good to see Newt has let go of free market dogma. He proposes a remedy to 5G problems: a government-funded nationwide 5G network:
The project should be nationwide, with broad geographic coverage—in contrast to current operators’ plans for targeted, urban-specific 5G rollouts, which leave rural America in a 3G or 4G world. This will benefit those on the wrong side of the digital divide while making possible a wider range of innovative uses of the network. These include precision agriculture, automotive and trucking telemetry, telemedicine, and many other advancements.
This closes the circle perfectly. All the fantasy use cases get implemented, and mobile network operators' debt doesn't increase. The peril from China is held at bay. America strides forward into the 5G future.
5G will be mostly the same
When it comes to 5G, don't fear it. It will make your phone faster. It will make the mobile network better. More data will fit in the same spectrum. Eventually, it won't make your phone hotter or drain its battery, either.
Don't buy into the hype. Guard your wallet against people claiming 5G is a strategic imperative. Be vigilant against network operators leveraging 5G to capture what are currently open and competitive elements of the internet infrastructure.
5G will be there, when and where it makes economic sense. So will WiFi 6, but that is another story.
Published on August 15, 2019 12:00
May 31, 2019
A $99 Android Tablet That Doesn't Suck

The last of my Android tablets, a Nexus made by LG, died over a year ago. It had stopped being updated years before that. This is why I had not bought a replacement:
I don't like Samsung's Android extensions and bloatware, and how they delay updates
I won't buy a cheap tablet with an out-of-date version of Android
The Pixel Slate is a software platypus, part Chromebook, part Android tablet, and expensive
I don't use Alexa, and I don't like the lack of Google Play Services on Amazon Fire tablets
Except for Samsung, Google has done a terrible job cultivating tablet manufacturers to make good Android tablets at good prices. Amazon Fire tablets, which are great for consuming Amazon media content, don't run a lot of apps I use. The choice has been between Samsung, or cheap and underpowered tablets running versions of Android that are obsolete right out of the box and never updated thereafter.
Recently, I was listening to a tech news podcast and heard, in passing, about Walmart selling inexpensive Android tablets running Android 9, or "Pie." Maybe it's time to try another tablet!
It's Onn
On Walmart's web site, they looked mostly like other inexpensive Android tablets, except for the up-to-date version of Android. They are branded "Onn," which is Walmart's house brand for electronics, mostly TVs and phone accessories. This was the first time I had heard of Walmart having a house brand for electronics.
It's hard to find
Walmart's web site is a mess. Without Google having indexed the Walmart site better than Walmart has, it would have been impossible to find the Onn tablets. And if I hadn't found a review that mentions the Onn brand, I might not have found them at all. If you ever wondered whether Amazon has to worry about Walmart, you can stop now. There are semi-literate peasants in the developing world selling their crafts on e-commerce sites with better search and filtering. I won't insert a URL here; it will probably change or be wrong by the time you read this. Just google "Onn 10 inch tablet." You'll get there.
In fact the first time I found a page on Walmart's site about these tablets, it told me seven of the model I wanted were in stock in Framingham. I was in Newton at the time and on my way home, so that was convenient. When I got to the Walmart in Framingham that model was not in stock. I searched again to see if I should ask the clerk to check their stockroom or store database, but when I looked at the site again, Framingham was not listed among the stores stocking the model I wanted. The clerk insisted they never had them. So off to Hudson, still on the way home. Walmart! What's wrong with your web site?
Found it
There are at least three models of Onn tablet: 8 inch, 10 inch, and 10 inch with a keyboard-folio case. I was feeling like a high roller so I went with the big one with the keyboard case, for $99. Returning things to Walmart is simple, so I bought it even though none of the Onn tablets were on display at the Hudson store. The Framingham store had the 8 inch tablet on display, so I was able to verify it was in fact running Android 9. Other than the screen and keyboard, the CPU, GPU, memory, etc., and software are identical in all models:
1280 x 800 screen resolution
1.3GHz MediaTek quad core processor with GPU
2GB RAM
16GB flash plus micro SD slot (unpopulated)
Android 9 "Pie"
0.3 megapixel front-facing camera
2 megapixel rear-facing camera
That's right: $99 for a 10 inch tablet with a keyboard case. Charger and cable included.
Using it
The case has a magnetic closure and attachment point for the tablet. The keyboard is better than the Logitech bluetooth keyboard I sometimes use with my Pixel 2 phone. It is every bit the equal of a quality aftermarket keyboard-case for an iPad or Samsung tablet.
A $99 tablet has to be fiercely cost-reduced. It looks nice, though. It's thicker than a new iPad, but not as thick and heavy as Amazon's tablets. The screen "glass" appears to be plastic, which is another reason to go for the model with a case. Subjectively, it seems better made than the vast majority of inexpensive Android devices. But I'm not going to be careless about this device. It feels nothing like my Pixel 2 phone that has survived a few drops without a protective case. It's not flimsy, but I don't think a lot of crashworthiness was in the budget.
The cameras are mediocre. The audio is OK, and has enough power to drive magneplanar HiFiMAN headphones, but can't hold a candle to the Fiio usb dac/amp I normally use with a Mac laptop.
Battery life is mediocre. While the specs don't list milliamp-hours (mAh), they do claim 5-hour battery life, and that's about right for a mix of uses. Good enough for a longish movie. Long enough for domestic flights, or to have the Starbucks people start looking at you like you should move along. The battery life is the only aspect that's actually a disappointment, to me. The other compromises are very tolerable.
The screen, with its not very high 1280x800 resolution, is surprisingly tolerable. Text is crisp. Video looks great. Off-axis color and brightness do not deteriorate until about 45 degrees.
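Pixel density explains why the screen is merely tolerable. Using the 1280 x 800 resolution from the spec list, and assuming a nominal 10.1 inch diagonal for the "10 inch" model (the exact figure may differ):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal resolution over diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(1280, 800, 10.1)))  # about 149 ppi
```

That is roughly 149 ppi, compared with the roughly 264 ppi of a Retina iPad: tolerable, not Retina-class.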
Performance is adequate. Google's Docs and Sheets apps are not laggy. YouTube is smooth, and the UI is responsive. App loading is slower than on my Pixel 2, but not annoyingly.
I would have been willing to pay more for a better screen, though given that this screen seems perfectly fine, I have to wonder why, and I'd like a bigger battery. But this tablet delights far more than it disappoints, and the price makes it an excellent value. $99! With a nice keyboard case!
Is there any future in it?
One reason to consider the Onn tablets is that they run the current version of Android. The newer the version of Android you start with, the longer the useful life of the product. This is not only for the obvious reason that you are not starting with an old version of the OS, but also because Google has made Android easier for device manufacturers to update in more recent versions. In new license agreements, Google requires manufacturers to provide security updates for two years. Walmart's Onn brand might be legit enough not to flout those license requirements. The amount of Walmart bloatware on these tablets isn't onerous, and should not delay updates to the OS.
Walmart has done a good job with these tablets. Google is doing a better job enabling and incentivizing manufacturers to keep their devices up to date. Walmart has a further reason to keep these tablets running smoothly: Walmart wants to be a viable competitor to Amazon in e-commerce. Walmart owns the Vudu streaming service, which is preinstalled on the Onn tablets. The Onn tablets also have Walmart's e-book reader and market preinstalled, as well as other Walmart e-commerce apps. They should be motivated to create and sell tablets that put and keep these apps in customers' hands.
On the other hand, Walmart's web site is so terrible that it makes one doubt their commitment to competing with Amazon, and that casts doubt on whether the Onn tablets will play the same role as Amazon's Fire tablets.
If we're lucky, this product is the first of a wave of Android tablets with reasonable prices and up to date OS versions.
Published on May 31, 2019 08:27
October 29, 2018
Legal Compliance Is Insufficient in Stopping Hate Speech on Social Networks
US law is especially liberal when it comes to free speech. This puts hate speech, however you define it short of a call to violent action, under the protection of the First Amendment. For the foreseeable future, the fight against hate and violence must operate in this context.
This feels unsatisfactory when the killer of eleven people in a Pittsburgh synagogue vented his hate on the social network Gab leading up to the attack, even announcing his intention to act.
Social networks can, in fact, be much better at containing the problem of online hate speech. You should raise your expectations of social networks to help solve the problems of fascist, antisemitic, racist, and sexist speech. Here's why:
Social networks and hate speech
Social networks are privately run, and have the freedom to limit any kind of speech on their platforms, for any reason. You have no right to have Facebook publish your posts. For example, selling illegal drugs or passing around copyrighted works belonging to large publishers will promptly get your account suspended.
Social networks also have the responsibility to not be enablers of hate and violence, and of other abuse of their platforms such as influence campaigns by foreign adversaries.
This fact should be a useful moderation of free speech absolutism. Social networks should provide a product where freedom of expression is maximized without sliding into the muck of hate speech, just like any business that provides a public gathering place should want to provide the best possible enjoyable experience to customers. Creating a harmful environment is irresponsible. But, obviously, this sounds idealistic in the current environment.
When the management of a social network like Gab claims that the entire purpose of Gab is to go out to the boundaries of free speech, they are ducking their responsibility. Gab has the right to be irresponsible in that way, but they don't have a right to be helped by hosting providers, ad networks, payment providers, and other toolmakers of the internet ecosystem. Gab is rightly finding itself isolated. When more responsible platforms fail to perform responsibly, they not only fail to serve their customers, they fail society and responsible commerce as a whole.
Minority groups and women suffer disproportionately from harassment by online haters, so much so that education and recruitment in internet technology industries are negatively affected by online harassment. The supply of adept engineers and managers in social networks themselves is being constrained by their own inability to rein in online hate and harassment.
It is time to do better. Social networks already have the tools to do better. It is time to apply social network data and analytics to that task.
Social networks have the tools to be responsible
Facebook has billions of users, all over the planet. You can readily imagine that being responsible for preventing the spread of hate and insidious hostile propaganda is not easy. One can't realistically expect that every individual with serious potential for violent action can be identified.
Social networks face difficulty in dimensions other than scale. It is the special talent of social network "stars" who attract large followings, and who earn big incomes on social networks, to manipulate social networks to their advantage. That means that social networks are literally incentivizing people who are especially good at subverting the system, and creating a culture of sophistication in outthinking their algorithms and incentives.
Not least is the problem that social networks have a financial disincentive to root out automated subversion and bad behavior. Social networks are valued on the basis of the number of people visiting them and engaging in activity. Fake activity, like software "bots" that mimic human users, counts toward the numbers used to attract ad revenue to social networks.
Nevertheless, these facts can't excuse the current poor performance of social networks in cleaning out their dark, hostile netherworlds of trolls, "shitposters," nazis, and racists.
Social networks have honed the art of finding out your desires even more accurately and objectively than you are self-aware of those desires. Social network analytics have been built to a remarkable level of refinement because they are the engines of the social network business model: You are monitored and measured for every signal you emit. Your desires are what they sell to advertisers, quantified, tested, and proven to be far more effective than any medium that preceded social networks.
It's not just you. Social networks know your social graph. They know the strength of those connections. They know the frequency and amount of your interactions. They know your connections' desires better than you do. They know the human context of your desires in ways inaccessible to you.
Because they know you so well, and know everyone who uses their platforms so well, we should expect their ability to identify and isolate hate and violence to be much better than is currently apparent. They don't need to rely on being able to distinguish a harmless ranting madman from one who will pick up a rifle and start shooting. They have context. They have everyone's connections. They know the likelihood you will act to buy something. They have the tools to discern the blowhard from the possible gunman.
But we should not be satisfied by social networks merely detecting the hateful. Social networks have the tools to reduce the harm from the hateful.
Have higher expectations
Not only can social networks use their sophisticated tools to detect bad behavior, they also have the potential to isolate and reduce the impact of that behavior. They can turn the tools of the badly behaved against them: Social networks use bots to create the impression of activity for relatively benign purposes like promoting the use of multiplayer games. That is, social networks have their own tame bots.
Just as hate speech mongers use bots and other techniques to subvert productive conversation on social networks, the networks could use their own automated technologies to isolate hate speech, turn the haters against one another, and leave them shouting into the wind, blind to the fact that nobody is listening.
You can bet that social networks use all tools in their toolbox to keep you engaged and sell you stuff. You should expect them to be at least as sophisticated in the service of ridding your social network experience of trolls and bots.
Don't accept excuses
Don't take meeting the minimum standard of legal compliance as an excuse. Social networks have rid themselves of people intent on the crime of sharing a music recording; surely they could try harder with the neo-Nazis. They've got the tools to detect and quarantine this disease. It is time for them to act.
This feels unsatisfactory when the killer of eleven people in a Pittsburgh synagogue vented his hate on the social network Gab leading up to the attack, even announcing his intention to act.
Social networks can, in fact, be much better at containing the problem of online hate speech. You should raise your expectations of social networks to help solve the problems of fascist, antisemitic, racist, and sexist speech. Here's why:
Social networks and hate speechSocial networks are privately run, and have the freedom to limit any kind of speech on their platforms, for any reason. You have no right to have Facebook publish your posts. For example, selling illegal drugs or passing around copyright works belonging to large publishers will promptly get your account suspended.
Social networks also have the responsibility to not be enablers of hate and violence, and of other abuse of their platforms such as influence campaigns by foreign adversaries.
This fact should be a useful moderation of free speech absolutism. Social networks should provide a product where freedom of expression is maximized without sliding into the muck of hate speech, just like any business that provides a public gathering place should want to provide the best possible enjoyable experience to customers. Creating a harmful environment is irresponsible. But, obviously, this sounds idealistic in the current environment.
When the management of a social network like Gab claims that the entire purpose of Gab is to go out to the boundaries of free speech, they are ducking their responsibility. Gab has the right to be irresponsible in that way, but they don't have a right to be helped by hosting providers, ad networks, payment providers, and other toolmakers of the internet ecosystem. Gab is rightly finding itself isolated. When more responsible platforms fail to perform responsibly, they not only fail to serve their customers, they fail society and responsible commerce as a whole.
Minority groups and women suffer disproportionately from harassment by online haters, so much so that education and recruitment in internet technology industries is negatively affected by online harassment. The supply of adept engineers and management in social networks themselves is being constrained by their own inability to reign in online hate and harassment.
It is time to do better. Social networks already have the tools to do better. It is time to apply social network data and analytics to that task.
Social networks have the tools to be responsibleFacebook has billions of users, all over the planet. You can readily imagine that being responsible for preventing the spread of hate and insidious hostile propaganda is not easy. One can't realistically expect that every individual with serious potential for violent action can be identified.
Social networks face difficulty in dimensions other than scale. It is the special talent of social network "stars" who attract large followings, and who earn big incomes on social networks, to manipulate social networks to their advantage. That means that social networks are literally incentivizing people who are especially good at subverting the system, and creating a culture of sophistication in outthinking their algorithms and incentives.
Not least is the problem that social networks have a financial disincentive to root out automated subversion and bad behavior. Social networks are valued on the basis of the number of people visiting them and engaging in activity. Fake activity, like software "bots" that mimic human users count toward the numbers used to attract ad revenue to social networks.
Nevertheless these facts can't excuse the current poor performance of social networks in cleaning out their dark, hostile netherworlds of trolls, "shitposters," nazis, and racists.
Social networks have honed the art of finding out your desires even more accurately and objectively than you are self-aware of those desires. Social network analytics have been built to a remarkable level of refinement because they are the engines of the social network business model: You are monitored and measured for every signal you emit. Your desires are what they sell to advertisers, quantified, tested, and proven to be far more effective than any medium that preceded social networks.
It's not just you. Social networks know your social graph. They know the strength of those connections. They know the frequency and amount of your interactions. They know your connections' desires better than you do. They know the human context of your desires in ways you inaccessible to you.
For the reason that they know you so well, that they know everyone who uses their platforms so well, we should expect that their ability to identify and isolate hate and violence should be much better than is currently apparent. They don't need to rely on being able to distinguish a harmless ranting madman from one that will pick up a rifle and start shooting. They have context. They have everyone's connections. They know the likelihood you will act to buy something. They have the tools to discern the blowhard from the possible gunman.
But we should not be satisfied by social networks merely detecting the hateful. Social networks have the tools to reduce the harm from the hateful.
Have higher expectationsNot only can social networks use their sophisticated tools to detect bad behavior, they also have the potential to isolate and reduce the impact of that behavior. They can turn the tools of the badly behaved against them: Social networks use bots to create the impression of activity for relatively benign purposes like promoting the use of multiplayer games. That is, social networks have their own tame bots.
Just as hate speech mongers use bots and other techniques to subvert productive conversation on social networks, the networks could use their own automated technologies to isolate hate speech, turn the haters against one another, and leave them shouting into the wind, blind to the fact that nobody is listening.
You can bet that social networks use every tool in their toolbox to keep you engaged and sell you stuff. You should expect them to be at least as sophisticated in the service of ridding your social network experience of trolls and bots.
Don't accept excuses
Don't take meeting the minimum standard of legal compliance as an excuse. Social networks have rid themselves of people intent on the crime of sharing a music recording; surely they can try harder with the neo-Nazis. They've got the tools to detect and quarantine this disease. It is time for them to act.
Published on October 29, 2018 15:37
May 31, 2016
Telirati Tips #1 Sony RAW Noise and Bricking Problems and Solutions

Here we'll take a short break from mobile telecommunications, IoT, project management, and other Serious Topics to cover a little photography. I recently ran into some commonplace problems with my camera, and found solutions to them:
- Noisy RAW files
- Bricked cameras when updating firmware
I set out to see if a firmware update would cure a problem with excess noise in RAW images from my Sony a6000, and on my way to find out, I discovered that Sony's Mac OS X firmware updater is a flaming bag of poop that bricked my camera. What I learned on my way to a solution is probably applicable to other similar Sony cameras.
The Sony a6000 is a wonderful camera. I bought one when it first came out as an upgrade from my NEX-5. In silver, it has a classic look without pandering to hipster faux 1950s rangefinder affectations. With 24 megapixels in an APS-C sensor, it packs prosumer DSLR specs into an under $1000 compact camera body. Sony's mirrorless product line got me back into photography, starting with the NEX-5, which is a modern classic of industrial design and a tour de force of camera technology packed in a tiny magnesium body. I especially like shooting with an old Canon f1.4 50mm lens on an adapter/speed booster that brings the effective wide-open aperture to around f1.2, with a scalpel-fine depth of field.
I also enjoyed treating the sensor in the NEX-5 as if it were an electronic sheet of film, using RAW image data and digital darkroom software like RawTherapee to perform the kinds of corrections modern cameras normally do for you. The problem was that the RAW files uploaded from the a6000 were excessively "noisy." Areas that should have been smooth were speckled with what looked like random noise. So I was constrained to using the JPEG files, which were, really, just fine. But it continued to annoy me that I wasn't getting at exactly what the lens laid down on the sensor.
Recently it occurred to me that I should check the firmware version. I downloaded the firmware updater from Sony's site, borrowed a Mac to run it on, and proceeded with the update. The updater informed me I was upgrading from firmware version 1.00 to version 3.10. Excellent! With so many missed updates, I felt my odds were good that there was a fix for noisy RAW files in there somewhere.
The updater has a spartan user interface with a text area purportedly reflecting the state of the update and prompting me to perform various steps, like connecting the USB cable and selecting the correct mode on the camera for the updater to run. It appeared that the update completed correctly, based on what the updater was telling me. I clicked on the "Finish" button and, somewhat to my horror, the camera did not restart. The screen was blank. A red LED near the battery door was on. Turning the camera off and back on did not help. Nor did pulling the battery. Re-running the updater yielded the same result.
A search for similar problems turned up a lot of untested advice: Turn it off, try again, take out the battery, etc. None of those nostrums helped. I started to search for official support from Sony for bricked cameras. None. You're on your own.
It turns out only one thing matters: the Mac must not enter a power-saving state during the update. If it does, the update may appear to have completed, but the updated firmware will be corrupted and the camera will not boot. If you find yourself with a bricked camera, do this:
1. Pull the battery and put it back
2. Turn the camera on
3. Exit the updater app
4. Start the updater app
5. Connect the camera to a USB cable
6. Follow the steps in the updater, skipping those, like checking the version, that can't be performed with a bricked camera
7. Make certain the computer does not enter a power-saving state, for example by periodically moving the mouse cursor (on a Mac, running the caffeinate command in Terminal for the duration of the update is another way to keep the machine awake)
If you follow these steps, your camera should turn on when the update is completed.
The really good news is that the firmware update appears to have fixed the "noisy RAW files" issue! I am happily using my favorite digital darkroom workflow again.
Published on May 31, 2016 07:56
May 20, 2016
The QUIC Brown Fox Jumped Over the Top of Carrier Messaging, or Allo, Duo, WebRTC, QUIC, Jibe, and RCS, Explained

At Google I/O 2016, Google announced two new messaging products: Allo, for text messaging, and Duo, for video communications. These are the most recent in a series of messaging products Google has created, none of which have succeeded in attracting a really large user community the way that other messaging products have done. Google doesn't release figures for monthly active users of Hangouts, while WhatsApp has a billion users, Facebook Messenger and QQ have 850 million, and WeChat has about 700 million. The stakes in messaging are very high, and, so far, Google is an also-ran.
In 2015, it looked like Google might go in a different direction, perhaps acting as a spoiler for proprietary messaging apps that don't interoperate and don't use carrier protocols like SMS and MMS. Google bought a company called Jibe that makes next-generation messaging servers for standard telecom protocols called Rich Communications Services, or RCS. If Google based a messaging system on RCS it would be inherently open and would interoperate with any client or server implementing a compatible RCS profile. Standards and interoperation could be a shortcut to wider use.
Are Allo and Duo the first shots fired in that battle? The short answer is "No." It looks like Allo and Duo have nothing to do with Jibe RCS, or RCS in general. Instead, they are aimed at providing a better messaging experience, messaging privacy, and decent performance in challenging network conditions. Duo uses QUIC, a protocol that combines all the things, like throttling and encryption, that one would otherwise have to build on top of UDP to do efficient and secure multimedia communications on wireless IP networks. Duo's claimed advantage is better performance in conditions where other video messaging apps could become unusable. But the signaling used to set up Duo video calls is WebRTC, not RCS. The protocol used to move video call payload is QUIC.
Here is some information on QUIC: https://www.chromium.org/quic
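To see why a stack like QUIC is needed at all, consider what bare UDP gives an application: datagrams and nothing else. The following is a minimal sketch, not Duo's or QUIC's actual code, of a UDP exchange over loopback in Python. Everything QUIC adds, encryption, congestion control, retransmission, multiplexed streams, is absent here and would otherwise have to be built by each app on top of this.

```python
import socket

# A receiving socket bound to an OS-chosen free port on loopback.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
port = recv_sock.getsockname()[1]

# A sending socket fires a datagram: no handshake, no acknowledgment,
# no encryption -- the payload travels in the clear.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"video frame 1", ("127.0.0.1", port))

data, addr = recv_sock.recvfrom(2048)
print(data.decode())  # prints: video frame 1

send_sock.close()
recv_sock.close()
```

If that datagram had been lost or reordered, neither side would know; a protocol like QUIC standardizes the recovery and security machinery so that apps like Duo don't each reinvent it.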
End users may be getting whiplash from Google's changes of direction, and the tactical approach they are taking with a product for each kind of partnership or competitor.
Moreover, RCS messaging gets viewed askance because carriers are required to provide lawful intercept (LI) capability - a built-in law enforcement back door - for their messaging as well as for calls. Therefore, if Google provides RCS signaling and messaging for a carrier, or if Project Fi is a carrier, Google would also have to provide LI for RCS-based messaging. Users of messaging apps that go "over the top" (OTT) of carrier networks are increasingly aware of security and are choosing more-secure apps like WhatsApp and Telegram.
To provide a high-quality response to increased security awareness, Google is using Open Whisper Systems' (OWS) encryption for a secure mode in Allo, and the QUIC protocol stack has end-to-end encryption built in for real-time communication. OWS makes open source encryption products that have a first-tier reputation among security experts. Allo and Duo should have some of the best security for communication available.
Despite all the confusion Google has managed to create, the technologies behind these products, especially QUIC, are still of interest, and it remains possible that OWS end-to-end encryption could end up in Google's as yet unannounced RCS-based products.
Published on May 20, 2016 11:43
January 1, 2016
Telirati Analysis #17: Google jukes around Oracle's copyright play, and what Oracle is missing out on

Android is client Java
Android applications are, by several orders of magnitude, the dominant form of client Java software. The only widely used interactive Java applications, other than Android apps, are integrated development environments (IDEs), which are big, complex software creation tools.
Oracle is breaking the business of software creation
Oracle, which now owns the leading proprietary implementation of Java, should be grateful that client Java has been revived. Instead, Oracle has decided this is an opportunity to litigate poorly established parts of intellectual property law, vexing Google, Android developers, and tool-makers in the Android ecosystem. Oracle has made various claims; one of the most destructive to software development in general is that software interface specifications, usually known as "APIs" or "application programming interfaces," can be copyright protected.
This claim is both deleterious to the whole software business and nonsensical. It is like claiming that the information that your washing machine uses 3/8 inch bolts to mount the motor is covered by copyright. It is longstanding doctrine that facts like the size of a bolt can't be copyright protected from dissemination. Similarly, the symbolic names and data types used in method calls have, for decades, been assumed to be a similar set of facts. Oracle may, however, succeed in lawyering this into a knot of onerous claims of proprietariness throughout the software industry.
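The bolt analogy can be made concrete in a few lines of code. In this hypothetical Python sketch (the names Circle, Square, and total_area are invented for illustration), a caller depends only on the facts of an interface, a method's name and its types, while the bodies behind those names are independent creative works, the way a clean-room runtime can reimplement the methods a JDK declares:

```python
# Two independent implementations of the same interface fact:
# "has a method area() returning a float". They share no code.

class Circle:
    def __init__(self, radius: float):
        self.radius = radius

    def area(self) -> float:
        return 3.141592653589793 * self.radius * self.radius


class Square:
    def __init__(self, side: float):
        self.side = side

    def area(self) -> float:
        return self.side * self.side


def total_area(shapes) -> float:
    # The caller relies only on the interface, not on either
    # implementation -- the "size of the bolt", not the bolt's maker.
    return sum(s.area() for s in shapes)


print(total_area([Circle(1.0), Square(2.0)]))  # prints 7.141592653589793
```

The declaration `area() -> float` is the kind of name-and-type fact at issue in the API copyright claims; the two method bodies are where the creative expression lives.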
If Oracle prevails, many published APIs will come under copyright claims and license fee demands. This is an industry-wide disaster in the offing.
Fortunately, Sun liked the GPL
Sun Microsystems was of two minds about Java, sometimes claiming it was proprietary and sometimes working to assure the software industry that it was an open standard. To promote the latter, Sun created OpenJDK, an implementation of Java licensed under the GNU General Public License, an open source license that strongly discourages claims of proprietariness in derivative works. This is in contrast to the Apache license Google adopted for all the non-proprietary parts of Android that Google created, which allows OEMs and integrators to hold their enhancements to Android as proprietary code.
Where's the Java?
We've used the name "Java" loosely. Most people would say "Android runs Java," but, strictly speaking, that's not true. There is no Java in an Android device. All the Java bytecode in an Android application is converted to Dalvik bytecode before being packaged in an "apk" file. The Android runtime environments (Dalvik and ART) don't know anything about Java bytecodes. It may look like Java code is being executed with the expected Java-like semantics, but it isn't.
So, where's the Java? Up to now, you had to get a certain version of the Oracle JDK (Java Development Kit), freely available from Oracle's web site, in order to create Android software. That's because Android used interface specifications from the Oracle JDK.
Google embraces OpenJDK
By embracing OpenJDK, Google has sidestepped potential licensing demands, at the cost of having to step up and contribute to an open source project that hasn't kept up with the proprietary JDK. This is good for Google, even though it should not have been necessary, and it is good for all Java developers, because it prevents Oracle from imposing licensing fees on anyone using the Oracle JDK. It is also a step toward an open and unencumbered Java standard.
And that's a pretty good beginning to a New Year of Java development.
What should Oracle be doing?
Oracle's stand has been spiteful, contrary, and vexing to the whole software industry, which relies on the ability to use information about APIs. Oracle is willing to overturn many apple carts just to mess with Google. This is a management style that is on its way out in the software industry, and it paints Oracle as a has-been, fighting a rearguard action against NoSQL databases eroding the grip it once had on the database business. The web doesn't need Oracle, and Oracle appears to be thrashing about, lawyering for money instead of making new things to sell.
Imagine what Oracle could have done by cooperating with Google: Development tools, Java technologies, and vast new product areas to extract new revenue streams. Many of these opportunities have passed by due to Oracle's litigiousness. Being a ruthless bastard is one of those strategies that can be made to look good for a while, and then it stops being effective and turns into a burden.
Published on January 01, 2016 12:25