A few months ago, coinciding with Apple's latest keynote held on October 30th, a friend was surprised that I had not yet written in this blog about Apple and its product strategy. The truth is that I have thought several times about using Apple as an example of how to launch a product, but so much has been written on the subject that I prefer to approach it from the opposite perspective: what mistakes to avoid when launching a product. And what better example to illustrate these mistakes than one of my favorite products of all time, a product ahead of its time, novel, well built and technologically brilliant, yet a miserable failure: the Segway PT.

 

Before we get into the mistakes, let's jog our memory for the millennials who think electric scooters were invented by Xiaomi. The Segway was launched in 2001 by Dean Kamen with the aim of revolutionizing personal transport. It was a self-balancing electric vehicle with two parallel wheels, a gyroscopic stabilization system and a tall handlebar.

After the launch, sales figures were not as expected and, although the company continued to develop the product, it was clear that the Segway had become a very expensive niche product. The company changed owners several times and in 2010 starred in one of the most surreal events in the business world: its then owner, Jimi Heselden, fell off a cliff and died while riding one of the Segway models on his estate. More recently, in 2015, the company was acquired by Ninebot, one of the giants of the new electric scooter market, which is in the process of updating the Segway product catalog.

 

Let’s look at some of the mistakes Segway made when launching its product:

  • Unrealistic expectations: when launching any consumer product, it is important to manage expectations well so as not to frustrate customers. The launch of the Segway is remembered as one of the most hyped ever. Before it launched, celebrities who had seen it said it was going to "change transport", that it was a "game-changer", that it would sell "10,000 units a week" or that it would be the fastest product in history to reach a billion dollars in sales. The media campaign, PR and buzz-building were spectacular, but carried the great risk of not living up to expectations (as indeed happened). The reality is that in this hypercompetitive world there are very few companies or people (e.g. Apple or Elon Musk) whose announcements are eagerly awaited by the market and who can therefore use them as a marketing tool. Take-away: avoid giving expected sales figures at launch, as well as using concepts such as "game-changer", "revolutionary" or "product of the century".

 

  • Target not defined: the Segway was intended to be such a generic product that the company did not even bother to segment its target market. It sought to compete not only with every means of transport (bike, motorcycle, even car) but also to change the habits of those who move around on foot. The problem was that the product's characteristics (price, functionality, range, etc.) did not fit many of these segments. Why would someone with a bike spend €5,000 on a problem they had already solved? Could it ride on sidewalks? With such limited range, could it really replace motorcycles and cars? Obviously, the only segment that reacted to the launch was the techie early adopters and, as the early months showed, there are not that many of them willing to spend the €5,000 it cost. Take-away: be prudent and conservative when building the business case, selecting a limited but known and realistic target. There will always be time to extend it.

 

  • Choosing the wrong channel: Segway chose the general media as the loudspeaker for its hype campaign (it was in fact unveiled on the Good Morning America program). But for a product without a clear application and with a high cost, this was perhaps not the best option. Talking about the product on general-interest TV, radio and press probably put off the luxury/elite segment while failing to attract the mainstream market. Take-away: choose the channel that fits the product's target and your communication capabilities (budget, materials, etc.). Quality of impact matters more than quantity.

 

  • Bad timing: arriving too early or too late. The Segway arrived too early: expensive new technology, unprepared regulation, applications not yet worked out… priority was given to launching the "technological marvel" rather than executing a business plan. After the launch came the attempts to reduce the price with cheaper versions and to develop applications (sightseeing tours, fleet vehicles for police and airports, even Segway polo), but it was already too late: the entire subsequent history of the Segway was perceived as an attempt to minimize the initial failure. Take-away: design a good product, study the market well and cross your fingers, because it is very easy to explain a product's success or failure after the fact but, as the Segway or Microsoft's MP3 players show, the market's reaction is hard to predict.

 

It is easy to conclude that the launch plan must be consistent with the overall product plan. That way it is easier for the product attributes, pricing policy and marketing tools to match the launch channels and, most importantly, to get the right message to the right audience. Not optimizing this process drives up the launch cost per unit sold or, in other words: with a fixed launch budget, fewer sales will be achieved.

In the previous post on this blog, commenting on Lazard's latest LCoE report, I made some remarks about competition between renewables, more specifically between wind and solar as today's most viable and cheapest renewable generation sources. As a result of that article I received several comments on the subject, some asking for more multi-technology auctions where the award goes simply to the lowest committed price, others recalling that price is not everything when planning the generation mix. The truth is that it is a hot topic: news such as the latest mixed auction in Germany, where solar PV took all the projects ahead of wind, means more and more voices are advocating mixed auctions in which, as expected, solar offers prices unattainable for other technologies.

But is this the right approach? Should price be the only driver? Is solar PV really going to be unbeatable in mixed auctions? For those short of time, here are the quick answers: it depends, no and yes. For those with a few minutes to dig deeper, here are some arguments…

 

Will PV solar be unbeatable in price?

It is very likely that, in terms of absolute minimum price, it will be. The current cost level plus the future reduction potential will make this technology unbeatable on cost within a few years. Let's look at some of the reasons:

 

1. Volume

Volume is one of the main drivers of cost reduction and, as we see in the graph, solar volumes are already much larger than wind's, and the trend is for this gap to keep growing.

One of the keys to explaining this is that wind is limited to the grid-connected segment with large generation projects, whereas solar also has residential and commercial business segments that bring a lot of volume to the sector. As the following graph shows, almost half of solar volume comes from segments that simply do not exist in wind power.

The reason is obvious: the solar resource is the same in urban and non-urban environments while the wind resource is very scarce in urban environments.
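
One way to ground this volume argument is the classic experience curve (Wright's law): every doubling of cumulative volume cuts unit cost by a roughly constant learning rate. A minimal sketch, with an illustrative 20% learning rate (an assumption for the example, not a figure from this post):

```python
import math

# Experience curve: each doubling of cumulative volume cuts unit cost by `lr`.
# The 20% learning rate and all volumes here are illustrative assumptions.
def unit_cost(c0: float, cumulative: float, base: float, lr: float = 0.20) -> float:
    doublings = math.log2(cumulative / base)
    return c0 * (1 - lr) ** doublings

# 4x the cumulative volume = two doublings -> cost falls to 100 * 0.8**2 = 64
print(unit_cost(c0=100.0, cumulative=400.0, base=100.0))
```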

 

2. Technological and logistical complexity

Anyone who has seen the installation of both a wind farm and a solar park knows what I mean. Solar avoids many of wind's technical complexities: no work at height, no need for large cranes, no heavy components, no rotating mechanical elements, small component dimensions, and so on. For me, an example that clearly shows this difference is floating projects: in wind they are feats worthy of a chapter (or two) of a megastructures documentary while, in solar, they are panels with floats underneath.

Another very illustrative example of this difference is O&M: in solar it basically consists of keeping the modules clean and occasionally replacing small components while, in wind, any corrective task is already complex due to height and dimensions alone.

 

3. Potential cost reduction

Here are 2 important elements for gauging each technology's cost-reduction potential:

  • Raw materials: in solar, the element that weighs most in the cost (the modules) is basically connected semiconductors. Like chips, modules are based on silicon, so they could potentially follow a Moore's-law-type curve of cost/size reduction (I know this is very debatable, hence "potentially"). Wind, however, is mostly steel, fiberglass/carbon and concrete, materials with far less capacity to ride steep reduction curves.

 

  • R&D: this is key in technology sectors. The technology that attracts the most investment is usually the one that advances the most. An indicative figure for R&D investment is published patents and, as the data compiled by IRENA shows, solar patents in 2016 doubled those of wind power.

 

 

Should price be the only driver?

Obviously not. Both solar and wind are intermittent, so any responsible planner must be very cautious about this. But even though both are intermittent by nature, there are clear differences.

 

1. Hourly distribution

We are talking about one of solar's great weaknesses. The hours of sunshine per day are what they are, and they do not cover one of the day's demand peaks (the 8 pm one).

 

Wind, however, is fairly evenly distributed throughout the 24 hours of the day, so it helps with every demand peak.

This is where batteries come in, with their ability to "move" solar output a few hours towards the evening peak. But the reality is that it will be about 10 years before battery costs are low enough for them to be a "default" complement to solar.

Hence the need to incorporate time periods into auctions, because a kWh at 11 am and a kWh at 8 pm are not worth the same. If this were done, solar could not (for the moment) compete with wind in certain hourly blocks, which would make the results of mixed auctions more balanced.
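
To make the hourly argument concrete, here is a minimal sketch of a value-weighted ("capture") price for each profile. All prices and generation profiles below are invented for illustration; note the depressed midday prices typical of high solar penetration:

```python
# Capture price: average price weighted by each technology's hourly output.
# Hourly prices (EUR/MWh) and profiles are invented, purely illustrative.
price = [30]*7 + [45]*3 + [25]*7 + [60]*4 + [40]*3   # 24 hours, peak at 17-20h
solar = [0]*7 + [0.3, 0.6, 0.9, 1.0, 1.0, 1.0, 0.9, 0.7, 0.4, 0.1] + [0]*7
wind  = [0.6]*24                                     # roughly flat all day

def capture_price(gen, price):
    return sum(g * p for g, p in zip(gen, price)) / sum(gen)

print(f"solar: {capture_price(solar, price):.1f} EUR/MWh")  # misses the evening peak
print(f"wind:  {capture_price(wind, price):.1f} EUR/MWh")   # collects it
```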

 

2. Local value

I know this is a sensitive issue and nobody wants to talk about local content, tariffs or quotas, but you have to be aware of the differences: 9 of the top 10 module manufacturers and 4 of the top 5 inverter manufacturers are Chinese, while in wind power there is only one big Chinese manufacturer in the top 5. In addition, local investment per MW installed (installation, logistics, O&M) is much lower for solar PV projects than for wind. The reality is that the share of investment that returns to the local European or Spanish economy is much lower for solar projects than for wind projects.

 

What seems clear is that the times of brotherhood between wind and solar, fighting the common enemy of thermal generation, are over, and we will see more and more cases of the two competing in the same markets and for the same customers. We will see how it turns out but, for the world as a whole, any outcome will be beneficial.

Last week, the financial advisory and asset management firm Lazard published the 12th edition of its Levelized Cost of Energy Analysis, a comparison of the average cost of producing a MWh with different generation technologies (here the complete pdf). Since 2008, when the first edition was launched, it has been a reference in the sector for tracking how renewables were reducing their generation cost and approaching what seemed a chimera a few years ago: grid parity. But the news is that this is no longer news: that new-build renewables (wind and solar) are now clearly more competitive than new conventional generation is disputed by almost no one (with any criteria). Now the goal is what I would call renewables' point of no return: new wind or solar installations becoming cheaper than keeping already-amortized conventional plants running.

 

And this is what, for the first time, the Lazard report indicates may be happening. That is why the headlines in media such as PV Magazine or El Periódico de la Energía have highlighted this fact. Let's take a closer look at Lazard's LCoE report:

  1. New renewable installation vs. conventional marginal cost

As already mentioned, this is the big news. According to Lazard, in certain cases both new wind and new solar installations (both unsubsidized) can have a lower generation cost than already-depreciated coal or nuclear plants, whose cost is just the marginal cost of operation, fuel, maintenance, and so on.
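
As a reminder of what is being compared, here is a minimal sketch of the LCoE arithmetic: discounted lifetime costs divided by discounted lifetime energy, with capex set to zero for the depreciated plant. Every figure is invented for illustration, not taken from Lazard's assumptions:

```python
# LCoE = discounted lifetime costs / discounted lifetime energy ($/MWh).
# Every number below is illustrative, not taken from the Lazard report.
def lcoe(capex, opex_per_year, capacity_mw, capacity_factor, years=20, rate=0.08):
    energy_per_year = capacity_mw * capacity_factor * 8760   # MWh
    annuity = sum((1 + rate) ** -t for t in range(1, years + 1))
    return (capex + opex_per_year * annuity) / (energy_per_year * annuity)

# New-build wind vs a depreciated coal plant (capex = 0, only marginal costs):
new_wind = lcoe(capex=50e6, opex_per_year=1.2e6, capacity_mw=50, capacity_factor=0.45)
old_coal = lcoe(capex=0, opex_per_year=9.5e6, capacity_mw=50, capacity_factor=0.60)
print(round(new_wind, 1), round(old_coal, 1))  # here the new plant already wins
```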

  • The first thing that is surprising is that we are already at that point. Bloomberg NEF, probably the most reputable consultancy in the sector, published its New Energy Outlook this year predicting that this point would be reached in 2028 at the earliest for projects in Germany and in 2030 in China, in both cases versus coal plants. If the comparison is with gas plants, the dates move up to 2022 and 2023 for China and Germany respectively. The fortune teller's profession is especially difficult in the energy world, but a 10-12 year difference is a lot even here. Surely there is an explanation for such a gap (criteria, assumptions, etc.) but personally, if I have to choose one of the two sources, I will stick with Bloomberg.
  • The consequences of reaching this point are very relevant: to begin with, it would give a free hand to force closures of highly polluting plants (e.g. Coal) without economically damaging operators. This, together with private initiatives by the utilities themselves, would accelerate the penetration of renewables.
  • The previous point seems very positive but, going a little further, a massive introduction of renewables would depress the pool price, making projects without a PPA less attractive, which could end up being very detrimental to renewables themselves.

2. Wind vs Solar

Another thing that catches my attention in the report is the very low LCoE of wind (29 $/MWh) when the cheapest solar stands at 36 $/MWh. It is striking because the price levels being reached in auctions seem to indicate that solar's floor is lower than wind's. Let's look at some relevant details of this comparison:

  • Analyzing the assumptions of the analysis, we see that the onshore wind capacity factor is set at a range of 38-55%. That 55% is what produces the minimum value of 29 $/MWh and it seems unreal to me for onshore projects: I find it hard to believe there are projects with 4,800 net equivalent hours (0.55 × 8,760 h). What's more, the offshore range is 45-55%, which seems more realistic, but it is very odd that the maximum of both ranges coincides. I would use a 30-50% range for onshore.

As for the lifetime of the installations, 20 years is assumed for wind. For onshore this may make sense (life extension currently carries associated capex) but for offshore it should clearly be 25 or even 30 years.

  • Historical reduction ratios are spectacular but seem to point to a floor for wind, while solar still seems far from its floor (Bloomberg predicts an additional 30% reduction in solar by 2025).

  • It seems clear that solar will be cheaper than wind (if it is not already) and this, coupled with being less capital-intensive, carrying less technological risk and having very short installation times, makes it look like an unbeatable rival in future multi-technology auctions. But as Jose Luis Blanco, CEO of Nordex-Acciona, said at the recent EnerCluster event in Navarre: "although solar may be cheaper, the value of a wind MWh will always be higher". And this is something legislators should bear in mind when planning future auctions: for regions such as Navarre, countries such as Spain, or even for Europe as a whole, the local return on investment in wind power is very high, since a large percentage of the investment is local or regional while, in solar, hardware manufacturing is increasingly concentrated in China.

Be that as it may, the trend is unstoppable: renewables are already the most-installed source of generation and will be even more so in the coming years. Now it is the turn of legislators and market planners to put the right mechanisms in place so that there is no risk of dying of success.

When I worked at Gamesa, we developed a customer satisfaction project based on face-to-face interviews with every customer, collecting answers to more than 50 closed-ended and 30 open-ended questions. It was a very powerful project: more than 100 interviews conducted in 20 different countries. The information obtained was invaluable and we presented it with different segmentations and KPIs so that business conclusions could be drawn and an action plan launched, either to correct aspects that could be improved or to reinforce things that were very well perceived. This action plan and its results were then presented to the clients who had been interviewed, closing the feedback loop and leaving everything ready for another round of surveys. As you can guess, it was a complex project that lasted 18-24 months and involved many people in the company.

I tell this story not because I am one of the parents of the project but because I remember that, 5 or 6 years ago, in one of the external audits, after I told the auditor all about the excellence of the project, he said to me: "all that sounds very good but… what is your NPS?"

 

For me, this story condenses the best and the worst of the famous Net Promoter Score™ or NPS: it is a simple metric known by almost everyone but, at the same time, it is able to eclipse the true objective of customer satisfaction surveys.

 

Before going further into the NPS, let's remember that it is a metric published in 2003 by Frederick F. Reichheld, Bain & Company and Satmetrix. It is based on asking the question "how likely are you to recommend our product to a colleague or friend?", with the answer on a scale of 0 to 10, 0 being the most unlikely and 10 the most likely. The novelty of the metric was the segmentation of the answers (9-10 promoters, 7-8 passives, 0-6 detractors) and the summary of the results in a final NPS value that only takes into account the highest and lowest scores.

NPS metric explained
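
A minimal sketch of the computation, using the standard cut-offs and invented survey answers:

```python
# NPS = % promoters (scores 9-10) minus % detractors (scores 0-6).
def nps(scores):
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

print(nps([10, 9, 9, 8, 7, 6, 10, 3, 9, 9]))  # 6 promoters, 2 detractors -> 40.0
```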

The truth is that, seen this way, there is no doubt it is a very attractive metric for its simplicity and its ability to synthesize aspects such as brand loyalty, potential growth and customer perception. And it was precisely this that made the NPS the metric preferred by analysts, since it was simple and could be correlated with future growth. This not only caused companies to be measured externally by their NPS; many also adopted it as an internal metric, often tied to variable pay or bonuses. And we already know what happens when something is tied to remuneration: the objective becomes improving the NPS as a metric rather than improving the product or service behind it.

 

Let’s take a closer look at some of the limitations I’ve encountered when applying the NPS:

  1. The interviewee must be the decision maker: this makes the NPS perfect for B2C but limits it a lot in B2B. In relationships between companies, where the purchase decision does not depend on a single person, the answer to the NPS question is still the subjective perception of one interviewee. This can be alleviated with several interviews with different contacts, but it remains very subjective and leaves out many key aspects of B2B relationships.
  2. All answers are worth the same: this is fine for very homogeneous client portfolios, but imagine a portfolio of 10 clients where client1 and client2 account for 80% of turnover. In an extreme case, every client could be a promoter except client1 and client2 as detractors. The NPS would then be 60%, a more than respectable figure that completely hides the critical situation of being on the verge of losing 80% of revenue (see the sketch after this list).
  3. The question is personal: this is key to associating the answer with loyalty and retention, but it also makes it subject to interpretation and, above all, to cultural differences. This is a real problem, since recommending something to a family member or friend is interpreted very differently depending on the country.
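
One hedged way around limitation 2 is to weight each answer by the client's share of turnover; the weighting scheme is my own illustration, not part of the official NPS. A sketch with the 10-client portfolio described above:

```python
# Revenue-weighted NPS: same cut-offs, but each answer counts for its turnover.
def weighted_nps(portfolio):  # portfolio: list of (score, revenue) pairs
    total = sum(rev for _, rev in portfolio)
    promoters = sum(rev for s, rev in portfolio if s >= 9)
    detractors = sum(rev for s, rev in portfolio if s <= 6)
    return 100 * (promoters - detractors) / total

# client1 + client2 are detractors holding 80% of turnover; 8 small promoters.
portfolio = [(3, 50), (5, 30)] + [(10, 2.5)] * 8
print(weighted_nps(portfolio))  # -60.0, versus the +60 of the classic NPS
```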

What I propose, in order to exploit the NPS while minimizing its limitations, is to correlate it with some other metric that collects answers about the client experience.

As the matrix above shows, this is a good way to check whether the NPS segmentation matches the average satisfaction obtained through various questions about the business relationship.

 

In conclusion, the NPS is a very useful metric, especially in B2C companies with a large brand-driven sales component, but it should not become an objective in itself; it should be one tool within a broader project whose real goal is to measure customer satisfaction and perception in order to make better decisions when improving the product or service.

A few months ago I was reading Welcome to Hard Times by one of my favourite writers, the great E.L. Doctorow, which narrates the hard life of the first Wild West villages, created in the midst of a gold rush. Many people came, drawn by the possibility of finding gold, but in reality the chances were very low and the risks very high. At the same time, the book describes the lives of some of the "service providers" of these mining settlements: the supply depot, the saloon, the blacksmith, and so on. In one passage, one of the miners complains that all the money he earns extracting gold is spent on supplies and "entertainment"…

These service providers managed to square the circle: recurring income and limited risk (not counting the risks of operating in the Wild West, of course) in a sector as high-risk as gold prospecting. Is this example extrapolable to our days? Can a product or service be designed for a high-risk sector while limiting exposure to that risk? Is it possible to enter a volatile sector and achieve stable income?

A priori it seems difficult, because profit and risk are intrinsically related but, looking more closely, every high-risk sector has niches of moderate risk that will not offer the sector's potential returns but can be a growing source of income. Let's look at some examples:

 

  • Cryptocurrencies

Possibly one of the most volatile and risky sectors of the moment. Bitcoin was worth $958 in Dec '16 and within 12 months it reached $19,343, that is, it revalued an awesome 1,919%! It currently stands at around $6,500, which means that in 10 months it has lost 66% of its value.

bitcoin price evolution

This is a scenario where any product tied to the value of bitcoin carries very high risk. But there are services the cryptocurrency sector needs in order to operate, such as the flourishing business of mining-gear suppliers or, in other words, the IT infrastructure to mine bitcoins. They are the updated version of the supply sellers for the old gold prospectors. Fluctuations in the value of bitcoin are well cushioned in their profit and loss accounts because much of their revenue comes from fixed tariffs: as long as there are bitcoin miners, their services will be needed. They have managed to isolate their own risk from their customers' and believe the market will recognize it, so some, such as Canaan and Ebang, are already preparing their IPOs. There are undoubtedly success stories here, such as Bitmain, estimated to have annual revenues in excess of $2,500 million.

 

  • E-Commerce

Once again, a sector with great dynamism and high risk. Today e-commerce is very well established but still has very high failure rates (some place them between 80% and 97%). Growth in this field is expected to be very pronounced, especially with the irruption of purchases through mobile (m-commerce) which, according to some studies, could account for 73% of all e-commerce by 2021.

mobile commerce forecast

However, there are services intrinsically related to e-commerce that are independent of the success or failure of each business, such as payment services. This is one of the most successful niches nowadays: companies offering payment platforms for web and mobile applications. The big internet companies such as Amazon, Google or PayPal already have platforms of this type, but a multitude of new players are innovating in these services, such as Stripe and Square in the USA or PayU and LemonWay in Europe.

 

Again the plan is simple: to obtain regular income through commissions in a market with high volatility.

 

If someone is thinking of entering a high-risk sector but limiting those risks, here are some tips:

  1. Study the value chain of the sector, identifying the different players.
  2. Identify the risk drivers. In the case of bitcoin, these would be price volatility and limited liquidity.
  3. Seek support services that depend as little as possible on the success or failure of individual companies. These will normally be infrastructure activities but may also be consultancy or services for start-ups.
  4. Design an income structure with a majority of fixed fees and a minority of variable fees depending on market evolution (see the sketch below).
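
A minimal sketch of tip 4: the 80/20 fee split and the market swings below are invented, but they show how a mostly-fixed structure dampens volatility:

```python
# Revenue under an 80% fixed / 20% market-linked fee structure vs 100% variable.
market = [1.0, 1.8, 0.6, 1.2, 0.5]   # hypothetical market index over 5 periods

def revenue(fixed_share, base=100.0):
    return [round(base * (fixed_share + (1 - fixed_share) * m), 1) for m in market]

print(revenue(0.8))  # [100.0, 116.0, 92.0, 104.0, 90.0] -> damped swings
print(revenue(0.0))  # [100.0, 180.0, 60.0, 120.0, 50.0] -> mirrors the market
```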

 

Evidently, this is common-sense advice whose application will depend on your bargaining power, but the conclusion of these reflections is that you should never rule out a high-risk sector or market a priori, because all of them contain niches with limited risk.

All of us who believe that the electric vehicle (EV) is the future of transport pin our hopes on a rapid reduction in battery costs that will make the average price of an EV lower than that of a traditional internal combustion engine (ICE) vehicle within a few years. But this scenario has a small problem: it assumes that ICEs do not improve (or only do so at the rate we know). And what is clear is that an ICE much improved in fuel consumption, noise and price would be the real competitor to EVs.

 

According to Bloomberg, EVs could be cheaper than ICEs in 7 years.

But as shown in the graph above, the price of ICEs is assumed to be flat. It is true that the automobile sector has never stood out for technological leaps or sudden competitive advances, but assuming that the world's largest industry will not improve its products is quite risky.

There are many theories that try to explain why the automobile is, comparatively, one of the technological inventions that has evolved least over time. In more than 100 years of life, the basic concept with which it was born remains essentially unchanged today. Progress has been relatively modest compared with similar inventions such as airplanes, trains or telephones, and has always been based on new materials, electronics and safety regulations. Beyond the conspiracy theories circulating on the Internet, I believe this is due to two fundamental reasons:

  • It is a high-volume, capital-intensive industry where the goal is not to seek a revolution that makes all assets obsolete but to find ways to make the investment more profitable. That is why cost reduction is always a priority.
  • There has never been any pressure from customers or other competitors to seek something radically different. Competition has been based more on brand building and segmentation than on technological revolutions.

But suddenly this has changed: there are new competitors with new concepts, the market expects a change, customers demand clean transport options, regulation pushes to eliminate emissions… is this the end of the ICE?

 

Let's travel back to the end of 1998. I had just finished my degree and started working in Madrid at the telecommunications company Lucent Technologies. I arrived in a city already turned upside down by the trenches of the cable companies. In the midst of the .com boom, Spain as a whole was one big trench, aiming to bring cable (fiber + coaxial) to every home and, with it, a package of services very competitive for the time: telephony, internet at up to 300 kbps, a multitude of TV channels, pay-per-view, etc. In short: new competitors offering a new, higher-quality product than anything existing so far… Doesn't it sound similar to the EV vs ICE case?

 

Telefónica, until then a monopoly, faced competitive pressure unknown until that moment. It had a copper network reaching every house in Spain but the truth is that nobody gave a damn about that asset: it was an obsolete network that, experts said, could never support speeds above 50 kbps (who doesn't remember those 33 kbps modems?). But in that same 1998 a standard was published that will be familiar to all of you: ADSL. Suddenly, copper lines could support up to 8 Mbps with minimal investment. Telefónica launched its first ADSL offer in 1999, at Lucent we could hardly cope with the demand for equipment, and the rest is history: ADSL devoured cable with successive improvements and it was not until 2017 that fiber connections outnumbered xDSL lines for the first time. Even today ADSL keeps improving, with commercial offers of 300 Mbps and laboratory tests of up to 1 Gbps!

Number of Internet connections in Spain by year

 

The funny thing about this story, and something many people don't know, is that ADSL was invented at Lucent's Bell Labs in the 1980s and kept in a drawer until the competitive pressure of cable made the big telcos need a solution to upgrade their copper networks. ADSL was a great deal of business for Lucent and Alcatel as manufacturers but especially for Telefónica, which amortized its copper network, saw off competitors and bought itself 15 years to build a winning fiber offer (Fusión).

 

The lesson of this story is that some car manufacturers will certainly fight to amortize their current assets, and I would bet that in a few years we will see the ADSL of the car: perhaps a new generation of gasoline engines with consumption below 1 l/100 km, low noise and reduced maintenance? We will see but, if that is the case, the massive irruption of EVs would be delayed a few years, new competitors would have a hard time surviving and traditional manufacturers would have achieved their goal: turning the revolution into what they do best, just another evolution.

What does it take to set up a Competitive Intelligence (CI) system? Is there specific software? Is it very complicated? Is it expensive? These are some of the most frequently asked questions when someone considers launching a CI project.

 

Competitive intelligence or CI is based on 4 steps:

  1. Plan the objectives of the project according to the business needs.
  2. Collect information about competitors, then structure and store it.
  3. Analyse it and obtain insights useful for the development of the product itself and for commercial activity. The quality of these insights will depend on the quality of the information collected as well as the expertise of the CI team.
  4. Create reports and distribute the information to its target users.

So let us focus on steps 2 and 4: information collection and dissemination. Let's look at the options available when setting up a system:

Self-managed commercial software: there are several commercial tools that can be purchased and installed on local servers. In future posts we will review some of them but, given their power and price, I can already tell you they are solutions better suited to large organizations.

Software as a Service (SaaS): these are replacing the previous ones. Basically, the entire information-gathering process is outsourced to a specialist and the client gets web access to a reporting tool. They are very flexible solutions that can be adapted to all types of customers.

“Homemade” (HM) system: use simple tools (most of them free) to create a basic CI system.

 

Undoubtedly, the "HM" system has two clear advantages: cost and knowledge acquisition. In my experience, it is a very good way to discover your business's real CI needs so, if you end up opting for commercial software, it will be much easier to identify the most suitable one.

 

Let's quickly review some examples of tools that can help us. The first thing is to identify the type of information we want to collect, as well as the sources. In our example we will collect information on competitors' main products, focusing on product news, financial data, patents, trademarks, technical specifications and contracts. Let's see how to get this information:

News: most commercial software works in this field, sorting news by keywords. The most common approach is to work with Google directly, but there are tools like Feedly that make the job easier. Feedly is the improved successor of the great but discontinued Google Reader: you can structure your sources (including Google News) and, for little money, the premium version adds collaborative options for teams.
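
For a fully home-made route, here is a minimal sketch pulling competitor news from a Google News RSS search with the feedparser library (the keyword is a placeholder):

```python
# Home-made news collection: query Google News RSS for a competitor keyword.
import feedparser  # pip install feedparser

def competitor_news(keyword, limit=5):
    feed = feedparser.parse(f"https://news.google.com/rss/search?q={keyword}")
    return [(entry.title, entry.link) for entry in feed.entries[:limit]]

for title, link in competitor_news("Segway"):
    print(title, "->", link)
```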

Financial data: if the company is listed, Google Finance or Yahoo Finance are good free options, but my favourite is definitely the Wall Street Journal. If it is not listed, it is best to go to its website and look at the investor presentations. If no information is available, there are specialized online companies such as Einforma that consolidate all the public information for a small fee.
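
For listed competitors, a hedged sketch using the community yfinance library on top of Yahoo Finance data (the ticker is a placeholder and field availability varies by company):

```python
# Basic financial snapshot of a listed competitor via Yahoo Finance data.
import yfinance as yf  # pip install yfinance

ticker = yf.Ticker("VWS.CO")           # placeholder ticker (Vestas, Copenhagen)
prices = ticker.history(period="1y")   # one year of daily prices
print(prices["Close"].tail())
print(ticker.info.get("marketCap"))    # may be None if Yahoo lacks the field
```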

Patents: Google Patents is a very powerful search tool and covers all major markets. Obtaining insights from this information will likely be beyond the reach of a CI team, but it at least serves as an indicator of technological activity.

Trademarks: there are search engines for both EU (TMview) and US (TESS) trademarks. TMview is the most convenient, as it includes results from countries all over the world.

Documents: the best thing is to store the PDFs, PPTs, etc. related to each competitor in a document manager but, if none is available, at least define a policy of file names and directories so that all the information is well catalogued and ready for a future migration to a document manager.
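
A minimal sketch of what such a naming policy could look like; the competitor_doctype_date_title pattern is just one illustration:

```python
# Illustrative file-naming policy: competitor_doctype_date_title.ext
from datetime import date

def catalog_name(competitor, doctype, title, ext):
    slug = title.lower().replace(" ", "-")
    return f"{competitor}_{doctype}_{date.today().isoformat()}_{slug}.{ext}"

print(catalog_name("acme", "investor-presentation", "Q3 Results", "pdf"))
# e.g. acme_investor-presentation_2018-11-20_q3-results.pdf
```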

 

When it comes to distributing the information, it is advisable to first create lists of users according to the type of information they are looking for. There should be at least 3 groups: strategy, technology and commercial. As for the media, the main ones are:

Newsletters: the main push-type medium. There is a multitude of tools for producing them; if you want a quasi-professional but free solution, the best is MailChimp. As a home-made alternative you can always design your newsletter in Word and send it with Outlook or similar.
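
And for a scripted home-made newsletter, a hedged sketch using Python's standard smtplib instead of Outlook (server, credentials and addresses are all placeholders):

```python
# Home-made newsletter: send an HTML email over SMTP (all details are placeholders).
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "CI Weekly: competitor news"
msg["From"] = "ci-team@example.com"
msg["To"] = "strategy@example.com, commercial@example.com"
msg.set_content("Plain-text fallback for old clients")
msg.add_alternative("<h1>Competitor news</h1><p>...</p>", subtype="html")

with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()                 # upgrade to an encrypted session
    server.login("user", "password")  # placeholder credentials
    server.send_message(msg)
```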

Intranet portal: the main pull-type medium. If you have some basic knowledge and IT support, the best option is WordPress. If you need a collaborative environment, the de facto standard is MS SharePoint (not free).

Collaborative environments: push-pull type, ranging from WhatsApp lists to tools such as Slack, Yammer or Alfresco. They are very useful for getting direct feedback from users and for opening information channels, for example during a trade fair or when a competitor launches a new product.

My final recommendation for anyone who wants to launch a CI process in their company: start at home, identify the needs, typical users, information sources and reports and, once all this is clear, decide which tools are most appropriate depending on the volume of information, number of users, available resources, etc. In other words, the tools must be at the service of the process and never the other way around.

If there is one thing that characterizes today's market, it is its dynamism. Product life cycles keep getting shorter in order to adapt to changing market needs and competitive pressures. But there are still very complex products whose development cycles are very long, putting them at high risk of suffering the product manager's biggest nightmare: a "born to die" product.

Consider the market for the space rockets that launch satellites, where development cycles exceed 8 years. ESA and ULA have been the traditional dominators, enjoying a quasi-monopoly. When both launched the development of their new generation of rockets, SpaceX was only Elon Musk's dream: to design reusable rockets and cut launch costs by 50%. Hardly anyone believed it was possible, and the traditional manufacturers launched their new developments with objectives and specifications suited to the old semi-monopolistic rules.

But sometimes dreams come true, and SpaceX has succeeded, through component reuse, modularity and simplification, in reducing costs by 40-50%, so the new products from the traditional manufacturers are going to face a very serious competitive problem. Even more so as other new players, such as Blue Origin, join the market.

Designing a rocket, like many other complex products, is not easy, so rockets will always have long development cycles. But there are ways to reduce the risk of the "born to die" effect. A few examples come to mind:

  • Modularity: try to design generic modules rather than specific products. SpaceX developed its large rocket (Falcon Heavy) faster by strapping together 3 Falcon 9 cores. Another example I have experienced first-hand is how Gamesa managed to shorten the wind turbine development cycle thanks to its product-platform design strategy, whose modules are then used to quickly configure specific products for the changing needs of the market.
  • Development platforms: designing a game is complex and slow if you have to develop all the tools from scratch. Epic Games maintains a creation platform (Unreal Engine) with a long life cycle that allows its customers to develop games with short lead times. The idea is to move the most complex tooling into the long cycle and leave everything related to the user experience in the short development cycle.
  • Postponement: in products with a lot of customization and changing requirements, you can develop a generic product and develop customization kits in an agile way to meet market needs.

In conclusion, since the product manager's crystal ball is not yet available, he or she should launch risk-mitigation actions when development cycles are long, closely monitor the product's competitiveness at intermediate milestones and, in the worst case, recommend its abandonment.