When I started in the wind sector back in 2004, the imminent offshore wind boom was already being announced. All market forecasts agreed that offshore would rival onshore within a few years. But the years passed and that moment kept being pushed back. Pioneer countries grew slowly and new markets failed to materialize. Today the offshore market is limited to 4-5 markets in Europe and China and does not even account for 10% of annual global installations. In 2005 EWEA set a minimum target of 70 GW installed in Europe by 2020, but in reality we will barely reach 20 GW. Why have expectations not been met? Last week WindEurope published a very thorough report on the state of the offshore sector in Europe that can help us analyze the evolution and prospects of the offshore business.

 

 

In Europe, 2,500 MW were installed in 2018, a figure still lower than those reached in 2015 and 2017. To anticipate what will happen over the next few years, the best indicator is the capacity financed, which was in the region of 4,000 MW in 2018, a level similar to that of 2016 and 2017. This already suggests that the level of installations over the next 3-4 years will be very similar to that of 2018. To put it in perspective, offshore in Europe accounted for approximately 4% of total global wind power installations. In addition, all installations are concentrated in 5 markets: the UK, Germany, Denmark, Belgium and the Netherlands.

Let’s look at some reasons why this market is not growing as expected:

  • Complexity of planning: there is always talk of how complex it is to install turbines at sea, but preparing the site is just as complicated, if not more so. Installing a project requires grid evacuation capacity, permits, prior studies, etc., and everything that onshore is resolved in 2-3 years is, offshore, part of a country’s long-term planning. That is why the capacity that can be installed in the next 5-10 years is already limited by the evacuation/distribution infrastructure that has been planned. Countries such as Germany, the UK and the Netherlands launched this planning some time ago, hence their activity; most countries have no such infrastructure, so offshore projects are simply not possible for them in the short term.

 

  • LCoE: looking at the offshore LCoE reduction curves of recent years, one might think this is a factor that will boost demand: the cheaper offshore gets, the more will be installed. Well, that is not the case. LCoE levels are driven more by pressure from the renewables market than by the sector’s own ambition. Low remuneration levels mean that fewer companies are capable of achieving these costs. In other words, the market is limited to the best (or to those who can take on more risk, or those with more volume…). This is not bad in itself, since it makes the sector very professional, but it limits installation volumes and probably the capacity for innovation.

 

As can be seen in the previous graphs, it is the large utilities (Orsted, EOn, Vattenfall, Iberdrola, etc.) that dominate the operators’ market. Their share as developers is even greater, since they often sell part of their projects to investment funds or pension funds.

 

  • Manufacturers: there is a very relevant graph in the WindEurope report:

It shows that the offshore manufacturers’ market is currently a duopoly of SGRE and MHI Vestas. And this is good, because we come from many years of quasi-monopoly by Siemens. GE is trying to get back into the market with its new turbine, but I am afraid it is geared more towards the future US offshore market than towards Europe. The market needs more manufacturers to increase competition and innovation, but without volume that is difficult, so we are stuck in a vicious circle that is hard to break.

 

  • Size: in offshore size does matter (and a lot)

Closely related to the LCoE, the clearest way to reduce costs is to increase the size of both the turbine and the wind farm. This increases the complexity of everything: turbines, logistics, O&M, etc. Over the last year there has been a lot of movement around new turbines:

  1. MHI-Vestas with its V174-9.5MW and V164-10MW
  2. SGRE with its SG 10.0-193
  3. GE with its impressive Haliade-X of 12MW and 220m of rotor

 

These may look like small details but, with 10 MW turbines and rotors approaching 200 m, there are not many vessels capable of transporting and installing them. The sharp growth in turbine size reduces costs but limits the means available for installation and maintenance.

 

In conclusion, significant growth of the offshore market in Europe is not feasible in the short and medium term. The global offshore market will have to grow in other markets such as the USA, Japan or India (I exclude China, as it has its own value chain), and that will be neither fast nor easy. Offshore will be key in the future, but we will have to be patient because it will take time.

A few months ago, coinciding with Apple’s latest keynote held on October 30th, a friend was surprised that I had not yet written in this blog about Apple and its product strategy. The truth is that I have thought several times about using Apple as an example of how to launch a product on the market, but so much has been written on the subject that I prefer to approach it from the opposite perspective: what mistakes to avoid when launching a product. And what better example to illustrate these errors than one of my favorite products of all time, a product ahead of its time, novel, well built and technological, but one that failed miserably: the Segway PT.

 

Before we get down to the errors, let’s refresh our memories for the millennials who think electric scooters were invented by Xiaomi. The Segway was launched in 2001 by Dean Kamen with the aim of revolutionizing transport worldwide. It was a self-balancing electric vehicle with two parallel wheels, a gyroscopic system and a tall handlebar.

After the initial launch the sales figures were not as expected and, although the company continued to develop the product, it was clear that it had become a very expensive niche product. The company changed hands several times and in 2010 came one of the most surreal episodes in the business world: the company had just been acquired by Jimi Heselden and, while trying out one of the Segway models on his estate, he fell off a cliff and was killed. More recently, in 2015, it was acquired by the Chinese company Ninebot, one of the giants of the new electric scooter market, which is in the process of updating the Segway product catalog.

 

Let’s look at some of the mistakes Segway made when launching its product:

  • Unrealistic expectations: when launching any consumer product, it is important to manage expectations well so as not to frustrate customers. The launch of the Segway is remembered as one of the most hyped and anticipated ever. Before the launch, celebrities who had seen it said it “was going to change transport”, that it “was a game-changer”, that it would sell “10,000 units a week” or that it would be “the fastest product in history to reach a million dollars in sales”. The media campaign, PR and buzz creation were spectacular, but they carried the great risk of not being able to live up to expectations (as indeed happened). The reality is that in this hypercompetitive world there are very few companies or people (e.g. Apple or Elon Musk) whose announcements are awaited by the market and who can therefore use them as a marketing tool. Take-away: avoid giving expected sales figures at launch, as well as using concepts such as “game-changer”, “revolutionary” or “product of the century”.

 

  • Target not defined: the Segway was intended to be such a generic product that it did not even bother to segment its target market. It sought to compete not only with every means of transport (bike, motorcycle and even car) but also to change the habits of those who get around on foot. The problem was that the product’s characteristics (price, functionality, range, etc.) did not fit many of these segments. Why would someone with a bike spend €5,000 on a problem they had already solved? Could it be ridden on the sidewalk? With such limited range, could it really replace motorcycles and cars? Obviously, the only market segment that reacted to the launch was the techie early adopters and, as became clear in the first months, there are not that many of them willing to spend the €5,000 it cost. Take-away: be prudent and conservative when building the business case, selecting a limited but well-known and realistic target. There will always be time to extend it.

 

  • Choosing the wrong channel: Segway chose the general media as the loudspeaker for its hype campaign (in fact, it was launched on the Good Morning America program). But for a product without a clear application and with a high cost, this was perhaps not the best option. Talking about the product on general-interest TV, radio and press probably put off the luxury/elite segment while failing to attract the mainstream market. Take-away: choose the right channel for the product’s target and for your communication capabilities (budget, materials, etc.). Quality of impact matters more than quantity.

 

  • Bad timing: arriving too early or too late. In the case of the Segway it was too early: expensive new technology, unprepared regulation, applications not yet developed… priority was given to launching the “technological marvel” rather than executing a business plan. After the launch came the attempts to lower the price with cheaper versions and to develop applications (sightseeing, fleet vehicles for police and airports, even Segway polo), but it was already too late: the entire subsequent history of the Segway was perceived as an attempt to minimize the initial great failure. Take-away: design a good product, study the market well and cross your fingers, because it is very easy to analyze the reasons for a product’s success or failure in hindsight but, as the Segway or Microsoft with its MP3 players show, market reaction is difficult to predict.

 

It is easy to conclude that the launch plan must be consistent with the overall product plan. That way it will be easier for the product attributes, pricing policy and marketing tools to match the launch channels and, most importantly, to get the right message to the right audience. Not optimizing this process will drive up the launch cost per unit sold or, in other words, with a fixed launch budget, fewer sales will be achieved.

Last week, the financial advisory and asset management firm Lazard published the 12th edition of its Levelized Cost of Energy Analysis, which compares the average cost of producing a MWh with different generation technologies (here the complete pdf). Since 2008, when the first edition appeared, it has been a reference in the sector for tracking how renewables were reducing the cost of generation and approaching what seemed a chimera a few years ago: grid parity. But the news is that this is no longer news: that renewables (wind and solar) are now clearly more competitive for new installations than conventional generation is something almost no one (with any judgment) disputes. Now the goal is what I would call the point of no return for renewables: new wind or solar installations becoming cheaper than keeping already amortized conventional plants running.

 

And this is what, for the first time, the Lazard report indicates may be happening. That is why headlines in various media such as PV Magazine or El Periódico de la Energía have highlighted this fact. Let’s take a closer look at Lazard’s LCoE report:

  1. New renewable installation vs. conventional marginal cost

As already mentioned, this is the big news. According to Lazard, in certain cases both new wind and new solar installations (both without subsidies) can have a lower generation cost than coal or nuclear plants that are already depreciated and whose cost is therefore only the marginal cost of operation, fuel, maintenance, etc.
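To make the comparison concrete, here is a minimal sketch of how such a check could be done. It is not Lazard’s methodology; the formula is just the standard discounted-cost LCoE definition and every input figure (capex, opex, capacity factor, discount rate, marginal cost) is an invented placeholder.

```python
# Minimal LCoE sketch: new-build renewable vs. marginal cost of an existing plant.
# All input figures are illustrative placeholders, not values from the Lazard report.

def lcoe(capex_per_mw, opex_per_mw_yr, capacity_factor, lifetime_yr, discount_rate):
    """Levelized cost of energy in $/MWh: discounted costs / discounted generation."""
    annual_mwh = capacity_factor * 8760              # net equivalent hours for 1 MW
    disc_costs, disc_energy = capex_per_mw, 0.0
    for year in range(1, lifetime_yr + 1):
        factor = (1 + discount_rate) ** -year
        disc_costs += opex_per_mw_yr * factor
        disc_energy += annual_mwh * factor
    return disc_costs / disc_energy

new_wind = lcoe(capex_per_mw=1_300_000, opex_per_mw_yr=35_000,
                capacity_factor=0.40, lifetime_yr=20, discount_rate=0.08)
existing_coal_marginal = 36.0   # $/MWh: fuel + O&M of a depreciated plant (placeholder)

print(f"New wind LCoE      : {new_wind:5.1f} $/MWh")
print(f"Coal marginal cost : {existing_coal_marginal:5.1f} $/MWh")
print("Point of no return reached" if new_wind < existing_coal_marginal else "Not yet")
```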

  • The first thing that is surprising is that we are already at that point. Bloomberg NEF, probably the most reputable consultancy in the sector, published its New Energy Outlook this year predicting that this point would be reached, at the earliest, in 2028 for projects in Germany and in 2030 in China, in both cases versus coal plants. If the comparison is with gas plants, the dates move forward to 2022 and 2023 for China and Germany respectively. It is well known that the fortune-teller’s trade is especially difficult in the world of energy, but a 10-12 year difference is a lot even for this world. There is surely an explanation for such a gap (criteria, assumptions, etc.), but personally, if I have to choose one of the two sources, I will stick with Bloomberg.
  • The consequences of reaching this point are very relevant: to begin with, it would give regulators a free hand to force the closure of highly polluting plants (e.g. coal) without economically damaging their operators. This, together with private initiatives by the utilities themselves, would accelerate the penetration of renewables.
  • The previous point seems very positive but, going one step further, a massive introduction of renewables would push down the pool price, which would make projects without a PPA less attractive and could end up being very detrimental to renewables themselves.

2. Wind vs Solar

Another thing that catches my attention in the report is the very low LCoE of wind (29 $/MWh), while the cheapest solar stands at 26 $/MWh. It is striking because the price levels being reached in auctions seem to indicate that the floor for solar is lower than that for wind. Let’s look at some relevant details in this comparison:

  • Analyzing the assumptions of the analysis, we see that the onshore wind capacity factor is set at a range of 38-55%. That 55% corresponds to the minimum value of 29 $/MWh and, personally, it seems unrealistic for onshore projects: I find it hard to believe there are projects with roughly 4,800 net equivalent hours (0.55 × 8,760 h). What is more, the offshore range is 45-55%, which seems more realistic, but it is very odd that the upper end of both ranges coincides. I would use a range of 30-50% for onshore.

As for the lifetime of the installations, 20 years is assumed for wind. For onshore this may make sense (life extension currently carries associated capex), but for offshore it should clearly be 25 or even 30 years.

  • The historical reduction rates are spectacular, but they seem to indicate that wind is approaching a floor, while solar still seems far from one (Bloomberg predicts an additional 30% reduction in solar by 2025).

  • It seems clear that solar will be cheaper than wind (if it is not already) and this, combined with being less capital-intensive, carrying less technology risk and having very short installation times, makes it look like an unbeatable rival in future multi-technology auctions. But as Jose Luis Blanco, CEO of Nordex-Acciona, said at the recent EnerCluster event in Navarre: “although solar may be cheaper, the value of a wind MWh will always be higher”. And this is something legislators should bear in mind when planning future auctions: for regions such as Navarre, countries such as Spain, or even for Europe as a whole, the return on investment in wind power is very high, since a large percentage of the investment is local or regional, whereas in solar the trend is for the manufacture of all the hardware to concentrate ever more in China.

Be that as it may, the trend is unstoppable: renewables are already the most installed source of new generation and will be even more so in the coming years. Now it is the turn of legislators and market planners to put the right mechanisms in place so that there is no risk of dying of success.

When I worked at Gamesa, we developed a customer satisfaction project based on face-to-face interviews with all customers, collecting answers to more than 50 closed-ended and 30 open-ended questions. It was a very powerful project in which more than 100 interviews were conducted in 20 different countries. The information obtained was invaluable and we presented it with different segmentations and KPIs so that business conclusions could be drawn and an action plan launched to correct aspects that could be improved or to reinforce things that were very well perceived. This action plan, with its results, was then presented to the clients who had been interviewed, closing the feedback loop and leaving everything ready for another round of surveys. As you can guess, it was a complex project that lasted 18-24 months and involved many people in the company.

The thing is, I am telling this not because I am one of the parents of the project, but because I remember that 5 or 6 years ago, in one of the external audits, after explaining to the auditor all the virtues of the project, he said to me: “all that sounds very good but… what is your NPS?”

 

For me, this story condenses the best and the worst of the famous Net Promoter Score™ or NPS: it is a simple metric, known by almost everyone, but at the same time capable of eclipsing the true objective of customer satisfaction surveys.

 

Before going into further analysis of the NPS, let’s remember that it is a metric published in 2003 by Frederick F. Reichheld together with Bain & Company and Satmetrix. It is based on asking the question “how likely are you to recommend our product to a colleague or friend?”, answered on a scale of 0 to 10, with 0 being the least likely and 10 the most likely. The novelty of the metric was the segmentation of the answers and the summary of the results in a final NPS value that only takes into account the highest and lowest scores.

NPS metric explained
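As a quick illustration of how the score is built (using the standard NPS convention of 9-10 promoters, 7-8 passives and 0-6 detractors), here is a minimal sketch with invented survey answers:

```python
# Minimal NPS sketch with invented survey answers on a 0-10 scale.
scores = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]

promoters  = sum(1 for s in scores if s >= 9)   # answers of 9 or 10
detractors = sum(1 for s in scores if s <= 6)   # answers of 0 to 6
nps = 100 * (promoters - detractors) / len(scores)

print(f"Promoters: {promoters}, Detractors: {detractors}, NPS: {nps:.0f}")
# 5 promoters and 2 detractors out of 10 answers -> NPS = 30
```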

The truth is that, seen this way, there is no doubt that it is a very attractive metric for its simplicity and its ability to synthesize aspects such as brand loyalty, growth potential, customer perception, and so on. And it was precisely this that made the NPS the metric preferred by analysts, as it was simple and could be correlated with future growth. This not only meant that companies were measured externally by their NPS, but many also adopted it as an internal metric, often tied to variable pay or bonuses. And we already know what happens when something is tied to remuneration: the objective becomes improving the NPS as a metric, rather than improving the product or service so that the NPS improves as a result.

 

Let’s take a closer look at some of the limitations I’ve encountered when applying the NPS:

  1. The interviewee must be the decision maker: this makes the NPS perfect for B2C but limits it a lot in B2B. In relationships between companies, where the purchase decision does not depend on a single person, the answer to the NPS question is still the subjective perception of the interviewee. This can be alleviated by interviewing several different contacts, but it remains very subjective and leaves out many key aspects of B2B relationships.
  2. All answers carry the same weight: this is fine for very homogeneous client portfolios, but imagine a portfolio of 10 clients where client1 and client2 account for 80% of turnover. In an extreme case, all clients could be promoters except client1 and client2, who are detractors. The NPS would then be 60, a more than respectable figure that would not reflect the critical situation of being on the verge of losing 80% of revenue (see the revenue-weighted sketch after this list).
  3. The question is personal: this is key to linking the answer to loyalty, retention and so on, but it also makes it subject to interpretation and, above all, to cultural differences. In fact, this is a real problem, since the idea of recommending something to a family member or friend is interpreted very differently depending on the country.
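To illustrate limitation 2, here is a sketch of a revenue-weighted variant. This weighting is not part of the official NPS definition, and the portfolio figures are invented:

```python
# Revenue-weighted NPS sketch: clients weighted by their share of turnover.
# The weighting is not part of the official NPS definition; figures are invented.
clients = [  # (name, recommendation score 0-10, annual revenue in M)
    ("client1", 4, 40.0), ("client2", 5, 40.0),     # 80% of turnover, detractors
    ("client3", 9, 2.5), ("client4", 10, 2.5), ("client5", 9, 2.5),
    ("client6", 10, 2.5), ("client7", 9, 2.5), ("client8", 10, 2.5),
    ("client9", 9, 2.5), ("client10", 10, 2.5),
]

def category(score):
    """+1 promoter (9-10), 0 passive (7-8), -1 detractor (0-6)."""
    return 1 if score >= 9 else (-1 if score <= 6 else 0)

classic = 100 * sum(category(s) for _, s, _ in clients) / len(clients)

total_revenue = sum(r for _, _, r in clients)
weighted = 100 * sum(category(s) * r for _, s, r in clients) / total_revenue

print(f"Classic NPS          : {classic:.0f}")    # 60 -> looks healthy
print(f"Revenue-weighted NPS : {weighted:.0f}")   # -60 -> reflects the real exposure
```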

What I propose, in order to take advantage of the NPS while minimizing its limitations, is to correlate it with some other metric that captures the answers about the client experience.

As explained in the matrix above, this is a good way to check if the NPS segmentation matches the average satisfaction obtained through various questions about the business relationship.
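A minimal way to run that cross-check, assuming you store each client’s answer to the NPS question together with an average satisfaction score built from the closed-ended questions (all values below are invented):

```python
# Cross-check sketch: NPS answer vs. average satisfaction per client (invented data).
from statistics import correlation   # Pearson correlation, Python 3.10+

nps_answers      = [9, 10, 6, 8, 4, 9, 7, 10, 5, 9]                     # 0-10 scale
avg_satisfaction = [4.2, 4.6, 3.1, 3.9, 2.5, 4.4, 3.5, 4.8, 2.9, 4.3]   # 1-5 scale

r = correlation(nps_answers, avg_satisfaction)
print(f"Correlation between NPS answer and average satisfaction: {r:.2f}")
# A low correlation flags clients whose NPS does not match their overall experience.
```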

 

In conclusion, the NPS is a very useful metric, especially for B2C companies with a large brand-driven sales component, but it should not become an objective in itself; it should be one tool within a broader project whose goal is to measure customer satisfaction and perception in order to make better decisions when improving the product or service.

A few months ago I was reading the book Welcome to Hard Times by one of my favourite writers, the great E.L. Doctorow, which narrates the hard life of the first Wild West towns created in the midst of a gold rush. Many people came, drawn by the possibility of finding gold, but in reality the chances were very low and the risks very high. At the same time, the book describes the lives of some of the “service providers” of these mining settlements: the supply store, the saloon, the blacksmith, and so on. In one passage, one of the miners complains that all the money he earns extracting gold is spent on supplies and “entertainment”…

These service providers manage to square the circle: recurring income and limited risk (not counting the risks of operating in the Wild West, of course) in a sector as risky as gold prospecting. Is this example extrapolable to the present day? Can a product or service be designed for a high-risk sector while limiting exposure to that risk? Is it possible to enter a volatile sector and achieve a stable income?

A priori it seems difficult, because profit and risk are intrinsically related, but if we look more closely, every high-risk sector has niches of moderate risk that will certainly not offer the sector’s potential returns but can be a growing source of income. Let’s look at some examples:

 

  • Crypto currencies

Possibly one of the most volatile and risky sectors of the moment. Bitcoin was worth $958 in Dec ’16 and within 12 months it was worth $19,343, that is, it appreciated by an awesome 1,919%! It currently stands at around $6,500, which means that in 10 months it has lost 66% of its value.

bitcoin price evolution

This is a scenario where any product tied to the value of bitcoin carries very high risk. But there are services that the cryptocurrency sector needs in order to operate, such as the flourishing business of mining gear suppliers or, in other words, the IT infrastructure used to mine bitcoins. They are the updated version of the supply sellers for the old gold prospectors. Fluctuations in the value of bitcoin are well cushioned in their profit and loss accounts, as much of their revenue comes from fixed tariffs. As long as there are bitcoin miners, their services will be needed. They have managed to isolate their own risk from that of their customers and believe the market will recognize it, so some of them, such as Canaan and Ebang, are already preparing their IPOs. Among them there are undoubtedly successful companies such as Bitmain, which is estimated to have annual revenues in excess of $2,500m.

 

  • E-Commerce

Once again, a sector with great dynamism and high risk. Today e-commerce is very well established but still has very high failure rates (some place them between 80% and 97%). Growth in this field is expected to be very pronounced, especially due to the irruption of purchases through mobile (m-commerce), which according to some studies could account for 73% of all e-commerce by 2021.

mobile commerce forecast

However, there are services intrinsically linked to e-commerce that are independent of the success or failure of each individual business, such as payment services. This is one of the most successful sectors nowadays: companies offering payment platforms for web and mobile applications. The big internet companies such as Amazon, Google or PayPal already have platforms of this type, but there is a multitude of new players innovating in these services, such as Stripe and Square in the USA or PayU and LemonWay in Europe.

 

Again the plan is simple: to obtain regular income through commissions in a market with high volatility.

 

If someone is thinking of entering a high-risk sector but limiting those risks, here are some tips:

  1. Study the sector’s value chain, identifying the different players.
  2. Identify the risk drivers. In the case of bitcoin, these would be price volatility as well as limited liquidity.
  3. Look for support services that depend as little as possible on the success or failure of individual companies. These will normally be infrastructure activities, but they may also be consultancy or start-up services.
  4. Design an income structure with a majority of fixed revenue and a minority of variable revenue tied to market evolution.

 

Evidently, these are common-sense tips whose feasibility will depend on your bargaining power, but the conclusion of these reflections is that you should never rule out a high-risk sector or market a priori, because all of them contain niches with limited risk.

All of us who believe that the electric vehicle (EV) is the future of transport pin our hopes on a rapid reduction in battery costs that will make the average price of an EV lower than that of a traditional internal combustion engine (ICE) vehicle within a few years. But this scenario has a small problem: it assumes that ICEs do not improve (or only do so at the rate we already know). And what is clear is that an ICE greatly improved in fuel consumption, noise and price would be the real competitor to EVs.

 

According to Bloomberg, EVs could be cheaper than ICEs in 7 years.

But as shown in the graph above, the price of ICEs is assumed to be flat. It is true that the automobile sector has never stood out for its technological leaps or sudden competitive advances, but assuming that the world’s largest industry will not improve its products is quite risky.

There are many theories that try to explain why the automobile is, comparatively, one of the technological inventions that has evolved the least over time. In more than 100 years of life, the basic concept it was born with remains in place today. Progress has been relatively modest compared with other similar inventions such as airplanes, trains and telephones, and has always been based on new materials, electronics and safety regulations. Beyond the conspiracy theories circulating on the Internet, I believe this is due to two fundamental reasons:

  • It is a high-volume, capital-intensive industry where the goal is not to seek a revolution that makes all assets obsolete but to find ways to make the investment more profitable. That is why cost reduction is always a priority.
  • There has never been any pressure from customers or other competitors to seek something radically different. Competition has been based more on brand building and segmentation than on technological revolutions.

But suddenly this has changed: there are new competitors with new concepts, the market expects a change, customers demand clean transport options, regulation pushes to eliminate emissions… Is this the end of the ICE?

 

Let’s travel back to the end of 1998. I had just finished my degree and started working in Madrid at the telecommunications company Lucent Technologies. I arrived in a city that was already being turned upside down by the trenches dug for the cable companies. In the midst of the dot-com boom, Spain as a whole was one big trench that aimed to bring cable (fiber + coaxial) to every home and, with it, a package of services that was very competitive for the time: telephony, internet at up to 300 kbps, a multitude of TV channels, pay-per-view, etc. In short, new competitors offering a new, higher-quality product than what existed so far… Doesn’t that sound similar to the EV vs ICE case?

 

Telefonica, until then a monopoly, faced competitive pressure it had never known. It had a copper network reaching every home in Spain, but the truth is that nobody gave a damn about that asset: it was an obsolete network that, experts said, could never support speeds above 50 kbps (who doesn’t remember those 33 kbps modems?). But in that same 1998 a standard was published that will be familiar to all of you: ADSL. Suddenly the copper lines could support up to 8 Mbps with minimal investment. Telefonica launched its first ADSL offer in 1999; at Lucent we could hardly keep up with the demand for equipment, and the rest is history: ADSL devoured cable with successive improvements, and it was not until 2017 that the number of fiber connections surpassed xDSL lines for the first time. Even today DSL keeps improving, with commercial offers of 300 Mbps and laboratory tests of up to 1 Gbps!

Number of Internet connections in Spain by year

 

The funny thing about this story, which many people do not know, is that ADSL was invented at Lucent’s Bell Labs in the 1980s and was kept in a drawer until the competitive pressure from cable made the big telcos need a solution to upgrade their copper network services. ADSL was a great deal of business for Lucent and Alcatel as manufacturers, but especially for Telefónica, which amortized its copper network, eliminated competitors and bought itself 15 years to build a winning fiber offer (Fusión).

 

The lesson of this story is that some car manufacturers will certainly fight to amortize their current assets, and I would bet that in a few years we will see the ADSL of the car: perhaps a new generation of gasoline engines with consumption below 1 l/100 km, low noise and reduced maintenance? We will see, but if that is the case, the massive arrival of EVs would be delayed a few years, new competitors would have a hard time surviving and traditional manufacturers would have achieved their goal: turning the revolution into what they do best, another evolution.

What does it take to set up a Competitive Intelligence (CI) system? Is there specific software? Is it very complicated? Is it expensive? … These are some of the most frequent questions that arise when someone is considering launching a CI project.

 

Competitive intelligence or CI is based on 4 steps:

  1. Plan the objectives of the project according to the business needs.
  2. Collect information about competitors, then structure and store it.
  3. Analyse it and obtain insights useful for the development of your own product and for commercial activity. The quality of these insights will depend on the quality of the information collected as well as the expertise of the CI team.
  4. Create reports and distribute the information to the target users.

So let us focus on steps 2 and 4: information collection and dissemination. Let’s look at the options available when setting up a system:

Self-managed commercial software: there are several commercial tools that can be purchased and installed on local servers. In future posts we will review some of them, but I can already say that, given their power and price, they are solutions better suited to large organizations.

Software as a Service (SaaS): these are replacing the previous ones. Basically, the entire information-gathering process is outsourced to a specialist and the client gets web access to a reporting tool. They are very flexible solutions that can be adapted to all types of customers.

“Homemade” (HM) system: use simple tools (most of them free) to create a basic CI system.

 

Undoubtedly, the “HM” system has two clear advantages: cost and knowledge acquisition. In my experience, it is a very good way to become aware of your business’s real CI needs so that, if you end up opting for commercial software, it will be much easier to identify which one is the most suitable.

 

Let’s quickly review some examples of tools that can help us. The first thing to do is to identify the type of information we want to collect as well as the sources. In our example, we will collect information on competitors’ main products, focusing on product news, financial data, patents, trademarks, technical specifications and contracts. Let’s see how to get this information:

News: most commercial software works in this field, sorting news by keywords. The most common approach is to work with Google directly, but there are tools like Feedly that make the job easier. Feedly is the improved successor of the great but discontinued Google Reader. You can organize your sources (including Google News) and, for little money, the premium version adds collaborative options for teams.
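As a “homemade” alternative, a few lines of Python can pull competitor headlines from an RSS feed. The sketch below uses the feedparser library together with Google News’ RSS search endpoint; treat the URL format and the example competitor names as assumptions to adapt to your own watch list.

```python
# Homemade news-collection sketch: pull competitor headlines from RSS feeds.
# Assumes the feedparser package (pip install feedparser) and the Google News
# RSS search endpoint; adapt the queries/sources to your own watch list.
import feedparser
from urllib.parse import quote_plus

COMPETITORS = ["Siemens Gamesa", "MHI Vestas", "GE Renewable Energy"]  # examples

for name in COMPETITORS:
    url = f"https://news.google.com/rss/search?q={quote_plus(name)}"
    feed = feedparser.parse(url)
    print(f"\n=== {name} ===")
    for entry in feed.entries[:5]:                 # latest 5 headlines per competitor
        print(f"- {entry.get('published', 'n/a')} | {entry.title}")
        print(f"  {entry.link}")
```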

Financial data: if the company is listed, Google Finance or Yahoo Finance are good free options, but my favourite is definitely the Wall Street Journal. If it is not listed, it is best to go to its website and look at the investor presentations. If no information is available, there are specialized online companies such as Einforma that consolidate all the public information for a small fee.
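For listed competitors, the same homemade approach works. The sketch below uses the community yfinance package, an unofficial Python wrapper around Yahoo Finance data that is not one of the tools mentioned above and is offered here only as an assumption; the tickers are examples.

```python
# Homemade financial-data sketch for listed competitors.
# Assumes the community yfinance package (pip install yfinance), an unofficial
# wrapper around Yahoo Finance; the tickers below are just examples.
import yfinance as yf

TICKERS = {"Vestas": "VWS.CO", "Siemens Gamesa": "SGRE.MC"}

for name, ticker in TICKERS.items():
    hist = yf.Ticker(ticker).history(period="1y")   # daily prices, last 12 months
    last_close = hist["Close"].iloc[-1]
    change = 100 * (last_close / hist["Close"].iloc[0] - 1)
    print(f"{name:15s} last close: {last_close:8.2f}   1y change: {change:+.1f}%")
```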

Patents: Google Patents is a very powerful search tool and covers all the major markets. Obtaining insights from this information is likely to be beyond the reach of a CI team, but it at least serves as an indicator of technological activity.

Trademarks: there are search engines for both EU (TMview) and US (TESS) trademarks. TMview is the most convenient, as it includes results from countries all over the world.

Documents: the best thing is to store the PDFs, presentations, etc. related to each competitor in a document manager but, if none is available, at least define a file-naming and directory policy so that all the information is well catalogued and ready for a future migration to a document manager.

 

When it comes to distributing the information, it is advisable to first create lists of users according to the type of information they are looking for. There should be at least three groups: strategy, technology and commercial. As for the media, the main ones are:

Newsletters: the main push-type medium. There are a multitude of tools for creating them. If you want a quasi-professional but free solution, the best is MailChimp. As a homemade solution you can always design your newsletter in Word format and send it with Outlook or similar.
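If you prefer to script that homemade option instead of using Outlook, a minimal sketch with Python’s standard smtplib could look like this; the SMTP host, credentials and addresses are placeholders to replace with your own.

```python
# Homemade newsletter sketch using Python's standard library (smtplib + email).
# The SMTP host, credentials and addresses are placeholders; replace with your own.
import smtplib
from email.message import EmailMessage

RECIPIENTS = ["strategy-list@example.com", "technology-list@example.com"]

msg = EmailMessage()
msg["Subject"] = "CI Newsletter - week 12"
msg["From"] = "ci-team@example.com"
msg["To"] = ", ".join(RECIPIENTS)
msg.set_content("Competitor X announced a new 10 MW turbine...\n(full summary attached)")

with smtplib.SMTP("smtp.example.com", 587) as server:   # your mail server
    server.starttls()                                    # encrypt the session
    server.login("ci-team@example.com", "app-password")  # placeholder credentials
    server.send_message(msg)
```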

Intranet portal: the main pull-type medium. If you have some basic knowledge and IT support, the best option is WordPress. If you need a collaborative environment, the de facto standard is MS SharePoint (not free).

Collaborative environments: push-pull type. From WhatsApp lists to tools such as Slack, Yammer or Alfresco. They are very useful for getting direct feedback from users and for opening information channels, for example during a trade fair or when a competitor launches a new product.

My final recommendation for anyone who wants to launch a CI process in their company is to start in-house in order to identify the needs, typical users, information sources and reports and, once all this is clear, to decide which tools are most appropriate depending on the volume of information, the number of users, the resources available, etc. In other words, the tools must be at the service of the process and never the other way around.

A few weeks ago I attended the most important solar fair in Europe, Intersolar, in Munich. In addition to solar energy, this fair is a reference for e-mobility and energy storage. And the feeling I came away with is that batteries and their energy storage applications are the market’s new boom. Many of the things I saw reminded me of the dot-com boom 20 years ago when, among other things, datacenters emerged as a large-scale business model. And that flashback, besides making me aware of the unrelenting passage of time, gave me the opportunity to analyze this analogy in a little more depth.

  • It is an infrastructure business that serves as an “enabler” for other businesses. In the case of datacenters, it was the enabler of e-commerce, cloud applications, etc. In the case of energycenters, we will have to wait and see how to monetize breaking the first commandment of grid operators: “generation and demand must be identical”.
  • Both are businesses based on efficient hardware management using advanced thermal management and control software. Both scale well, and savings are achieved by centralizing the infrastructure.
  • Both base their profitability on the cost-reduction curve. In the case of datacenters, it is heavily influenced by Moore’s law. In the case of energycenters, as they rely on electrochemical technology, we cannot expect such spectacular curves but, according to BNEF’s latest report, the cost reduction will take us to the mythical figure of 100 $/kWh within a few years.

  • The value chain in both is very similar

Currently, energy storage attracts players of many different types: from large industrial conglomerates such as LG, Panasonic, Siemens, GE or ABB, through IPPs such as RES or AES, to specialists in the new sector such as Tesla, BYD or Leclanché. If we look at datacenters, we can see that 20 years ago there were also many players, but the sector is now evolving towards two large groups of companies:

Infrastructure providers: specialists such as Equinix or Cyxtera, or telcos such as China Telecom, which maintain the centres and the associated hardware.

Service providers: Amazon, IBM, Google… they lease the above capacity and offer “cloud” services.

In the case of energycenters, infrastructure providers are likely to be large companies such as utilities, with the capacity to make large investments with medium-term returns. But it is on the service-provider side where the most exciting battle seems to lie, where perhaps we will see big Internet companies like Google alongside new companies like Tesla and, certainly, classics like Siemens or IBM. In fact, I was struck by the fact that one of the most spectacular stands at Intersolar was Mercedes Energy’s, which did not actually show any specific product but focused clearly on energy storage solutions. Whoever develops services that fully leverage the capabilities of the technology (as Amazon did with its cloud services) will revolutionize the market. And all of this has a lot to do with Smart Grids, a field on which many giants of both energy and IT are focusing their efforts.

 

It seems clear that energy storage is going to change the management of renewables, as well as the grid management currently carried out by operators, but I believe it will not happen through distributed private installations but rather as “virtual” storage services offered by companies that, in turn, will rely on optimised centralised infrastructure. Initially it will be aimed at large power generators/consumers, but it will evolve towards a scalable service within the reach of both large customers and individuals.

 

But all this will only happen if energy storage is profitable. Currently, the main source of income is frequency regulation, which is usually paid for by the grid operator. As storage costs fall, new business applications will emerge. For the moment, everyone wants to be well positioned for what is supposed to come in the near future: energy storage as a tool to revolutionise electricity management.

What is product management? What does it consist of? How does it help improve a product’s competitiveness? I tried to answer some of these questions in the very first post of this blog, but it is clear that either I was not very clear or it was not widely read, because people keep asking me what WeMake does. So I will try to explain it better through a case study: Madonna.

Product management references have changed over time. In the 1980s, large industrial conglomerates such as GE or Siemens set the example in managing a range of successful products. In the 1990s, it was consumer multinationals such as Nestlé and Procter & Gamble that revolutionised the field with their brand portfolio management. Entering the new millennium, all the business newspapers highlighted LVMH as the guru of brand/product management. And who would be the benchmarks in this field now? Everyone thinks of Apple as a great brand and product creator, but I think the ones managing products best right now are the sports and artist management companies. Large players such as CAA or Excel mgmt are probably the ones who deploy all the tools of product management and marketing most intensively.

Let’s look at a universal example to illustrate this: Madonna. Some will define her as a singer, others as an artist, perhaps a few bold souls as an actress, but the truth is that she is a global consumer product, and a very successful one at that.

If you look at Madonna’s career, it fits the classic product life-cycle curve very well:

  • Introduction: Madonna appeared in the 80s in dazzling fashion. Like a Virgin and True Blue were her first two mass-market albums, selling a whopping 21 and 25 million copies respectively.

 

  • Growth: records like Like a Prayer or Bedtime Stories and their associated scandals allowed her to grow and establish herself as a mass idol.

 

  • Maturity: with Ray of Light and, above all, with Confessions… she reached her zenith: for the first time she combined mass success and critical recognition. She became a musical icon that set trends.

 

  • Decline: her latest albums have gone fairly unnoticed and, above all, her moves no longer set trends.

 

How have Madonna and her team managed to keep this product successful for over 30 years? Here are 3 keys:

  1. Create a good product

Regardless of each person’s musical tastes, it is indisputable that Madonna has always offered a quality product. She has surrounded herself with the best producers, composers, video directors and designers. A good product is not the result of chance but of the work of very good professionals.

  2. Get to know your customers and position yourself

Madonna’s offering has evolved and adapted to the needs of her audience. Her provocative attitude, styling and even her early videos have nothing to do with the sophistication and attention to detail of her mature period.

  3. Evolve, improve, reinvent yourself

Perhaps only Bowie can compare to Madonna’s capacity for reinvention. With each album, a new and improved product of the successful “Madonna” brand has been created. Each new “product” included not only the music but also new sounds, new aesthetics, powerful videos and even new public and private scandals (which happened to coincide with the product launch periods).

 

In conclusion, the Madonna brand has been launching successful products on the market for 35 years… we will see what her next product upgrade is. My bet is on world tours with a nostalgic wink to attract the 30-50 year old audience who are willing to pay a lot of money to attend what may be the pop queen’s last tour.

One of the steps that raises the most doubts when launching a new product is trademark protection and registration. Let’s review the main questions that usually come up:

  1. Is it really necessary to register the trademark?

It depends. The aim of registration is to ensure that no one else can use the same name or logo in the same sector and/or market. The first thing to do, therefore, is to assess whether this danger really exists in each case. There are consumer sectors where the brand is key to differentiation and other, more industrial sectors where the brand is more descriptive. As general guidelines, my advice would be:

  • Always register
    • Company’s trade name
    • Generic brand of product family that will have a long life cycle
    • Specific product brands in consumer sectors
  • Assess product names case by case in industrial or B2B sectors, depending on how intensive the sector is in brand communication and on the level of competition. The level of investment in the brand is a good criterion for deciding whether registration is worthwhile, since registration protects the intended investment.
  • Do not register descriptive or functional trademarks for industrial sectors that are not very active in brand creation.
  2. Is the registration global and absolute?

No. Registration is requested based on 2 parameters:

  • Geographical: you can apply country by country through each state’s trademark office (the Spanish office, for instance), regionally in the EU (the European office) or even worldwide (the international office). The latter actually covers 80 countries. Bear in mind that both the price and the likelihood of opposition increase as the geographical scope expands.
  • Classes: these are the sectors of activity in which the registration applies. They are organized into 45 categories or classes according to the Nice Classification. Again, it is important to limit the number of classes, as both the price and the likelihood of opposition increase significantly with each additional class.
  3. Is it too expensive?

The registration fee for 10 years ranges from around €120 for a single class in Spain to €850 for one class in the EU. Adding more classes and/or more countries adds to the bill, so a worldwide registration in 3 classes will run to several thousand euros. That is why it is important to be clear about where you want to protect your brand.

  4. Do you have to hire specialized lawyers for the whole process?

It is not mandatory, although in some cases it is advisable. My advice is that anyone who is clear about which trademark to register should do it online, as it is very simple and fast. Remember that before registering it is highly advisable to run a prior search to avoid failed applications.

For those who have doubts about the countries or classes in which to register and, above all, if any kind of opposition or claim from a third party arises during the process, it is highly advisable to consult IP lawyers.

  5. What is the period of validity of the registration?

The registration is valid for 10 years from the moment of approval. It is advisable to clearly assign trademark management to a department or function within the company so that renewals, oppositions, competitors’ registrations, etc. can be followed up. It is very common for a trademark to be lost after 10 years simply because the renewal was neglected.

 

In short, trademark registration policy is one more part of the brand-building strategy and therefore needs to be consistent with the approach, investment and scope you want to give the brand.