In our last post, we discussed how to select new technologies for your tech stack. As a reminder, this decision is critical to the long-term health of both the product and the organization, and usually appears at critical inflection points. So now that we’ve discussed how to select new technologies, in this post we’ll tackle implementing new technologies.
Before we get started, I want to emphasize that the frameworks that we’ll explore below are valid across many different kinds of product launches.
As product managers, we should strive to leverage reusable frameworks and guidelines rather than blindly memorizing formulas or processes. That way, our experiences build on top of one another and enable us to tackle broader and deeper challenges over time.
So, you followed our advice in selecting a technology. You’ve spent hours and days and weeks conducting an in-depth assessment on what technology to use, and you have buy-in across your organization. It’s time to implement your chosen technology, right?
Not so fast. Before you roll out the new technology, you need to pilot it first.
When you pilot – that is, when you use the technology with a small test group – you’ll reduce the risk of organization-wide implementation, and you’ll be able to collect feedback and data on how to improve your rollout.
So, let’s talk about how to effectively pilot your new technology.
Piloting New Technologies
The goal of a pilot is to discover and reduce risk. Therefore, the first set of tasks is to identify metrics of success and thresholds for go / no-go decisions.
What are the key metrics that matter for the organization? How will you run the pilot to confirm the hypothesis that this technology will drive these metrics in a positive, impactful, and measurable way?
At what point do you decide that the pilot is a failure, and that you should revert to your existing tech stack?
Be careful with slippery slopes. Even if these metrics and thresholds feel somewhat arbitrary when you first set them, hold yourself to them strictly once they're set.
Even if you only miss your threshold by half a percentage point, that data indicates that your existing tech stack may better solve for your needs and that you should seriously reconsider your efforts.
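The go / no-go check itself can be as simple as comparing pilot results against the thresholds you committed to in advance. Here's a minimal sketch in Python; the metric names and threshold values are purely hypothetical, and your organization's real metrics will differ:

```python
# Hypothetical success thresholds, agreed upon BEFORE the pilot begins.
THRESHOLDS = {
    "task_completion_rate": 0.90,   # at least 90% of pilot tasks completed
    "weekly_active_users": 0.75,    # at least 75% of the pilot group active weekly
    "error_rate": 0.02,             # at most 2% of sessions hit an error
}

# Metrics where a LOWER observed value is better.
LOWER_IS_BETTER = {"error_rate"}

def go_no_go(observed: dict) -> tuple[bool, list[str]]:
    """Return (go?, list of metrics that missed their threshold)."""
    misses = []
    for metric, threshold in THRESHOLDS.items():
        value = observed[metric]
        ok = value <= threshold if metric in LOWER_IS_BETTER else value >= threshold
        if not ok:
            misses.append(metric)
    return (len(misses) == 0, misses)

# Even a half-point miss counts as a miss -- the thresholds are strict.
decision, missed = go_no_go({
    "task_completion_rate": 0.895,
    "weekly_active_users": 0.80,
    "error_rate": 0.01,
})
print(decision, missed)  # False ['task_completion_rate']
```

The value of writing the check down this explicitly is that nobody can relitigate the thresholds after seeing the results.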
Once your metrics and thresholds are set, define the subpopulation that will pilot the technology.
Is this a group of engineers? Designers? Operations people? How will you select the group in such a way that it can help find risks and gather feedback, without skewing dramatically from the overall organization?
Afterwards, define the scope of the pilot. What will this group of people use this technology for? Will they be allowed to use the previous technology at the same time? How long will your pilot run?
Finally, execute your pilot using the parameters above.
Remember that failure itself can be valuable! If you execute in good faith but find that your pilot does not succeed, you’ll have an additional data point showing that your existing tech stack is the best identified solution at this point in time. That way, the organization can rest easy knowing that the status quo is the best decision based on current available information.
For the purposes of this article, we’ll assume that your pilot succeeded. Now we can dive into how to implement the technology across the entire organization.
Rolling Out New Technologies
Rolling out a new technology actually requires two plans.
You must plan for ramping up on the new technology, and you must plan for winding down on the existing technology.
Begin with your ramp up plan, since it’s better to have two systems in parallel than to have none at all. Before I discuss the ramp up plan in detail, however, I’d like to note that there are significant costs to having two technologies at once.
Your developers will need to develop and maintain two tech stacks in parallel, which will cut your team’s output. Your organization will need to pay licensing and usage costs for both tech stacks. Your operations teams will need to work in two different ways, which can lead to confusion and redundancy.
Ensure that your leadership team and your development team are aware of these costs and can accommodate them accordingly.
In designing your ramp up plan, address the following questions:
- What are the key tasks required to implement the new technology across the organization?
- What are the key dependencies that need to be addressed?
- For each phase, what percentage of the organization will be using the technology?
- For each phase, what set of use cases will not be covered by the new technology? How will the organization function in these scenarios?
- How will you measure success at each phase? If the phase is not successful, what actions need to be taken?
As I’ve worked through these questions, I’ve found that Gantt charts are incredibly helpful for designing a ramp up plan and visualizing dependencies.
The goal is not to have a date set in stone, but rather to get a sense of what happens when any particular task or milestone slips, and how that impacts the timing and resourcing required at each phase.
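One common way to enforce “X% of the organization per phase” is deterministic bucketing: hash each person’s ID to a stable bucket, and admit them once the current rollout percentage covers their bucket. Here's a small sketch, with phase names and percentages invented purely for illustration:

```python
import hashlib

# Hypothetical ramp up schedule: phase name -> % of the org on the new technology.
PHASES = [("pilot", 5), ("early_adopters", 25), ("majority", 75), ("full", 100)]

def bucket(user_id: str) -> int:
    """Stable bucket in [0, 100) derived from the user's ID."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def uses_new_tech(user_id: str, rollout_pct: int) -> bool:
    # Because a person's bucket never changes, anyone admitted at 5%
    # stays admitted at 25%, 75%, and 100% -- no one flip-flops
    # between the old and new technology as phases advance.
    return bucket(user_id) < rollout_pct
```

The design choice worth noting is determinism: random sampling at each phase would churn people between the two stacks, while stable buckets only ever move people forward.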
Now that you have your ramp up plan, design your wind down plan. Address the following:
- What are the key tasks required to deprecate the old technology across the organization?
- What are the key dependencies that need to be addressed?
- How will you phase the wind down to reduce disruption?
- How will you ensure that the organization uses the correct set of technologies as you deprecate the old technology?
- Which parts of your wind down plan can run in parallel with your ramp up plan? Which parts should happen only after your ramp up plan is complete?
Once you have both plans, secure buy-in across your stakeholders and your team, then execute the plans accordingly.
Regularly check in with stakeholders at pre-defined milestones. Remember, technological change sets the entire foundation for the company’s future. It’s essential that your leadership team is informed and empowered to drive the change alongside your efforts.
Be patient. Implementing new technologies can sometimes take quarters or years. While speed is critical, quality is even more critical.
Speaking of quality, let’s discuss pitfalls that I’ve personally witnessed when implementing new technologies, and how you can prevent falling for these traps.
Caveats with New Technologies
The most common problems I’ve seen with implementing new technologies are the following:
- Change management and training
- Configuration
- Metric tracking
- Changes to definitions of metrics
- Bug fixing
- Data migration
Let’s tackle them one by one.
1) Change management is crucial to the success of your new technology. You need to teach the entire organization not just how to do their jobs with a new technology, but also why they should embrace this technology.
Begin with the why. Create a concise and compelling story based on the in-depth analyses that you’ve performed in selecting a new technology.
Then, share this story with leaders, managers, and employees. Tailor your story to fit their roles, responsibilities, aspirations, and pain points.
Afterwards, talk about the how. Think through how you expect each role to interact with your technology, and what changes they’ll need to make. Consider how you can reduce friction and encourage adoption.
You need to convince people that your decision is the right one. Changing technologies is a high-cost extended project that can block many other opportunities. You are guaranteed to find resistance – and any resistance you face can cause your technology to be rejected.
2) Your new technology will need to be configured to meet your organizational needs. Without thoughtful configuration, your new technology will not be able to achieve its potential.
Understand what each of the different options means for the different teams in your company. Configuration can feel like “death by a thousand paper cuts” – you need to make hundreds or thousands of configuration decisions, each with their own implications and downstream impacts.
Just as with any product decision, decide how reversible or irreversible the decision is, what the magnitude of risk is, and what the potential upside is, then collaborate or communicate with stakeholders as relevant.
Your new technology can easily be killed off by the wrong configuration. Stay thoughtful as you and your team configure the technology accordingly.
3) You need to ensure that you can track metrics in your new technology. If you can’t measure your impact, you’ve failed.
Your new technology will be different from your old one in many ways. Identify all of the metrics you tracked in your old technology, as well as all of the metrics you’d like to start tracking.
As with any prioritization decision, you’ll find that you have limited bandwidth. Decide which metrics are the most critical to track.
Be especially careful when you decide not to track a metric that was previously tracked in your old technology. If you do not proactively explain your decision to stakeholders before you deprecate the metric, they will be furious that they lost valuable information.
4) Your metrics will change definitions. I guarantee it.
And when metrics change definitions, “engagement” suddenly no longer means engagement, and “retention” suddenly no longer means retention.
When embarking on this initiative, clearly document the definition of every single metric that your old technology was tracking. Then, clearly document the definition of every single metric that your new technology will track. Highlight the deltas between the two.
During the transition from old to new technology, and for at least one quarter afterwards, report both sets of metrics. Show the old definition and the new definition side-by-side. Once executives and employees are confident in the new metric definitions, you can use just the new definition moving forward.
I’ve seen entire sprints spent on trying to reconcile old vs. new metric definitions. Don’t fall into this trap – it will severely reduce your credibility and dramatically reduce your output. Proactively measure both definitions.
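Measuring both definitions is cheapest when you compute them from the same raw data in the same report. Here's a toy sketch; the session data and both “engagement” definitions are hypothetical stand-ins for whatever your old and new technologies actually measure:

```python
# Hypothetical raw session data: (user_id, session_length_in_seconds).
sessions = [("u1", 30), ("u1", 120), ("u2", 45), ("u3", 300), ("u3", 90)]

# Old definition (illustrative): an "engaged user" had at least one session.
def engaged_users_old(rows):
    return {user for user, _ in rows}

# New definition (illustrative): an "engaged user" had a session of 60s or more.
def engaged_users_new(rows):
    return {user for user, secs in rows if secs >= 60}

old = engaged_users_old(sessions)
new = engaged_users_new(sessions)
print(f"engagement (old defn): {len(old)}")   # 3
print(f"engagement (new defn): {len(new)}")   # 2
print(f"delta: {len(old) - len(new)} users")  # explains the "drop" to stakeholders
```

Reporting the delta alongside both numbers preempts the “why did engagement suddenly fall?” conversation before it happens.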
5) Bugs will appear as you implement your new technology, as with any other sort of product implementation.
However, you will face an incredible amount of pressure when you tackle bugs related to your new technology. This is because your executive team and your customers are less likely to understand what you are doing “under the hood”, and may therefore feel that you are not delivering new functionality quickly enough.
To mitigate this challenge, track every bug that comes up during the implementation, regularly review with stakeholders, and highlight what would happen if you didn’t fix the bug. Ensure that your stakeholders understand that you are preventing bad things from happening, and that prevention is just as important as new functionality.
Just as importantly, before you even begin implementation, create a comprehensive test plan that mimics the real-world use cases of your old technology. Ensure that you execute this test plan for every single phase of new implementation, and for every single phase of deprecation.
Remember this phrase: the more you sweat in practice, the less you bleed in battle. Proactive prevention will enable your organization to scale quickly and securely at every phase of your implementation.
6) If you’re working with a new data-related technology, data migration is one of the worst headaches associated with implementation.
For some period of time, your organization will need to work with two sources of truth – the old technology, and the new technology.
Your queries and reports will need to pull across both sources, and you need to have some clear way of moving data from the old system into the new system at just the right time.
To ensure success, do the following for each phase of implementation and deprecation:
- Define which data sets will move to what systems
- Plan for rollbacks
- Validate each migration
Right before you conduct a migration, pull summary stats in both systems. Then, pull summary stats in both systems after you migrate. Confirm that the totals for both sets align with one another.
If they don’t, execute your rollback plan, identify the root cause of the discrepancy, then try again.
From personal experience, I can tell you that it’s a nightmare when you have to pull off an ad-hoc data fix because you incorrectly and irreversibly migrated data without validating the data sets pre-migration and post-migration. The rollback plan is the most valuable component of any sort of data migration.
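The validate-then-rollback loop above can be sketched in a few lines. Everything here is hypothetical: `read_old`, `read_new`, `do_migrate`, and `rollback` stand in for whatever your own data layer provides, and row count plus an amount total are just two cheap invariants you might compare:

```python
def summary_stats(rows):
    """Cheap invariants to compare across systems: row count and sum of amounts."""
    return {
        "row_count": len(rows),
        "amount_total": round(sum(r["amount"] for r in rows), 2),
    }

def migrate_with_validation(read_old, read_new, do_migrate, rollback):
    """Migrate one data set, validating stats pre- and post-migration.

    All four arguments are callables supplied by your own data layer
    (this sketch assumes nothing about the underlying storage).
    """
    pre = summary_stats(read_old())    # stats in the old system, pre-migration
    do_migrate()
    post = summary_stats(read_new())   # stats in the new system, post-migration
    if pre != post:
        rollback()                     # discrepancy found: undo, then investigate
        raise RuntimeError(f"Migration mismatch: pre={pre}, post={post}; rolled back")
    return post
```

Because the rollback runs automatically on any mismatch, a bad migration becomes a retryable event rather than an irreversible ad-hoc data fix.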
Summary
Let’s pull together what we’ve learned in our last article and in this article.
Selecting and implementing new technologies is one of the most daunting but impactful responsibilities that a product manager can have.
To succeed, treat the initiative just like any other product decision and product launch.
Conduct customer research to discover pain, assess the status quo against known alternatives, weigh tradeoffs, and get buy-in across the organization.
Then, pilot the new technology, measure results, and gather feedback.
Finally, roll out the new technology and wind down the old one while keeping dependencies in mind. Stay mindful of known traps, and proactively plan for worst-case scenarios.
Throughout the entire process, communicate as much as possible.
Breathe. Be patient, calm, and persistent. You’ve got this! You’ve set the foundation for a fantastic future, for both your product and your organization.
And, just as importantly, you’ve honed an incredible suite of skills that will serve you well as a product leader and as an executive.
Have thoughts that you’d like to contribute around implementing new technologies? Chat with other product managers around the world in our PMHQ Community!
Clement Kao has published 60+ product management best practice articles at Product Manager HQ (PMHQ). Furthermore, he provides product management advice within the PMHQ Slack community, which serves 8,000+ members. Clement also curates the weekly PMHQ newsletter, serving 27,000+ subscribers.