Five Must-Know Steps for Confidently Predicting Hotel Performance

Written by: Nate Nasralla

In the first post of this two-part blog series, we outlined the most common mistakes that result in errant and inefficient forecasting processes.

We also noted that, while many parts of the hospitality business, from reservations to revenue management, have embraced technology, forecasting remains an antiquated exercise dominated by Excel spreadsheets, heuristics and gut feel. Finally, at the root of these inaccuracies is the lack of a process-driven approach that can be replicated and audited without bias.

Now, with this context in mind, we will cover five practical steps your team can take to begin producing accurate and efficient forecasts.

Finance and technology teams that are not yet regularly leveraging driver-based models (DBMs) as the cornerstone of their forecasting program have much to gain. A DBM is not only among the most accurate methodologies available; it also focuses cross-functional conversations and efforts on the strategic business drivers that, when actively managed, will yield the greatest financial results.

Here is a breakdown of practical steps you can take to implement a DBM in your hospitality organization:

1. Identify your organization’s key business drivers.  

You will first need to lay out a basic understanding of how your business actually operates. As routine as this may seem, it is very likely that different people view your organization differently. For example, is pricing (ADR) the greatest driver of transient bookings? Is it marketing spend? Or something else entirely? Agreeing on which metrics hold the most explanatory, and therefore predictive, power relative to others is a critical step in laying the groundwork for an effective driver-based process.
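One way to make the agreed-upon relationships explicit is to record them as a "driver map" from each metric to the lower-level drivers that feed it. This is a minimal sketch; the metric names and relationships below are hypothetical, not a prescription for your property:

```python
# Hypothetical driver map: each metric lists the lower-level
# drivers that feed it. Metrics absent from the map are leaves.
DRIVER_PATH = {
    "room_revenue": ["occupied_rooms", "adr"],
    "occupied_rooms": ["transient_bookings", "group_bookings"],
    "transient_bookings": ["adr", "marketing_spend"],
}

def leaf_drivers(metric, path=DRIVER_PATH):
    """Return the set of lowest-level drivers beneath a metric."""
    children = path.get(metric)
    if not children:
        return {metric}  # no children recorded: this is a leaf driver
    leaves = set()
    for child in children:
        leaves |= leaf_drivers(child, path)
    return leaves

print(leaf_drivers("room_revenue"))
```

Writing the map down this way forces the team to agree on one view of the business, and the leaf set it produces tells you exactly which metrics you must forecast directly in step 4.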

2. Structure all relevant data sets.

Obviously, a mathematical model without data is a ship without a sail: it will not get you very far. You can think about sourcing data to feed your model in three sub-steps: (1) Which internal data sets are needed, based on your drivers, and where are they stored? Is any standardization required? Should outliers be included or omitted? (2) How granular do you need to get? Do you have the compute power to work with transactional data, or do you need to fall back on coarser aggregates? If the latter, what detail or explanatory power will you sacrifice? (3) Will external data sets, such as weather or local events, be included to help boost your model's accuracy?
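As a small illustration of sub-step (1), outlier screening can be as simple as dropping points that sit too many standard deviations from the mean. This is a sketch under assumed thresholds; the cutoff, and whether dropping outliers is appropriate at all, is a judgment call for your team:

```python
def clean_series(values, z_cutoff=3.0):
    """Drop points more than z_cutoff standard deviations from the mean.

    The cutoff is an assumption; tune it to your data. Note that in
    very short series a single extreme point can never exceed a z-score
    of sqrt(n - 1), so a lower cutoff may be needed.
    """
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    std = variance ** 0.5
    if std == 0:
        return list(values)  # constant series: nothing to screen
    return [v for v in values if abs(v - mean) / std <= z_cutoff]

# A hypothetical daily-bookings series with one data-entry error:
print(clean_series([100, 102, 98, 101, 99, 500], z_cutoff=2.0))
```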

3. Statistically validate the structure of your DBM.

Once the first version of your DBM is built and your data is prepared, you will want to use statistical analysis to test the validity of your selected drivers and to refine the relationships you have mapped. The goals here are to confirm your drivers are highly predictive of the outcome measure, and to separate correlation from causation.
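A minimal version of this check is to fit a simple least-squares line for one driver against the outcome and inspect R², the share of variance the driver explains. This sketch is illustrative only; a real validation would use a statistics library, multiple drivers, and holdout data, and a high R² alone does not establish causation:

```python
def r_squared(xs, ys):
    """R^2 of a one-driver least-squares fit of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical example: marketing spend vs. transient bookings.
marketing_spend = [10, 20, 30, 40]
transient_bookings = [105, 118, 133, 151]
print(round(r_squared(marketing_spend, transient_bookings), 3))
```

Drivers with consistently low explanatory power are candidates for removal from the driver map; drivers with high power still need a plausible causal story before they anchor your forecast.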

4. Work bottom up, then double check top down.

After confirming you have built a statistically sound DBM, begin your forecast by predicting the future values of the metrics at the lowest level. From there, roll these lower-level forecasts up along your "driver path," working from the bottom up, to arrive at the predicted value of your outcome metric, such as room revenue. Guidance from leadership typically takes a top-down approach ("We need to grow transient bookings by 20 percent while maintaining our ADR"), so you can double-check the soundness of your bottom-up results by working top down.
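The roll-up and the top-down check can be sketched in a few lines. The figures and the 5 percent tolerance below are assumptions for illustration, and a real driver path would have more levels than this single multiplication:

```python
def roll_up(occupied_rooms, adr):
    """Bottom-up room revenue: occupied room-nights x average daily rate."""
    return occupied_rooms * adr

def top_down_gap(bottom_up, target, tolerance=0.05):
    """Relative gap vs. the top-down target, and whether it is within tolerance."""
    gap = abs(bottom_up - target) / target
    return gap, gap <= tolerance

# Hypothetical quarter: leaf forecasts say 9,000 occupied room-nights
# at a $150 ADR; leadership's top-down target is $1.4M room revenue.
revenue = roll_up(9000, 150)
gap, ok = top_down_gap(revenue, 1_400_000)
print(revenue, round(gap, 3), ok)
```

When the two views disagree beyond tolerance, the discrepancy itself is useful: either a leaf forecast is off, or the top-down guidance is not achievable under current drivers.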

5. Back-test your model and refine it for greater accuracy.

Long term, you will want to implement a system for ingesting actual data to gauge the efficacy of your model. It is important not only to correct false assumptions discovered over time, but also, as your business evolves, to adjust the drivers in your model to match reality. If there is a significant decision or change in the business's structure, or you see a large gap between your predicted and actual results, you will want to investigate why the variance exists and refine your model accordingly.
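One common way to score a back-test is mean absolute percentage error (MAPE) between forecasts and actuals, flagging the model for review when the error crosses a threshold. This is a sketch; the 10 percent threshold is an assumption, and MAPE breaks down when actuals can be zero:

```python
def mape(forecast, actual):
    """Mean absolute percentage error; assumes no actual value is zero."""
    return sum(abs(f - a) / a for f, a in zip(forecast, actual)) / len(actual)

def needs_review(forecast, actual, threshold=0.10):
    """Flag the model for investigation when back-test error exceeds the threshold."""
    return mape(forecast, actual) > threshold

# Hypothetical monthly room-revenue back-test (forecast vs. actual, $K):
forecast = [1350, 1280, 1410]
actual = [1300, 1325, 1390]
print(round(mape(forecast, actual), 4), needs_review(forecast, actual))
```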

Nate Nasralla is director, data analytics at Nodin.ai. He is responsible for building and executing market strategies to transform the nonprofit and private sectors through data-driven approaches to revenue generation and organizational scalability.
