Testing an E2E product is a fairly straightforward process: there is a defined, known output for every input, so we already know what the output should be. Sometimes the output is not well defined, and you may end up with disagreements about whether a particular result represents a bug.
This changes with software where a fixed expected output is no longer the case, like machine learning systems and predictive analytics.
There is a difference between the two. Machine learning systems are mostly based on neural networks; today we think of that as artificial intelligence.
Whereas predictive analytics adjusts its algorithms in production based on results fed back into the software. In short, the application works better based on how those rules worked in the past.
Neither type of product will ever produce an exactly repeatable result, and often they will produce an incorrect one. Most of these products are therefore tested during the training phase.
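Because exact outputs cannot be asserted, tests for such systems typically check aggregate quality against a tolerance instead of exact equality. Here is a minimal sketch of that idea; the `predict` function is a made-up stand-in for a trained model, with noise added to simulate non-determinism:

```python
import random

# Hypothetical model stub: a real model's predictions vary run to run,
# so we assert on aggregate quality, not on exact outputs.
def predict(x):
    # Stands in for a trained model; noise simulates non-determinism.
    return 2 * x + random.gauss(0, 0.1)

def test_model_within_tolerance():
    inputs = list(range(100))
    expected = [2 * x for x in inputs]
    errors = [abs(predict(x) - e) for x, e in zip(inputs, expected)]
    mean_error = sum(errors) / len(errors)
    # Pass if the average error stays under a chosen threshold,
    # instead of demanding exact equality on every prediction.
    assert mean_error < 0.5, f"mean error too high: {mean_error}"

test_model_within_tolerance()
```

The threshold (0.5 here) is a judgment call you agree on with the team, just like deciding which disagreements count as bugs.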
So, coming back to your question: for now, you can divide test cases into P0, P1 and P2 priorities. Work closely with the dev team to see which parts of the code each new build affects, and test those areas. Keep a history of failures and of bugs found by UAT or the client, along with the part of the code each one affects. This will help you identify frequent failures and which areas of the code fail most often. You can also take the frequency of each build as a parameter.
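The prioritization above can be sketched as a simple risk score. All names and weights below are made up for illustration: each area is scored by its historical failure count and by how often recent builds touched it, and the riskiest areas are tested first:

```python
from dataclasses import dataclass

@dataclass
class Area:
    name: str
    past_failures: int   # bugs found by UAT or the client in this area
    changed_builds: int  # builds (out of the last N) that touched this area

def risk_score(area, total_builds):
    # Weight failure history by change frequency; tune the formula as needed.
    change_rate = area.changed_builds / total_builds
    return area.past_failures * change_rate

# Example data: hypothetical areas of a product under test.
areas = [
    Area("checkout", past_failures=12, changed_builds=8),
    Area("search",   past_failures=3,  changed_builds=9),
    Area("profile",  past_failures=1,  changed_builds=2),
]

# Test the highest-risk areas first.
for a in sorted(areas, key=lambda a: risk_score(a, total_builds=10), reverse=True):
    print(a.name, round(risk_score(a, 10), 2))
```

This is only a starting point: the weights, and whether build frequency or failure history matters more, should come from your own defect history.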