As with all sentences about consciousness, this is both controversial and debatable. I personally find the claim that consciousness involves self-awareness somewhat self-evident. Everything below still follows if the claim is weakened to say that some aspects of consciousness involve self-awareness.
The link between consciousness and self-awareness can be found in Hofstadter's work on strange loops.
If this strange loop operates over a 'self', and this 'self' exists in any sense, then it must exist as a self-model.
If the predictive processing model of the mind is correct, then the natural place to look for self-models is where action meets input. This is the point where a model of the world is most efficient if it encodes any repeatable transformations of the input data caused by the agent's own actions.
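As a toy illustration of that last point (everything here is invented for the sketch, not drawn from any particular implementation), consider an agent whose action applies a fixed but unknown offset to its scalar observation. By minimising prediction error, the agent ends up encoding exactly that repeatable action-caused transformation, i.e. a tiny self-model:

```python
# Toy sketch: learning the repeatable effect of one's own action
# on sensory input by minimising prediction error.
# TRUE_EFFECT is hidden from the model; it only sees observations.

TRUE_EFFECT = 3.0      # the real (unknown) effect of acting
learned_effect = 0.0   # the agent's self-model of that effect
lr = 0.1               # learning rate

obs = 0.0
errors = []
for step in range(200):
    prediction = obs + learned_effect   # predict next input, given the action
    obs = obs + TRUE_EFFECT             # the world applies the real effect
    error = obs - prediction            # prediction error
    learned_effect += lr * error        # adjust the model to reduce error
    errors.append(abs(error))

print(round(learned_effect, 3))  # converges to 3.0
```

The point is only that the action's repeatable signature on the input is the cheapest thing for an error-minimising model to absorb, which is why a self-model plausibly forms at the action/input boundary.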
There are several different ways in which the term active inference is used in the literature:
PredNet (Lotter, Kreiman, Cox) takes ideas from predictive coding and implements them in a deep net that predicts the next image in a video sequence. This works well for generative tasks where learning occurs over large data sets but does not require any actuators on the agent's part.
It is not clear whether simply minimising prediction error is enough to drive behaviour in an embedded system. Rewarding only the lowest prediction error could lead to dark-room behaviour, where the agent finds a boring, unchanging spot in the environment and stays there.
A promising direction for an alternative intrinsic reward is empowerment. Empowerment is an information-theoretic measure of the coupling between an agent's inputs (sensors) and outputs (actuators) [Klyubin et al. 2005]. An agent maximising empowerment seeks states from which its actions have the greatest possible influence over its future sensor readings: formally, the channel capacity from actuators, through the environment, back to sensors.
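A minimal sketch of one-step empowerment, under the simplifying assumption of a deterministic world and a toy 1-D state space (the `step` function and action set are invented for illustration): with deterministic transitions and freely chosen actions, the actuator-to-sensor channel capacity reduces to the log of the number of distinct reachable next states.

```python
from math import log2

def empowerment(state, transition, actions):
    """One-step empowerment: channel capacity from actions to next states.
    For deterministic transitions this is log2(#distinct next states)."""
    outcomes = {transition(state, a) for a in actions}
    return log2(len(outcomes))

# Toy 1-D world: positions 0..4; the agent can move left, stay, or move right.
def step(pos, action):
    return max(0, min(4, pos + action))

actions = (-1, 0, +1)

print(empowerment(2, step, actions))  # middle: 3 distinct outcomes, log2(3)
print(empowerment(0, step, actions))  # corner: moving left is blocked, log2(2)
```

Note how this counters the dark-room problem: a cramped corner where actions change nothing has empowerment approaching zero, so an empowerment-maximising agent is pushed toward states where its actions matter.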
Other possibilities include rewarding the rate of change of error minimisation (i.e. learning progress) and introducing homeostatic goals into the system. Some combination of these approaches may be required.