stormvogel.model¶
Contains the python model representations and their APIs.
Attributes¶
- Number
- DEPRECATED_NAMES
- Value
- EmptyAction
- ChoiceShorthand
Classes¶
- Interval: represents an interval value for interval models
- ModelType: the type of the model
- Observation: represents an observation of a state (for POMDPs and HMMs)
- Action: represents an action, e.g., in MDPs
- State: represents a state in a Model
- Branches: represents a distribution over states
- Choices: represents a choice, which maps actions to branches
- RewardModel: represents a state-exit reward model
- Model: represents a model
Functions¶
- value_to_string: Convert a Value to a string.
- choice_from_shorthand: Get a Choices object from a ChoiceShorthand. Use for all choices in DTMCs and for empty actions in MDPs.
- Creates a DTMC.
- Creates an MDP.
- Creates a CTMC.
- Creates a POMDP.
- Creates an HMM.
- Creates an MA.
- More general model creation function.
Module Contents¶
- stormvogel.model.Number¶
- stormvogel.model.DEPRECATED_NAMES = [('Choice', 'Choices'), ('Branch', 'Branches')]¶
- stormvogel.model.__getattr__(name)¶
- class stormvogel.model.Interval(bottom: Number, top: Number)¶
Represents an interval value for interval models.
- Args:
    bottom: the bottom (left) element of the interval
    top: the top (right) element of the interval
- bottom: Number¶
- top: Number¶
- __getitem__(idx)¶
- __lt__(other)¶
- __str__()¶
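The interval value above can be sketched in plain Python. This is an illustrative stand-in, not stormvogel's implementation; in particular, the indexing behaviour of `__getitem__` (index 0 for the bottom bound, index 1 for the top bound) is an assumption.

```python
from dataclasses import dataclass

Number = float  # simplified stand-in for stormvogel.model.Number

@dataclass
class Interval:
    """Sketch of an interval value [bottom, top] for interval models."""
    bottom: Number
    top: Number

    def __getitem__(self, idx):
        # Assumption: index 0 yields the bottom bound, index 1 the top bound.
        return (self.bottom, self.top)[idx]

    def __str__(self):
        return f"[{self.bottom}, {self.top}]"

i = Interval(0.1, 0.3)
print(i[0], i[1], i)
```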
- stormvogel.model.Value¶
- stormvogel.model.value_to_string(n: Value, use_fractions: bool = True, round_digits: int = 4, denom_limit: int = 1000) str¶
Convert a Value to a string.
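The `use_fractions` and `denom_limit` parameters suggest a conversion along the following lines, shown here as a standard-library sketch (an illustration of what the signature implies, not the actual implementation):

```python
from fractions import Fraction

def value_to_string(n, use_fractions=True, round_digits=4, denom_limit=1000):
    # Sketch: render a numeric value either as a fraction whose denominator
    # is capped at denom_limit, or as a rounded decimal.
    if use_fractions:
        return str(Fraction(n).limit_denominator(denom_limit))
    return str(round(n, round_digits))

print(value_to_string(1 / 3))                       # fraction form
print(value_to_string(1 / 3, use_fractions=False))  # rounded decimal form
```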
- class stormvogel.model.ModelType(*args, **kwds)¶
Bases: enum.Enum
The type of the model.
- DTMC = 1¶
- MDP = 2¶
- CTMC = 3¶
- POMDP = 4¶
- MA = 5¶
- HMM = 6¶
- class stormvogel.model.Observation¶
Represents an observation of a state (for POMDPs and HMMs)
- Args:
observation: the observation of a state as an integer
- observation: int¶
- valuations: dict[str, int | float | bool] | None = None¶
- format()¶
- __str__()¶
- class stormvogel.model.Action¶
- Represents an action, e.g., in MDPs.
Note that this action object is completely independent of its corresponding branch. Their relation is managed by Choices. Two actions with the same labels are considered equal.
- Args:
label: The label of this action. Corresponds to Storm label.
- label: str | None¶
- __post_init__()¶
- __lt__(other)¶
- __str__()¶
- stormvogel.model.EmptyAction¶
- class stormvogel.model.State(labels: list[str], valuations: dict[str, int | float | bool], id: int, model, observation: Observation | list[tuple[ValueType, Observation]] | None = None, name: str | None = None)¶
Represents a state in a Model.
- Args:
labels: The labels of this state. Corresponds to Storm labels.
valuations: The valuations of this state. Corresponds to Storm valuations/features.
id: The id of this state.
model: The model this state belongs to.
observation: the observation of this state in case the model is a POMDP or HMM.
name: the name of this state (unique identifier in case ids change).
- labels: list[str]¶
- valuations: dict[str, int | float | bool]¶
- id: int¶
- observation: Observation | list[tuple[State.ValueType, Observation]] | None¶
- name: str¶
- add_label(label: str)¶
adds a new label to the state
- set_observation(observation: Observation | list[tuple[ValueType, Observation]]) Observation | list[tuple[ValueType, Observation]]¶
sets the observation for this state
- get_observation() Observation | list[tuple[ValueType, Observation]]¶
gets the observation
- has_choices() bool¶
- nr_choices() int¶
The number of choices in this state.
- add_valuation(variable: str, value: int | bool | float)¶
Adds a valuation to the state.
- get_valuation(variable: str) int | bool | float¶
- get_outgoing_transitions(action: Action | None = None) list[tuple[ValueType, State[ValueType]]] | None¶
gets the outgoing transitions of this state (after a specific action)
- is_absorbing() bool¶
returns whether this state is absorbing, i.e., whether it has no nonzero transition going to another state
- is_initial()¶
Returns whether this state is initial.
- __str__()¶
- __eq__(other)¶
- __lt__(other)¶
- class stormvogel.model.Branches(*args)¶
Represents a distribution over states.
- Args:
    branch: The branch as a list of tuples representing the transitions. The first element of each tuple is the probability value and the second element is the target state.
- get_successors() set[int]¶
- sort_states()¶
sorts the branch list by states
- __str__()¶
- __eq__(other)¶
- __add__(other)¶
- sum_probabilities() Number¶
- __iter__()¶
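The branch structure described above, a list of (probability, target) pairs, can be mimicked in plain Python. This is a hypothetical stand-in that mirrors two of the methods listed, with target states represented by their ids:

```python
# A branch as a list of (probability, target_state_id) pairs.
branch = [(0.25, 1), (0.75, 2)]

def sum_probabilities(branch):
    # Mirrors Branches.sum_probabilities: the total probability mass.
    return sum(p for p, _ in branch)

def get_successors(branch):
    # Mirrors Branches.get_successors: the set of reachable state ids.
    return {target for _, target in branch}

print(sum_probabilities(branch))
print(get_successors(branch))
```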
- class stormvogel.model.Choices(choice: dict[Action, Branches[ValueType]])¶
- Represents a choice, which maps actions to branches.
Note that an EmptyAction may be used if we want a non-action choice. Note that a single Choices might correspond to multiple ‘arrows’.
- Args:
choice: The choice dictionary. For each available action, we have a branch containing the transitions.
- __str__()¶
- has_empty_action() bool¶
- __eq__(other)¶
- is_stochastic(epsilon: Number) bool¶
returns whether the probabilities in the branches sum to 1
- has_zero_transition() bool¶
- __getitem__(item)¶
- __iter__()¶
- __len__() int¶
- stormvogel.model.ChoiceShorthand¶
- stormvogel.model.choice_from_shorthand(shorthand: ChoiceShorthand, model: Model[ValueType]) Choices[choice_from_shorthand.ValueType]¶
Get a Choice object from a ChoiceShorthand. Use for all choices in DTMCs and for empty actions in MDPs.
There are two possible ways to define a ChoiceShorthand:
- using only the probability and the target state (implies the default action when in an MDP)
- using only the action and the target state (implies probability = 1)
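The disambiguation between those two shorthand forms can be sketched in plain Python. This is hypothetical: the real Choices and Action types are replaced here by dicts and strings, and `None` stands in for EmptyAction:

```python
def choice_from_shorthand(shorthand):
    # Sketch of the two shorthand forms:
    #  - [(probability, target), ...] -> the empty (default) action with that branch
    #  - (action_label, target)       -> that action with implied probability 1
    EMPTY_ACTION = None  # stand-in for stormvogel.model.EmptyAction
    if isinstance(shorthand, list):
        # Probability/target pairs: implies the empty action.
        return {EMPTY_ACTION: list(shorthand)}
    action, target = shorthand
    # Action/target pair: implies probability 1.
    return {action: [(1, target)]}

print(choice_from_shorthand([(0.5, "s1"), (0.5, "s2")]))
print(choice_from_shorthand(("north", "s1")))
```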
- class stormvogel.model.RewardModel(name: str, model: Model, rewards: dict[Tuple[int, Action], ValueType])¶
Represents a state-exit reward model.
- Args:
    name: Name of the reward model.
    model: The model this reward model belongs to.
    rewards: The rewards; the keys are state-action pairs.
- name: str¶
- rewards: dict[Tuple[int, Action], RewardModel.ValueType]¶
Rewards dict. Hashed by state id and Action. The function update_rewards can be called to update rewards. After this, rewards will correspond to intermediate_rewards. Note that in models without actions, EmptyAction will be used here.
- set_from_rewards_vector(vector: list[ValueType]) None¶
Set the rewards of this model according to a (stormpy) rewards vector.
- get_state_reward(state: State) ValueType | None¶
Gets the reward at said state. Returns None if no reward is present.
- get_state_action_reward(state: State, action: Action) ValueType | None¶
Gets the reward at said state-action pair. Returns None if no reward was found.
- set_state_reward(state: State, value: ValueType)¶
Sets the reward at said state. If the model has actions, the empty action is used.
- set_state_action_reward(state: State, action: Action, value: ValueType)¶
Sets the reward at said state-action pair (in case of models with actions). If you disable auto_update_rewards, you will need to call update_intermediate_to
- get_reward_vector() list[ValueType]¶
Return the rewards in a (stormpy) vector format.
- set_unset_rewards(value: ValueType)¶
Fills up rewards that were not set yet with the specified value. Use this if converting (to stormpy) doesn’t work because the reward vector does not have the expected length.
- __lt__(other) bool¶
- __eq__(other) bool¶
- __iter__()¶
- class stormvogel.model.Model(model_type: ModelType, create_initial_state: bool = True)¶
Represents a model.
- Args:
type: The model type.
states: The states of the model. The keys are the states' ids.
choices: The choices of this model. The keys are the state ids.
actions: The actions of the model, if this is a model that supports actions.
rewards: The reward models of this model.
exit_rates: The exit rates of the model, optional if this model supports rates.
markovian_states: The list of Markovian states in the case of an MA.
- observations: list[Observation] | None¶
- rewards: list[RewardModel]¶
- exit_rates: dict[int, Model.ValueType] | None¶
- parametric: bool | None¶
- interval: bool | None¶
- used_names¶
- id_counter = 0¶
- observation_id = 0¶
- summary()¶
Give a short summary of the model.
- get_actions() set[Action] | None¶
Return the actions of the model. Returns None if actions are not supported.
- supports_actions() bool¶
Returns whether this model supports actions.
- supports_rates() bool¶
Returns whether this model supports rates.
- supports_observations() bool¶
Returns whether this model supports observations.
- iterate_transitions() list[tuple[ValueType, State]]¶
iterates through all transitions in all choices of the model
- is_interval_model() bool¶
Returns whether this model is an interval model, i.e., contains interval values.
- is_parametric() bool¶
Returns whether this model contains parametric transition values
- is_stochastic(epsilon: Number = 1e-06) bool | None¶
For discrete models: Checks if all sums of outgoing transition probabilities for all states equal 1, with at most epsilon rounding error. For continuous models: Checks if all sums of outgoing rates sum to 0
- normalize()¶
Normalizes a model (for states where outgoing transition probabilities don’t sum to 1, we divide each probability by the sum)
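The normalization rule stated here, divide each outgoing probability by the row sum, looks like this in isolation. This is a sketch on a plain list of (probability, target) pairs, not the Model API:

```python
def normalize_branch(branch):
    # Divide each probability by the total so the branch sums to 1.
    total = sum(p for p, _ in branch)
    return [(p / total, target) for p, target in branch]

# A branch whose probabilities sum to 8 instead of 1:
branch = [(2.0, "s0"), (6.0, "s1")]
print(normalize_branch(branch))
```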
- get_sub_model(states: list[State], normalize: bool = True) Model¶
Returns a submodel of the model based on a collection of states. The states in the collection are the states that stay in the model.
- parameter_valuation(values: dict[str, Number]) Model¶
evaluates all parametric transitions with the given values and returns the induced model
- get_choice_id(state: State[ValueType], action: Action) int | None¶
We calculate the appropriate choice id for a given state and action. This choice id corresponds to a row of the underlying matrix.
- get_state_action_pair(id: int) tuple[State[ValueType], Action] | None¶
Given a state-action id, we return the corresponding state-action pair. Each state-action pair corresponds to an id (which also corresponds to a row of the matrix).
- add_self_loops()¶
adds self loops to all states that do not have an outgoing transition
- add_valuation_at_remaining_states(variables: list[str] | None = None, value: int | bool | float = 0)¶
sets (dummy) value to variables in all states where they don’t have a value yet
- unassigned_variables() list[tuple[State[ValueType], str]]¶
we return a list of tuples of state variable pairs that are unassigned
- all_states_outgoing_transition() bool¶
checks if all states have a choice
- all_non_init_states_incoming_transition() bool¶
checks if all states except the initial state have an incoming transition
- has_zero_transition() bool¶
checks if the model has transitions with probability zero
- add_markovian_state(markovian_state: State)¶
adds a state to the Markovian states (in case of Markov automata)
- set_choice(s: State, choices: Choices[ValueType] | ChoiceShorthand) None¶
Set the choice for a state.
- add_choice(s: State, choices: Choices[ValueType] | ChoiceShorthand) None¶
Add new choices from a state to the model. If no choice currently exists, the result will be the same as set_choice.
- get_successor_states(state_or_id: State[ValueType] | int) set[int]¶
Returns the set of successors of state_or_id.
- get_choices(state_or_id: State[ValueType] | int) Choices[ValueType]¶
Get the choices at state s. Throws a KeyError if not present.
- get_branches(state_or_id: State | int) Branches[ValueType]¶
Get the branch at state s. Only intended for empty choices, otherwise a RuntimeError is thrown.
- get_branch(state_or_id: State[ValueType] | int) Branches[ValueType]¶
Get the branch at state s. Only intended for empty choices, otherwise a RuntimeError is thrown.
- reassign_ids()¶
Reassigns the ids of states, choices and rates to be in order again. Mainly useful to keep consistent with storm.
- remove_state(state: State, normalize: bool = True, reassign_ids: bool = False)¶
Properly removes a state, it can optionally normalize the model and reassign ids automatically.
- remove_transitions_between_states(state0: State, state1: State, normalize: bool = True)¶
Remove the transition(s) that start in state0 and go to state1. Only works on models that don’t support actions.
- new_observation(valuations: dict[str, int | float | bool] | None = None, observation_id: int | None = None) Observation¶
- get_observation(observation_id: int) Observation | None¶
- observation(observation_id: int | None = None, valuations: dict[str, int | float | bool] | None = None) Observation¶
New observation or get observation if it exists.
- new_state(labels: list[str] | str | None = None, valuations: dict[str, int | bool | float] | None = None, observation: Observation | list[tuple[ValueType, Observation]] | None = None, name: str | None = None) State[ValueType]¶
Creates a new state and returns it.
- has_only_number_values() bool¶
Checks whether all transition values, observation probabilities, exit rates, markovian states in this model are numbers.
- make_observations_deterministic(reassign_ids: bool = False)¶
In case of POMDPs or HMMs, makes the observations deterministic by splitting states with multiple observations into multiple states with single observations.
- get_ordered_labels() list[list[str]]¶
Get all the labels of this model, ordered by id. IMPORTANT: If a state has no label, then a value ‘’ is inserted!
- get_labels() set[str]¶
Get all labels in states of this Model.
- get_variables() set[str]¶
gets the set of all variables present in this model (corresponding to valuations)
- get_default_rewards() RewardModel¶
Gets the default reward model, throws a RuntimeError if there is none.
- get_rewards(name: str) RewardModel¶
Gets the reward model with the specified name. Throws a RuntimeError if said model does not exist.
- get_parameters() set[str]¶
Returns the set of parameters of this model
- new_reward_model(name: str) RewardModel¶
Creates a reward model with the specified name, adds it and returns it.
- property nr_states: int¶
Returns the number of states in this model. Note that not all states need to be reachable.
- property nr_choices: int¶
Returns the number of choices in the model (summed over all states).
- to_dot() str¶
Generates a dot representation of this model.
- __str__() str¶
- __eq__(other) bool¶
- __getitem__(state_id: int)¶
- __iter__()¶