NotQuiteParadise2

Debug

class Debugger(game: Game)[source]

Bases: object

__init__(game: Game)[source]
disable_profiling(dump_data: bool = False)[source]

Turn off current profiling. Dump data to file if required.

draw(surface: pygame.Surface)[source]

Draw debug info

enable_profiling(duration: int = 999)[source]

Enable profiling. Create profiler if one doesn't exist.
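
A minimal sketch of the profiling workflow, assuming an existing debugger instance; the duration and dump values shown are illustrative uses of the documented parameters:

    debugger.enable_profiling(duration=600)     # profile the next 600 frames
    # ... run the code you want to measure ...
    debugger.disable_profiling(dump_data=True)  # turn off profiling and dump the data to file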

initialise_logging()[source]

Initialise logging.

kill_logging()[source]

Kill logging resources

kill_profiler()[source]

Kill profiling resource

static performance_test(method_descs: List[str], old_methods: List[Tuple[Union[str, Callable], str]], new_methods: List[Tuple[Union[str, Callable], str]], num_runs: int = 1000, repeats: int = 3) → str[source]

Run performance testing on a collection of methods/functions. Returns a formatted string detailing the performance of the old and new methods and the % change between them.

method_descs are used as descriptions only. old_methods/new_methods each expect a list of tuples of the form (method_to_test, setup). Setup can be an empty string but is usually an import. Indexes in each list must match, i.e. method_descs[0] is the description of the methods in old_methods[0] and new_methods[0].

Outputs as "Access Trait: 0.00123 -> 0.00036(71.00033%)".

Example usage:

    method_descs = ["Set Var", "Access Skill"]
    old_methods = [("x = 1", ""), ("library.get_skill_data('lunge')", "")]
    new_methods = [("x = 'one'", ""), ("library2.SKILLS.get('lunge')", "from scripts.engine import library2")]
    print(performance_test(method_descs, old_methods, new_methods))

print_values_to_console()[source]

Print the debugger's stats.

toggle_debug_info()[source]

Toggle whether the debug info is shown

toggle_dev_console_visibility()[source]
update(delta_time: float)[source]
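
A hedged sketch of how the Debugger might be driven from a game loop; the game, delta_time, and surface names are assumptions standing in for whatever the calling code already holds:

    debugger = Debugger(game)
    debugger.toggle_debug_info()   # show the debug overlay

    # each frame
    debugger.update(delta_time)    # refresh the debugger's stats
    debugger.draw(surface)         # draw the debug info to the pygame surface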
class Timer(label=None)[source]

Bases: object

Context manager to document program time

__init__(label=None)[source]
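
A minimal usage sketch, assuming the context manager records the elapsed time of the wrapped block under the given label:

    with Timer("load map"):
        load_map()  # hypothetical call being timed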