Debug

disable_profiling(dump_data: bool = False)[source]

Turn off the current profiling session. Dumps the profiling data to file if dump_data is True.

enable_profiling(duration: int = 999)[source]

Enable profiling. Creates a profiler if one doesn't exist.
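
A minimal sketch of a profiling session, assuming the module is importable as debug from scripts.engine (the import path and run_game_loop are assumptions, not part of this API):

    from scripts.engine import debug  # assumed import path

    debug.enable_profiling(duration=999)     # creates a profiler if one doesn't exist
    run_game_loop()                          # hypothetical code to profile
    debug.disable_profiling(dump_data=True)  # stop profiling and dump the data to file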

get_visible_values() → List[str][source]

Get all visible values from the debugger.
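
A brief sketch of inspecting the visible values (the import path is an assumption):

    from scripts.engine import debug  # assumed import path

    for value in debug.get_visible_values():
        print(value)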

initialise_logging()[source]

Initialise logging.

is_fps_visible() → bool[source]

Returns True if FPS stats are visible.

is_logging() → bool[source]

Returns True if logging is active.

is_profiling() → bool[source]

Returns True if profiling is active.

kill_logging()[source]

Kill logging resources.
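
A sketch of the logging lifecycle, from initialisation to shutdown (the import path is an assumption):

    from scripts.engine import debug  # assumed import path

    debug.initialise_logging()
    assert debug.is_logging()
    # ... run the game ...
    debug.kill_logging()  # release logging resources on shutdown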

kill_profiler()[source]

Kill the profiling resource.

performance_test(method_descs: List[str], old_methods: List[Tuple[Union[str, Callable], str]], new_methods: List[Tuple[Union[str, Callable], str]], num_runs: int = 1000, repeats: int = 3) → str[source]

Run performance testing on a collection of methods/functions. Returns a formatted string detailing the performance of the old and new methods and the % change between them.

method_descs are used as descriptions only. old_methods/new_methods expect a list of tuples of (method_to_test, setup). Setup can be an empty string but is usually an import. Indices in each list must match, i.e. method_descs[0] is the description for the methods in old_methods[0] and new_methods[0].

Outputs as "Access Trait: 0.00123 -> 0.00036(71.00033%)".

Example usage:

    method_descs = ["Set Var", "Access Skill"]
    old_methods = [("x = 1", ""), ("library.get_skill_data('lunge')", "")]
    new_methods = [("x = 'one'", ""), ("library2.SKILLS.get('lunge')", "from scripts.engine import library2")]
    print(performance_test(method_descs, old_methods, new_methods))

print_values_to_console()[source]

Print the debugger's stats.

set_fps_visibility(is_visible: bool = True)[source]

Set whether the FPS stats are visible.
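
A short sketch that turns FPS stats on if they are hidden (the import path is an assumption):

    from scripts.engine import debug  # assumed import path

    if not debug.is_fps_visible():
        debug.set_fps_visibility(True)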