Debug

- disable_profiling(dump_data: bool = False)
  Turn off current profiling. Dump the data to file if requested.

- enable_profiling(duration: int = 999)
  Enable profiling. Create a profiler if one doesn't exist.
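
  Together, these two calls bracket the code being profiled. Below is a minimal usage sketch; the import path and the run_game_loop() call are assumptions for illustration only, not part of this API.

      # Hypothetical usage sketch; the import path and run_game_loop() are assumptions.
      from scripts.engine import debug

      debug.enable_profiling(duration=999)     # start profiling, creating a profiler if one doesn't exist
      run_game_loop()                          # placeholder for the code being profiled
      debug.disable_profiling(dump_data=True)  # stop profiling and dump the collected data to file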

- performance_test(method_descs: List[str], old_methods: List[Tuple[Union[str, Callable], str]], new_methods: List[Tuple[Union[str, Callable], str]], num_runs: int = 1000, repeats: int = 3) → str
  Run performance testing on a collection of methods/functions. Returns a formatted string detailing the performance of old, new and the % change between them.

  method_descs are used as descriptions only. old_methods/new_methods expect a list of tuples of (method_to_test, setup). Setup can be an empty string but is usually an import. Indices in each list must match, i.e. method_descs[0] describes the methods in old_methods[0] and new_methods[0].

  Output is formatted as "Access Trait: 0.00123 -> 0.00036 (71.00033%)".
  Example usage:

      method_descs = ["Set Var", "Access Skill"]
      old_methods = [("x = 1", ""), ("library.get_skill_data('lunge')", "")]
      new_methods = [("x = 'one'", ""), ("library2.SKILLS.get('lunge')", "from scripts.engine import library2")]
      print(performance_test(method_descs, old_methods, new_methods))
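
  The comparison amounts to timing each old/new pair over num_runs iterations, repeated repeats times, and reporting the percentage change. The standalone sketch below illustrates the same idea using timeit only; it is not the actual implementation, and the compare helper is hypothetical.

      # Hypothetical sketch of the same kind of measurement; not the real performance_test.
      import timeit

      def compare(desc, old, new, num_runs=1000, repeats=3):
          # old and new are (statement, setup) tuples, as performance_test expects.
          old_time = min(timeit.repeat(old[0], setup=old[1] or "pass", number=num_runs, repeat=repeats))
          new_time = min(timeit.repeat(new[0], setup=new[1] or "pass", number=num_runs, repeat=repeats))
          change = (old_time - new_time) / old_time * 100
          return f"{desc}: {old_time:.5f} -> {new_time:.5f} ({change:.5f}%)"

      print(compare("Set Var", ("x = 1", ""), ("x = 'one'", "")))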