Pydantic#
Overview#
The Pydantic Generator produces Pydantic model classes from a LinkML model, with optional support for user-supplied Jinja2 templates to generate alternative classes.
Example#
Given a definition of a Person class:
Person:
  is_a: NamedThing
  description: >-
    A person (alive, dead, undead, or fictional).
  class_uri: schema:Person
  mixins:
    - HasAliases
  slots:
    - primary_email
    - birth_date
    - age_in_years
    - gender
    - current_address
    - has_employment_history
    - has_familial_relationships
    - has_medical_history
(some details omitted for brevity, including slot definitions and parent classes)
The generated Python looks like this:
class Person(NamedThing):
    """
    A person (alive, dead, undead, or fictional).
    """
    primary_email: Optional[str] = Field(None)
    birth_date: Optional[str] = Field(None)
    age_in_years: Optional[int] = Field(None, ge=0, le=999)
    gender: Optional[GenderType] = Field(None)
    current_address: Optional[Address] = Field(None, description="""The address at which a person currently lives""")
    has_employment_history: Optional[List[EmploymentEvent]] = Field(None)
    has_familial_relationships: Optional[List[FamilialRelationship]] = Field(None)
    has_medical_history: Optional[List[MedicalEvent]] = Field(None)
    aliases: Optional[List[str]] = Field(None)
    id: Optional[str] = Field(None)
    name: Optional[str] = Field(None)
    description: Optional[str] = Field(None)
    image: Optional[str] = Field(None)
Command Line#
gen-pydantic#
Generate pydantic classes to represent a LinkML model
gen-pydantic [OPTIONS] YAMLFILE
Options
- -V, --version#
Show the version and exit.
- --meta <meta>#
How to include linkml schema metadata in generated pydantic classes. See the docs for MetadataMode for a full description of the choices. The default (auto) is to include all metadata that can't otherwise be represented.
- Options:
MetadataMode.FULL | MetadataMode.EXCEPT_CHILDREN | MetadataMode.AUTO | MetadataMode.NONE
- --black#
Format generated models with black (must be present in the environment)
- --extra-fields <extra_fields>#
How to handle extra fields in BaseModel.
- Options:
allow | ignore | forbid
- --array-representations <array_representations>#
List of array representations to accept for array slots. Default is list of lists.
- Options:
list | numpydantic
- --template-dir <template_dir>#
Optional jinja2 template directory to use for class generation.
Pass a directory containing templates with the same name as any of the default PydanticTemplateModel templates to override them. The given directory will be searched for matching templates, with the default templates used as a fallback if an override is not found.
Available templates to override:
- attribute.py.jinja
- base_model.py.jinja
- class.py.jinja
- conditional_import.py.jinja
- enum.py.jinja
- imports.py.jinja
- module.py.jinja
- validator.py.jinja
- -f, --format <format>#
Output format
- Default:
'pydantic'
- Options:
pydantic
- --metadata, --no-metadata#
Include metadata in output
- Default:
True
- --useuris, --metauris#
Use class and slot URIs over model uris
- Default:
True
- -im, --importmap <importmap>#
Import mapping file
- --log_level <log_level>#
Logging level
- Default:
'WARNING'
- Options:
CRITICAL | ERROR | WARNING | INFO | DEBUG
- -v, --verbose#
Verbosity. Takes precedence over --log_level.
- --mergeimports, --no-mergeimports#
Merge imports into source file (default=mergeimports)
- --stacktrace, --no-stacktrace#
Print a stack trace when an error occurs
- Default:
False
Arguments
- YAMLFILE#
Required argument
Generator#
- class linkml.generators.pydanticgen.PydanticGenerator(schema: str | ~typing.TextIO | ~linkml_runtime.linkml_model.meta.SchemaDefinition | Generator | ~pathlib.Path, schemaview: ~linkml_runtime.utils.schemaview.SchemaView = None, format: str | None = None, metadata: bool = True, useuris: bool | None = None, log_level: int | None = 30, mergeimports: bool | None = True, source_file_date: str | None = None, source_file_size: int | None = None, logger: ~logging.Logger | None = None, verbose: bool | None = None, output: str | None = None, namespaces: ~linkml_runtime.utils.namespaces.Namespaces | None = None, directory_output: bool = False, base_dir: str = None, metamodel_name_map: ~typing.Dict[str, str] = None, importmap: str | ~typing.Mapping[str, str] | None = None, emit_prefixes: ~typing.Set[str] = <factory>, metamodel: ~linkml.utils.schemaloader.SchemaLoader = None, stacktrace: bool = False, include: str | ~pathlib.Path | ~linkml_runtime.linkml_model.meta.SchemaDefinition | None = None, template_file: str = None, package: str = 'example', array_representations: ~typing.List[~linkml.generators.pydanticgen.array.ArrayRepresentation] = <factory>, black: bool = False, template_dir: str | ~pathlib.Path | None = None, extra_fields: ~typing.Literal['allow', 'forbid', 'ignore'] = 'forbid', gen_mixin_inheritance: bool = True, injected_classes: ~typing.List[str | ~typing.Type] | None = None, injected_fields: ~typing.List[str] | None = None, imports: ~typing.List[~linkml.generators.pydanticgen.template.Import] | ~linkml.generators.pydanticgen.template.Imports | None = None, sort_imports: bool = True, metadata_mode: ~linkml.generators.pydanticgen.pydanticgen.MetadataMode | str | None = MetadataMode.AUTO, split: bool = False, split_pattern: str = '.{{ schema.name }}', split_context: dict | None = None, split_mode: ~linkml.generators.pydanticgen.pydanticgen.SplitMode = SplitMode.AUTO, gen_classvars: bool = True, gen_slots: bool = True, genmeta: bool = False, emit_metadata: bool 
= True, _predefined_slot_values: ~typing.Dict[str, ~typing.Dict[str, str]] | None = None, _class_bases: ~typing.Dict[str, ~typing.List[str]] | None = None, **_kwargs)[source]#
Generates Pydantic-compliant classes from a schema
This is an alternative to the dataclasses-based PythonGenerator.
Lifecycle methods (see LifecycleMixin) supported:
- before_generate_enums()
- before_generate_classes()
- before_generate_class()
- after_generate_class()
- after_generate_classes()
- before_generate_slots()
- before_generate_slot()
- after_generate_slot()
- after_generate_slots()
- before_render_template()
- after_render_template()
Slot generation is nested within class generation, since the pydantic generator currently doesn't create an independent representation of slots aside from their materialization as class fields. Accordingly, the before_generate_slots() and after_generate_slots() methods are called before and after each class's slot generation, rather than around all slot generation.
- SNAKE_CASE: ClassVar[str] = '(((?<!^)(?<!\\.))(?=[A-Z][a-z]))|([^\\w\\.]+)'#
Substitute CamelCase and non-word characters with _
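The effect of this pattern can be sketched with the standard library alone; module_name below is a hypothetical helper showing how the generator applies the regex when building module import names, not linkml API:

```python
import re

# The SNAKE_CASE pattern quoted above: a zero-width match at each internal
# CamelCase boundary (not at the start of the string and not after a '.'),
# or a run of non-word characters other than '.'.
SNAKE_CASE = r"(((?<!^)(?<!\.))(?=[A-Z][a-z]))|([^\w\.]+)"

def module_name(name: str) -> str:
    # Substitute matches with '_' and lowercase the result.
    return re.sub(SNAKE_CASE, "_", name).lower()

print(module_name(".ExampleSchema"))           # .example_schema
print(module_name("...ExampleSchema.v1_2_3"))  # ...example_schema.v1_2_3
```

Dots are deliberately excluded from substitution so that relative-import prefixes like `..` survive the conversion.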
- property class_bases: Dict[str, List[str]]#
Generate the inheritance list for each class from is_a plus mixins.
- generate_collection_key(slot_ranges: List[str], slot_def: SlotDefinition, class_def: ClassDefinition) str | None [source]#
Find the python range value (str, int, etc) for the identifier slot of a class used as a slot range.
If a pyrange value matches a class name, the range of that class's identifier slot will be returned. If more than one match is found and the matches don't agree, an exception will be raised.
- Parameters:
slot_ranges – list of python range values
- generate_module_import(schema: SchemaDefinition, context: dict | None = None) str [source]#
Generate the module string for importing from python modules generated from imported schemas when in split mode.
Uses the split_pattern as a jinja template rendered with the SchemaDefinition and any passed context. Applies the SNAKE_CASE regex to substitute matches with _ and ensures lowercase.
- generate_python_range(slot_range, slot_def: SlotDefinition, class_def: ClassDefinition) str [source]#
Generate the python range for a slot range value
- classmethod generate_split(schema: str | Path | SchemaDefinition, output_path: str | Path = PosixPath('.'), split_pattern: str | None = None, split_context: dict | None = None, split_mode: SplitMode = SplitMode.AUTO, **kwargs) List[SplitResult] [source]#
Generate a schema that imports from other schema as a set of python modules that import from one another, rather than generating all imported classes in a single schema.
Uses output_path for the main schema from schema, and then generates any imported schema (from which classes are actually used) to modules whose locations are determined by the module names generated by the split_pattern (see PydanticGenerator.split_pattern).
For example, given an output_path of my_dir/v1_2_3/main.py, a schema main with version v1.2.3 that imports from s2 with version v4.5.6, and a split_pattern of ..{{ schema.version | replace('.', '_') }}.{{ schema.name }}, one would get:
- my_dir/v1_2_3/main.py, as expected
- that imports from ..v4_5_6.s2
- a module at my_dir/v4_5_6/s2.py
__init__.py files are generated for any directories between the generated modules and their highest common directory.
- Parameters:
- schema (str, Path, SchemaDefinition) – Main schema to generate
- output_path (str, Path) – Python .py module to generate the main schema to
- split_pattern (str) – Pattern to use to generate module names, see PydanticGenerator.split_pattern
- split_context (dict) – Additional variables to pass into the jinja context when generating module import names
- Returns:
list[SplitResult]
- generatorname: ClassVar[str] = 'pydanticgen.py'#
Name of the generator. Override with os.path.basename(__file__)
- generatorversion: ClassVar[str] = '0.0.2'#
Version of the generator. Consider deprecating and instead use overall linkml version
- get_array_representations_range(slot: SlotDefinition, range: str) List[SlotResult] [source]#
Generate the python range for array representations
- imports: List[Import] | Imports | None = None#
Additional imports to inject into generated module.
Examples:
from linkml.generators.pydanticgen.template import (
    ConditionalImport,
    ObjectImport,
    Import,
    Imports
)

imports = (Imports() +
    Import(module='sys') +
    Import(module='numpy', alias='np') +
    Import(module='pathlib', objects=[
        ObjectImport(name="Path"),
        ObjectImport(name="PurePath", alias="RenamedPurePath")
    ]) +
    ConditionalImport(
        module="typing",
        objects=[ObjectImport(name="Literal")],
        condition="sys.version_info >= (3, 8)",
        alternative=Import(
            module="typing_extensions",
            objects=[ObjectImport(name="Literal")]
        ),
    ).imports
)
becomes:
import sys
import numpy as np
from pathlib import (
    Path,
    PurePath as RenamedPurePath
)
if sys.version_info >= (3, 8):
    from typing import Literal
else:
    from typing_extensions import Literal
- include_metadata(model: PydanticModule, source: SchemaDefinition) PydanticModule [source]#
- include_metadata(model: PydanticClass, source: ClassDefinition) PydanticClass
- include_metadata(model: PydanticAttribute, source: SlotDefinition) PydanticAttribute
Include metadata from the source schema that is otherwise not represented in the pydantic template models.
Metadata inclusion mode is dependent on metadata_mode - see MetadataMode and PydanticTemplateModel.exclude_from_meta().
- injected_classes: List[str | Type] | None = None#
A list/tuple of classes to inject into the generated module.
Accepts either live classes or strings. Live classes will have their source code extracted with inspect.getsource() - so they need to be standard python classes declared in a source file (i.e. the module they are contained in needs a __file__ attr; see inspect.getsource()).
- injected_fields: List[str] | None = None#
A list/tuple of field strings to inject into the base class.
Examples:
injected_fields = ( 'object_id: Optional[str] = Field(None, description="Unique UUID for each object")', )
- metadata_mode: MetadataMode | str | None = 'auto'#
How to include schema metadata in generated pydantic models.
See MetadataMode for mode documentation.
- property predefined_slot_values: Dict[str, Dict[str, str]]#
Dictionary of dictionaries with predefined slot values for each class
- render() PydanticModule [source]#
Render the schema to a PydanticModule model.
- serialize(rendered_module: PydanticModule | None = None) str [source]#
Serialize the schema to a pydantic module as a string
- Parameters:
rendered_module (PydanticModule) – Optional; if the schema was previously rendered with render(), use that, otherwise render() fresh.
- static sort_classes(clist: List[ClassDefinition], imported: List[ClassDefinition] | None = None) List[ClassDefinition] [source]#
Sort classes such that if C is a child of P then C appears after P in the list.
Overridden to also include mixin classes.
TODO: This should move to SchemaView
- sort_imports: bool = True#
Before returning from PydanticGenerator.render(), sort imports with Imports.sort().
Default True, but optional in case import order must be explicitly given, e.g. to avoid circular import errors in complex generator subclasses.
- split: bool = False#
Generate schemas that import other schemas as separate python modules that import from one another, rather than rolling all into a single module (default, False).
- split_context: dict | None = None#
Additional variables to pass into split_pattern when generating imported module names.
Passed in as **kwargs, so e.g. if split_context = {'myval': 1} then one would use it in a template string like {{ myval }}
- split_mode: SplitMode = 'auto'#
How to filter imports from imported schema.
See SplitMode for a description of the options.
- split_pattern: str = '.{{ schema.name }}'#
When splitting generation, imported modules need to be generated separately and placed in a python package and import from each other. Since the location of those imported modules is variable – e.g. one might want to generate schema in multiple packages depending on their version – this pattern is used to generate the module portion of the import statement.
These patterns should generally yield a relative module import, since functions like generate_split() will generate and write files relative to some base file, though this is not a requirement since custom split generation logic is also allowed.
The pattern is a jinja template string that is given the SchemaDefinition of the imported schema in the environment. Additional variables can be passed into the jinja environment with the split_context argument. Further modification is possible by using jinja filters.
After templating, the string is passed through a SNAKE_CASE pattern to replace whitespace and other characters that can't be used in module names.
See also generate_module_import(), which is used to generate the module portion of the import statement (and can be overridden in subclasses).
Examples
For a schema named ExampleSchema with version 1.2.3…
- ".{{ schema.name }}" (the default) becomes: from .example_schema import ClassA, ...
- "...{{ schema.name }}.v{{ schema.version | replace('.', '_') }}" becomes: from ...example_schema.v1_2_3 import ClassA, ...
- template_dir: str | Path | None = None#
Override templates for each PydanticTemplateModel.
Directory with templates that override the default
PydanticTemplateModel.template
for each class. If a matching template is not found in the override directory, the default templates will be used.
Split Generation#
Pydantic models can also be generated in a “split” mode where rather than rolling down all classes into a single file, schemas are kept as their own pydantic modules that import from one another.
The implementation of split mode in the Generator itself still generates a single module, except that it imports classes from other modules rather than including them directly. This is wrapped by PydanticGenerator.generate_split(), which can be used to generate the module files directly.
Templates#
The pydanticgen module has a templating system that allows each part of a schema to be generated independently and customized. See the documentation for the individual classes, but in short - each part of the output pydantic domain has a model with a corresponding template. At render time, each model is recursively rendered.
The PydanticGenerator then serves as a translation layer between the source models from linkml_runtime and the target models in pydanticgen.template, making clear what is needed to generate pydantic code as well as what parts of the linkml metamodel are supported.
Usage example:
Imports:
imports = (Imports() +
    Import(module="sys") +
    Import(module="pydantic", objects=[{"name": "BaseModel"}, {"name": "Field"}])
)
renders to:
import sys
from pydantic import (
BaseModel,
Field
)
Attributes:
attr = PydanticAttribute(
    name="my_field",
    annotations={"python_range": {"value": "str"}},
    title="My Field!",
    description="A Field that is mine!",
    pattern="my_.*",
)
By itself, renders to:
my_field: str = Field(None, title="My Field!", description="""A Field that is mine!""")
Classes:
cls = PydanticClass(
    name="MyClass",
    bases="BaseModel",
    description="A Class I Made!",
    attributes={"my_field": attr},
)
Renders to (along with the validator for the attribute):
class MyClass(BaseModel):
    my_field: str = Field(None, title="My Field!", description="""A Field that is mine!""")

    @validator('my_field', allow_reuse=True)
    def pattern_my_field(cls, v):
        pattern = re.compile(r"my_.*")
        if isinstance(v, list):
            for element in v:
                if not pattern.match(element):
                    raise ValueError(f"Invalid my_field format: {element}")
        elif isinstance(v, str):
            if not pattern.match(v):
                raise ValueError(f"Invalid my_field format: {v}")
        return v
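The pattern check at the core of the generated validator can be exercised on its own with the standard library, without pydantic; the function below is the validator body above lifted into a plain function for illustration (pattern_my_field here is a standalone sketch, not linkml API):

```python
import re

def pattern_my_field(v):
    # Mirrors the generated validator body: a string, or every element
    # of a list of strings, must match the slot's pattern.
    pattern = re.compile(r"my_.*")
    if isinstance(v, list):
        for element in v:
            if not pattern.match(element):
                raise ValueError(f"Invalid my_field format: {element}")
    elif isinstance(v, str):
        if not pattern.match(v):
            raise ValueError(f"Invalid my_field format: {v}")
    return v

print(pattern_my_field("my_value"))        # my_value
print(pattern_my_field(["my_a", "my_b"]))  # ['my_a', 'my_b']
```

Note that re.match anchors at the start of the string, so the pattern only needs to match a prefix unless it ends with an explicit anchor.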
Modules:
module = PydanticModule(imports=imports, classes={cls.name: cls})
Combine all the pieces:
import sys
from pydantic import (
BaseModel,
Field
)
metamodel_version = "None"
version = "None"
class WeakRefShimBaseModel(BaseModel):
    __slots__ = '__weakref__'

class ConfiguredBaseModel(WeakRefShimBaseModel,
                validate_assignment = True,
                validate_all = True,
                underscore_attrs_are_private = True,
                extra = "forbid",
                arbitrary_types_allowed = True,
                use_enum_values = True):
    pass

class MyClass(BaseModel):
    my_field: str = Field(None, title="My Field!", description="""A Field that is mine!""")

    @validator('my_field', allow_reuse=True)
    def pattern_my_field(cls, v):
        pattern = re.compile(r"my_.*")
        if isinstance(v, list):
            for element in v:
                if not pattern.match(element):
                    raise ValueError(f"Invalid my_field format: {element}")
        elif isinstance(v, str):
            if not pattern.match(v):
                raise ValueError(f"Invalid my_field format: {v}")
        return v

# Update forward refs
# see https://pydantic-docs.helpmanual.io/usage/postponed_annotations/
MyClass.update_forward_refs()
- linkml.generators.pydanticgen.template.IMPORT_GROUPS#
See Import.group and Imports.sort
Order of this literal is used in sort and therefore not arbitrary.
alias of Literal['future', 'stdlib', 'thirdparty', 'local', 'conditional']
- class linkml.generators.pydanticgen.template.PydanticTemplateModel[source]#
Metaclass to render pydantic models with jinja templates.
Each subclass needs to declare a typing.ClassVar for a jinja template within the templates directory.
Templates are written expecting each of the other TemplateModels to already be rendered to strings - i.e. rather than the class.py.jinja template receiving a full PydanticAttribute object or dictionary, it receives it having already been rendered to a string. See the render() method.
Black Formatting
Template models will try to use black to format results when it is available in the environment and render is called with black = True. If it isn't, the string is returned without any formatting beyond the template. This is mostly important for complex annotations like those produced for arrays; otherwise the templates look acceptable.
To install linkml with black, use the black extra dependency.
e.g. with pip:
pip install linkml[black]
or with poetry:
poetry install -E black
- render(environment: Environment | None = None, black: bool = False) str [source]#
Recursively render a template model to a string.
For each field in the model, recurse through, rendering each PydanticTemplateModel using the template set in PydanticTemplateModel.template, but preserving the structure of lists and dictionaries. Regular BaseModels are rendered to dictionaries. Any other value is passed through unchanged.
- Parameters:
- environment (jinja2.Environment) – Template environment - see environment()
- black (bool) – if True, format template with black (default False)
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class linkml.generators.pydanticgen.template.EnumValue(*, label: str, value: str, description: str | None = None)[source]#
A single value within an Enum
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class linkml.generators.pydanticgen.template.PydanticEnum(*, name: str, description: str | None = None, values: ~typing.Dict[str, ~linkml.generators.pydanticgen.template.EnumValue] = <factory>)[source]#
Model used to render an enum.Enum
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class linkml.generators.pydanticgen.template.PydanticBaseModel(*, name: str = <factory>, extra_fields: ~typing.Literal['allow', 'forbid', 'ignore'] = 'forbid', fields: ~typing.List[str] | None = None, strict: bool = False)[source]#
Parameterization of the base model that generated pydantic classes inherit from
- fields: List[str] | None#
Extra fields that are typically injected into the base model via injected_fields
- strict: bool#
Enable strict mode in the base model.
Note
Pydantic 2 only! Pydantic 1 only has strict types, not strict mode. See: https://github.com/linkml/linkml/issues/1955
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class linkml.generators.pydanticgen.template.PydanticAttribute(*, name: str, required: bool = False, identifier: bool = False, key: bool = False, predefined: str | None = None, range: str | None = None, title: str | None = None, description: str | None = None, equals_number: int | float | None = None, minimum_value: int | float | None = None, maximum_value: int | float | None = None, exact_cardinality: int | None = None, minimum_cardinality: int | None = None, maximum_cardinality: int | None = None, multivalued: bool | None = None, pattern: str | None = None, meta: Dict[str, Any] | None = None)[source]#
Reduced version of SlotDefinition that carries all and only the information needed by the template
- meta_exclude: ClassVar[List[str]] = ['from_schema', 'owner', 'range', 'inlined', 'inlined_as_list']#
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class linkml.generators.pydanticgen.template.PydanticValidator(*, name: str, required: bool = False, identifier: bool = False, key: bool = False, predefined: str | None = None, range: str | None = None, title: str | None = None, description: str | None = None, equals_number: int | float | None = None, minimum_value: int | float | None = None, maximum_value: int | float | None = None, exact_cardinality: int | None = None, minimum_cardinality: int | None = None, maximum_cardinality: int | None = None, multivalued: bool | None = None, pattern: str | None = None, meta: Dict[str, Any] | None = None)[source]#
Trivial subclass of PydanticAttribute that uses the validator.py.jinja template instead
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class linkml.generators.pydanticgen.template.PydanticClass(*, name: str, bases: List[str] | str = 'ConfiguredBaseModel', description: str | None = None, attributes: Dict[str, PydanticAttribute] | None = None, meta: Dict[str, Any] | None = None)[source]#
Reduced version of ClassDefinition that carries all and only the information needed by the template.
On instantiation and rendering, will create any additional validators that are implied by the given attributes. Currently the only kind of slot-level validators that are created are for those slots that have a pattern property.
- attributes: Dict[str, PydanticAttribute] | None#
- property validators: Dict[str, PydanticValidator] | None#
- property slots: Dict[str, PydanticAttribute] | None#
alias of attributes
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class linkml.generators.pydanticgen.template.ObjectImport(*, name: str, alias: str | None = None)[source]#
An object to be imported from within a module.
See Import for examples
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class linkml.generators.pydanticgen.template.Import(*, module: str, alias: str | None = None, objects: List[ObjectImport] | None = None, is_schema: bool = False)[source]#
A python module, or module and classes to be imported.
Examples
Module import:
>>> Import(module='sys').render()
import sys
>>> Import(module='numpy', alias='np').render()
import numpy as np
Class import:
>>> Import(module='pathlib', objects=[
...     ObjectImport(name="Path"),
...     ObjectImport(name="PurePath", alias="RenamedPurePath")
... ]).render()
from pathlib import (
    Path,
    PurePath as RenamedPurePath
)
- objects: List[ObjectImport] | None#
- is_schema: bool#
Whether or not this Import is importing another schema imported by the main schema – i.e. it is not expected to be provided by the environment, but imported locally from within the package. Used primarily in split schema generation; see pydanticgen.generate_split() for example usage.
- property group: Literal['future', 'stdlib', 'thirdparty', 'local', 'conditional']#
Import group used when sorting
- future - from __future__ imports
- stdlib - the standard library
- thirdparty - other dependencies not in the standard library
- local - relative imports (e.g. from split generation)
- conditional - a ConditionalImport
- merge(other: Import) List[Import] [source]#
Merge one import with another; see Imports() for an example.
- If modules don't match, return both
- If one or the other is a ConditionalImport, return both
- If modules match, neither contains objects, but the other has an alias, return the other
- If modules match and one contains objects but the other doesn't, return both
- If modules match and both contain objects, merge the object lists, preferring objects with aliases
- sort() None [source]#
Sort imported objects
- First by whether the first letter is capitalized or not
- Then alphabetically (by object name rather than alias)
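As a rough plain-Python sketch of that ordering rule (assuming capitalized, class-like names sort first; this is an illustration, not the linkml implementation):

```python
# Sort object names: capitalized names first, then alphabetically.
# A two-part sort key implements the two-level rule described above.
names = ["path", "PurePath", "Path", "sep"]
ordered = sorted(names, key=lambda n: (not n[0].isupper(), n))
print(ordered)  # ['Path', 'PurePath', 'path', 'sep']
```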
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class linkml.generators.pydanticgen.template.ConditionalImport(*, module: str, alias: str | None = None, objects: List[ObjectImport] | None = None, is_schema: bool = False, condition: str, alternative: Import)[source]#
Import that depends on some condition in the environment, common when using backported features or straddling dependency versions.
Make sure that everything that is needed to evaluate the condition is imported before this is added to the injected imports!
Examples
Conditionally import Literal from typing_extensions when below python 3.8:
imports = (Imports() +
    Import(module='sys') +
    ConditionalImport(
        module="typing",
        objects=[ObjectImport(name="Literal")],
        condition="sys.version_info >= (3, 8)",
        alternative=Import(
            module="typing_extensions",
            objects=[ObjectImport(name="Literal")]
        )
    )
)
Renders to:
import sys
if sys.version_info >= (3, 8):
    from typing import Literal
else:
    from typing_extensions import Literal
- property group: Literal['conditional']#
Import group used when sorting
- future - from __future__ imports
- stdlib - the standard library
- thirdparty - other dependencies not in the standard library
- local - relative imports (e.g. from split generation)
- conditional - a ConditionalImport
- sort() None [source]#
Import.sort() called for self and alternative
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class linkml.generators.pydanticgen.template.Imports(*, imports: ~typing.List[~linkml.generators.pydanticgen.template.Import | ~linkml.generators.pydanticgen.template.ConditionalImport] = <factory>, group_order: ~typing.Tuple[str, ...] = ('future', 'stdlib', 'thirdparty', 'local', 'conditional'), render_sorted: bool = True)[source]#
Container class for imports that can handle merging!
See Import and ConditionalImport for examples of declaring individual imports.
Useful for generation, because each build stage will potentially generate overlapping imports. This ensures that we can keep a collection of imports without having many duplicates.
Defines methods for adding, iterating, and indexing from within the Imports.imports list.
Examples
imports = (Imports() +
    Import(module="sys") +
    Import(module="pathlib", objects=[ObjectImport(name="Path")]) +
    Import(module="sys")
)
Renders to:
from pathlib import Path
import sys
- imports: List[Import | ConditionalImport]#
- group_order: Tuple[str, ...]#
Order in which to sort imports by their Import.group
- classmethod imports_are_merged(imports: List[Import | ConditionalImport]) List[Import | ConditionalImport] [source]#
When creating from a list of imports, construct the model as if it had been built by iteratively adding each import with __add__ calls
- property import_groups: List[Literal['future', 'stdlib', 'thirdparty', 'local', 'conditional']]#
List of what group each import belongs to
- sort() None [source]#
Sort imports recursively, mimicking isort:
- First by Import.group according to Imports.group_order
- Then by whether the Import has any objects (import module comes before from module import name)
- Then alphabetically by module name
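That three-level key can be sketched with plain dictionaries standing in for Import objects (illustrative only, not the linkml data structures):

```python
GROUP_ORDER = ("future", "stdlib", "thirdparty", "local", "conditional")

# Each dict stands in for an Import: its group, whether it imports
# objects ("from module import name"), and its module name.
imports = [
    {"module": "numpy", "group": "thirdparty", "objects": None},
    {"module": "pathlib", "group": "stdlib", "objects": ["Path"]},
    {"module": "sys", "group": "stdlib", "objects": None},
]

def sort_key(imp):
    return (GROUP_ORDER.index(imp["group"]),  # 1. group order
            imp["objects"] is not None,       # 2. bare imports before from-imports
            imp["module"])                    # 3. alphabetical by module name

print([i["module"] for i in sorted(imports, key=sort_key)])
# ['sys', 'pathlib', 'numpy']
```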
- render(environment: Environment | None = None, black: bool = False) str [source]#
Recursively render a template model to a string.
For each field in the model, recurse through, rendering each PydanticTemplateModel using the template set in PydanticTemplateModel.template, but preserving the structure of lists and dictionaries. Regular BaseModels are rendered to dictionaries. Any other value is passed through unchanged.
- Parameters:
- environment (jinja2.Environment) – Template environment - see environment()
- black (bool) – if True, format template with black (default False)
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class linkml.generators.pydanticgen.template.PydanticModule(*, metamodel_version: str | None = None, version: str | None = None, base_model: ~linkml.generators.pydanticgen.template.PydanticBaseModel = PydanticBaseModel(name='ConfiguredBaseModel', extra_fields='forbid', fields=None, strict=False), injected_classes: ~typing.List[str] | None = None, python_imports: ~linkml.generators.pydanticgen.template.Imports | ~typing.List[~linkml.generators.pydanticgen.template.Import | ~linkml.generators.pydanticgen.template.ConditionalImport] = Imports(imports=[], group_order=('future', 'stdlib', 'thirdparty', 'local', 'conditional'), render_sorted=True, import_groups=[]), enums: ~typing.Dict[str, ~linkml.generators.pydanticgen.template.PydanticEnum] = <factory>, classes: ~typing.Dict[str, ~linkml.generators.pydanticgen.template.PydanticClass] = <factory>, meta: ~typing.Dict[str, ~typing.Any] | None = None)[source]#
Top-level container model for generating a pydantic module :)
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- base_model: PydanticBaseModel#
- python_imports: Imports | List[Import | ConditionalImport]#
- enums: Dict[str, PydanticEnum]#
- classes: Dict[str, PydanticClass]#
Arrays#
TODO
Narrative documentation for pydantic LoL Arrays. Subsection this by different array reps
See Schemas/Arrays
- class linkml.generators.pydanticgen.array.ArrayRepresentation(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]#
- class linkml.generators.pydanticgen.array.ArrayValidator[source]#
Validate the specification of a LinkML Array
- classmethod validate(array: ArrayExpression)[source]#
Validate an array expression.
- Raises:
ValidationError
- classmethod validate_dimension(dimension: DimensionExpression)[source]#
Validate a single array dimension
- Raises:
ValidationError
- static array_exact_dimensions(array: ArrayExpression)[source]#
Arrays can have exact_number_dimensions OR min/max_number_dimensions, but not both
- static array_consistent_n_dimensions(array: ArrayExpression)[source]#
Complex arrays with both exact/min/max_number_dimensions and parameterized dimensions need to have the exact/min/max_number_dimensions greater than the number of parameterized dimensions!
- static array_dimensions_ordinal(array: ArrayExpression)[source]#
minimum_number_dimensions needs to be less than maximum_number_dimensions when both are set
- static array_explicitly_unbounded(array: ArrayExpression)[source]#
Complex arrays with a minimum_number_dimensions and parameterized dimensions need to either use exact_number_dimensions to specify extra anonymous dimensions, or set maximum_number_dimensions to False to specify unbounded extra anonymous dimensions, to avoid ambiguity.
- static dimension_exact_cardinality(dimension: DimensionExpression)[source]#
Dimensions can only have exact_cardinality OR min/max_cardinality, but not both
- static dimension_ordinal(dimension: DimensionExpression)[source]#
minimum_cardinality must be less than maximum_cardinality when both are set
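The dimension-level rules above (exact vs. min/max exclusivity, and the ordinal check) can be sketched as a standalone check. The Dimension class below is a hypothetical stand-in for the linkml DimensionExpression, not the actual class:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Dimension:
    # Stand-in for DimensionExpression; field names mirror the LinkML slots.
    exact_cardinality: Optional[int] = None
    minimum_cardinality: Optional[int] = None
    maximum_cardinality: Optional[int] = None

def validate_dimension(dim: Dimension) -> None:
    # Dimensions can have exact_cardinality OR min/max_cardinality, not both
    if dim.exact_cardinality is not None and (
        dim.minimum_cardinality is not None or dim.maximum_cardinality is not None
    ):
        raise ValueError("exact_cardinality OR min/max_cardinality, not both")
    # minimum_cardinality must not exceed maximum_cardinality when both are set
    if (
        dim.minimum_cardinality is not None
        and dim.maximum_cardinality is not None
        and dim.minimum_cardinality > dim.maximum_cardinality
    ):
        raise ValueError("minimum_cardinality must not exceed maximum_cardinality")

validate_dimension(Dimension(minimum_cardinality=1, maximum_cardinality=3))  # ok
```

The array-level checks (array_exact_dimensions, array_dimensions_ordinal) follow the same pattern at the ArrayExpression level.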
- class linkml.generators.pydanticgen.array.ArrayRangeGenerator(array: ArrayExpression | None, dtype: str | Element)[source]#
Metaclass for generating a given format of array range.
See Shape Forms for more details on array range forms.
These classes do only enough validation of the array specification to decide which kind of representation to generate. Proper value validation should happen elsewhere (i.e. in the metamodel and the generated ArrayExpression class).
Each of the array representation generation methods should be able to handle the supported pydantic versions (currently still 1 and 2).
Notes
When checking for an array specification, recall that there is a semantic difference between None and False, particularly for ArrayExpression.max_number_dimensions - check for the absence of a specification with is None rather than checking for truthiness/falsiness (unless that's what you intend to do ofc ;)
- array#
Array to create a range for
- Type:
- dtype#
dtype of the entire array as a string
- Type:
Union[str, Element]
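The None/False distinction called out in the notes above matters in practice. A minimal illustration of why truthiness checks are wrong for max_number_dimensions:

```python
# Semantics of ArrayExpression.max_number_dimensions per the notes above:
#   None  -> no specification was given
#   False -> explicitly unbounded extra dimensions
max_number_dimensions = False

# Wrong: truthiness conflates "unspecified" (None) with "unbounded" (False)
looks_unspecified = not max_number_dimensions   # True for BOTH None and False

# Right: check for the absence of a specification with `is None`
is_unspecified = max_number_dimensions is None  # False: a spec WAS given
is_unbounded = max_number_dimensions is False   # True: explicitly unbounded

print(looks_unspecified, is_unspecified, is_unbounded)  # True False True
```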
- validate()[source]#
Ensure that the given ArrayExpression is valid using ArrayValidator
- Raises:
ValidationError
- property has_bounded_dimensions: bool#
Whether the ArrayExpression has some shape specification aside from dimensions
- classmethod get_generator(repr: ArrayRepresentation) Type[ArrayRangeGenerator] [source]#
Get the generator class for a given array representation
- class linkml.generators.pydanticgen.array.ListOfListsArray(array: ArrayExpression | None, dtype: str | Element)[source]#
Represent arrays as lists of lists!
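As a rough illustration (not the generator's exact output), a two-dimensional integer array in list-of-lists form is just a nested List annotation, and conformance amounts to walking the nesting. The helper below is hypothetical:

```python
from typing import List

# A LinkML array with two dimensions and an integer dtype, sketched as a
# nested list annotation in list-of-lists form.
TwoDIntArray = List[List[int]]

def conforms(value, n_dimensions: int, dtype: type) -> bool:
    """Check that value is a list nested to depth n_dimensions of dtype."""
    if n_dimensions == 0:
        return isinstance(value, dtype)
    return isinstance(value, list) and all(
        conforms(v, n_dimensions - 1, dtype) for v in value
    )

print(conforms([[1, 2], [3, 4]], 2, int))  # True
print(conforms([1, 2, 3], 2, int))         # False: only one level of nesting
```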
- class linkml.generators.pydanticgen.array.NumpydanticArray(array: ArrayExpression | None, dtype: str | Element)[source]#
Represent array range with numpydantic.NDArray annotations, allowing an abstract array specification to be used with many different array libraries.
- MIN_NUMPYDANTIC_VERSION = '1.6.1'#
Minimum numpydantic version needed to be installed in the environment using the generated models
Additional Notes#
LinkML contains two Python generators. The Pydantic generator is particularly useful for FastAPI, but it is newer and less fully featured than the standard Python generator.
Biolink Example#
Begin by downloading the Biolink Model YAML, creating a virtual environment, and installing linkml:
curl -OJ https://raw.githubusercontent.com/biolink/biolink-model/master/biolink-model.yaml
python3 -m venv venv
source venv/bin/activate
pip install linkml
Now generate the classes using the gen-pydantic command:
gen-pydantic biolink-model.yaml > biolink-model.py