Releases: kedro-org/kedro
0.17.1
Release 0.17.1
Major features and improvements
- Added `env` and `extra_params` to `reload_kedro()` line magic.
- Extended the `pipeline()` API to allow strings and sets of strings as `inputs` and `outputs`, to specify when a dataset name remains the same (not namespaced).
- Added the ability to add custom prompts with regexp validator for starters by repurposing `default_config.yml` as `prompts.yml`.
- Added the `env` and `extra_params` arguments to `register_config_loader` hook.
- Refactored the way `settings` are loaded. You will now be able to run:

```python
from kedro.framework.project import settings

print(settings.CONF_ROOT)
```

Bug fixes and other changes
- The version of a packaged modular pipeline now defaults to the version of the project package.
- Added fix to prevent new lines being added to pandas CSV datasets.
- Fixed issue with loading a versioned `SparkDataSet` in the interactive workflow.
- Kedro CLI now checks `pyproject.toml` for a `tool.kedro` section before treating the project as a Kedro project.
- Fixed `DataCatalog::shallow_copy` so that it now copies layers.
- `kedro pipeline pull` now uses `pip download` for protocols that are not supported by `fsspec`.
- Cleaned up documentation to fix broken links and rewrite permanently redirected ones.
- Added a `jsonschema` schema definition for the Kedro 0.17 catalog.
- `kedro install` now waits on Windows until all the requirements are installed.
- Exposed `--to-outputs` option in the CLI, throughout the codebase, and as part of hooks specifications.
- Fixed a bug where `ParquetDataSet` wasn't creating parent directories on the fly.
- Updated documentation.
Breaking changes to the API
- This release has broken the `kedro ipython` and `kedro jupyter` workflows. To fix this, follow the instructions in the migration guide below.
- Note: If you're using the `ipython` extension instead, you will not encounter this problem.
Migration guide
You will have to update the file `<your_project>/.ipython/profile_default/startup/00-kedro-init.py` in order to make `kedro ipython` and/or `kedro jupyter` work. Add the following line before the `KedroSession` is created:

```python
configure_project(metadata.package_name)  # to add

session = KedroSession.create(metadata.package_name, path)
```

Make sure that the associated import is provided in the same place as others in the file:

```python
from kedro.framework.project import configure_project  # to add
from kedro.framework.session import KedroSession
```

Thanks for supporting contributions
Mariana Silva, Kiyohito Kunii, noklam, Ivan Doroshenko, Zain Patel, Deepyaman Datta, Sam Hiscox, Pascal Brokmeier
0.17.0
Release 0.17.0
Major features and improvements
- In a significant change, we have introduced `KedroSession`, which is responsible for managing the lifecycle of a Kedro run.
- Created a new Kedro Starter: `kedro new --starter=mini-kedro`. It is possible to use the `DataCatalog` as a standalone component in a Jupyter notebook and transition into the rest of the Kedro framework.
- Added `DatasetSpecs` with Hooks to run before and after datasets are loaded from/saved to the catalog.
- Added a command: `kedro catalog create`. For a registered pipeline, it creates a `<conf_root>/<env>/catalog/<pipeline_name>.yml` configuration file with `MemoryDataSet` datasets for each dataset that is missing from `DataCatalog`.
- Added `settings.py` and `pyproject.toml` (to replace `.kedro.yml`) for project configuration, in line with Python best practice.
- `ProjectContext` is no longer needed, unless for very complex customisations. `KedroContext`, `ProjectHooks` and `settings.py` together implement sensible default behaviour. As a result `context_path` is also now an optional key in `pyproject.toml`.
- Removed `ProjectContext` from `src/<package_name>/run.py`.
- `TemplatedConfigLoader` now supports Jinja2 template syntax alongside its original syntax.
- Made registration Hooks mandatory, as the only way to customise the `ConfigLoader` or the `DataCatalog` used in a project. If no such Hook is provided in `src/<package_name>/hooks.py`, a `KedroContextError` is raised. There are sensible defaults defined in any project generated with Kedro >= 0.16.5.
Bug fixes and other changes
- `ParallelRunner` no longer results in a run failure, when triggered from a notebook, if the run is started using `KedroSession` (`session.run()`).
- `before_node_run` can now overwrite node inputs by returning a dictionary with the corresponding updates.
- Added minimal, black-compatible flake8 configuration to the project template.
- Moved `isort` and `pytest` configuration from `<project_root>/setup.cfg` to `<project_root>/pyproject.toml`.
- Extra parameters are no longer incorrectly passed from `KedroSession` to `KedroContext`.
- Relaxed `pyspark` requirements to allow for installation of `pyspark` 3.0.
- Added a `--fs-args` option to the `kedro pipeline pull` command to specify configuration options for the `fsspec` filesystem arguments used when pulling modular pipelines from non-PyPI locations.
- Bumped maximum required `fsspec` version to 0.9.
- Bumped maximum supported `s3fs` version to 0.5 (the `S3FileSystem` interface has changed since version 0.4.1).
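The `before_node_run` change above means a hook can rewrite some of a node's inputs just before the node function runs. A minimal sketch of that contract (illustrative only: a real hook lives in `src/<package_name>/hooks.py`, is decorated with Kedro's `@hook_impl`, and receives additional arguments such as the catalog; the dataset name `raw_data` and the filtering logic are hypothetical):

```python
# Sketch of the before_node_run contract described above, not Kedro's code.
class InputOverrideHooks:
    def before_node_run(self, node, inputs):
        if "raw_data" in inputs:
            # Returning a dict replaces the matching entries in the
            # node's inputs before the node function is called.
            return {"raw_data": [row for row in inputs["raw_data"] if row is not None]}
        return None  # returning None leaves the inputs untouched


hooks = InputOverrideHooks()
updates = hooks.before_node_run(node=None, inputs={"raw_data": [1, None, 2]})
print(updates)  # {'raw_data': [1, 2]}
```

Only the keys present in the returned dictionary are replaced; all other node inputs are passed through unchanged.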
Deprecations
- In Kedro 0.17.0 we have deleted the deprecated `kedro.cli` and `kedro.context` modules in favour of `kedro.framework.cli` and `kedro.framework.context` respectively.
Other breaking changes to the API
- `kedro.io.DataCatalog.exists()` returns `False` when the dataset does not exist, as opposed to raising an exception.
- The pipeline-specific `catalog.yml` file is no longer automatically created for modular pipelines when running `kedro pipeline create`. Use `kedro catalog create` to replace this functionality.
- Removed `include_examples` prompt from `kedro new`. To generate boilerplate example code, you should use a Kedro starter.
- Changed the `--verbose` flag from a global command to a project-specific command flag (e.g. `kedro --verbose new` becomes `kedro new --verbose`).
- Dropped support of the `dataset_credentials` key in credentials in `PartitionedDataSet`.
- `get_source_dir()` was removed from `kedro/framework/cli/utils.py`.
- Dropped support of `get_config`, `create_catalog`, `create_pipeline`, `template_version`, `project_name` and `project_path` keys by the `get_project_context()` function (`kedro/framework/cli/cli.py`).
- `kedro new --starter` now defaults to fetching the starter template matching the installed Kedro version.
- Renamed `kedro_cli.py` to `cli.py` and moved it inside the Python package (`src/<package_name>/`), for a better packaging and deployment experience.
- Removed `.kedro.yml` from the project template and replaced it with `pyproject.toml`.
- Removed `KEDRO_CONFIGS` constant (previously residing in `kedro.framework.context.context`).
- Modified `kedro pipeline create` CLI command to add a boilerplate parameter config file in `conf/<env>/parameters/<pipeline_name>.yml` instead of `conf/<env>/pipelines/<pipeline_name>/parameters.yml`. CLI commands `kedro pipeline delete`/`package`/`pull` were updated accordingly.
- Removed `get_static_project_data` from `kedro.framework.context`.
- Removed `KedroContext.static_data`.
- The `KedroContext` constructor now takes `package_name` as first argument.
- Replaced the `context` property on `KedroSession` with a `load_context()` method.
- Renamed `_push_session` and `_pop_session` in `kedro.framework.session.session` to `_activate_session` and `_deactivate_session` respectively.
- Custom context class is set via the `CONTEXT_CLASS` variable in `src/<your_project>/settings.py`.
- Removed the `KedroContext.hooks` attribute. Instead, hooks should be registered in `src/<your_project>/settings.py` under the `HOOKS` key.
- Restricted names given to nodes to match the regex pattern `[\w\.-]+$`.
- Removed `KedroContext._create_config_loader()` and `KedroContext._create_data_catalog()`. They have been replaced by registration hooks, namely `register_config_loader()` and `register_catalog()` (see also upcoming deprecations).
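The node-name restriction above can be checked with a few lines of standard-library code. The pattern is quoted from the release notes; the validator function is our own sketch (Kedro applies this check internally when a node is named):

```python
import re

# The pattern Kedro 0.17.0 enforces for node names; the helper below
# is our own illustrative validator, not Kedro's implementation.
NODE_NAME_PATTERN = re.compile(r"[\w\.-]+$")


def is_valid_node_name(name: str) -> bool:
    # re.match anchors at the start; the trailing $ anchors at the end,
    # so the whole name must consist of word chars, dots and hyphens.
    return NODE_NAME_PATTERN.match(name) is not None


print(is_valid_node_name("split.train_test"))  # True: word chars and dots
print(is_valid_node_name("train model"))       # False: spaces are rejected
```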
Upcoming deprecations for Kedro 0.18.0
- `kedro.framework.context.load_context` will be removed in release 0.18.0.
- `kedro.framework.cli.get_project_context` will be removed in release 0.18.0.
- We've added a `DeprecationWarning` to the decorator API for both `node` and `pipeline`. These will be removed in release 0.18.0. Use Hooks to extend a node's behaviour instead.
- We've added a `DeprecationWarning` to the Transformers API when adding a transformer to the catalog. These will be removed in release 0.18.0. Use Hooks to customise the `load` and `save` methods.
Thanks for supporting contributions
Deepyaman Datta, Zach Schuster
Migration guide from Kedro 0.16.* to 0.17.*
Reminder: Our documentation on how to upgrade Kedro covers a few key things to remember when updating any Kedro version.
The Kedro 0.17.0 release contains some breaking changes. If you update Kedro to 0.17.0 and then try to work with projects created against earlier versions of Kedro, you may encounter some issues when trying to run kedro commands in the terminal for that project. Here's a short guide to getting your projects running against the new version of Kedro.
Note: As always, if you hit any problems, please check out our documentation.
To get an existing Kedro project to work after you upgrade to Kedro 0.17.0, we recommend that you create a new project against Kedro 0.17.0 and move the code from your existing project into it. Let's go through the changes, but first, note that if you create a new Kedro project with Kedro 0.17.0 you will not be asked whether you want to include the boilerplate code for the Iris dataset example. We've removed this option (you should now use a Kedro starter if you want to create a project that is pre-populated with code).
To create a new, blank Kedro 0.17.0 project to drop your existing code into, you can create one, as always, with kedro new. We also recommend creating a new virtual environment for your new project, or you might run into conflicts with existing dependencies.
- Update `pyproject.toml`: Copy the following three keys from the `.kedro.yml` of your existing Kedro project into the `pyproject.toml` file of your new Kedro 0.17.0 project:

```toml
[tool.kedro]
package_name = "<package_name>"
project_name = "<project_name>"
project_version = "0.17.0"
```

Check your source directory. If you defined a different source directory (`source_dir`), make sure you also move that to `pyproject.toml`.
- Copy files from your existing project:
  - Copy subfolders of `project/src/project_name/pipelines` from existing to new project
  - Copy subfolders of `project/src/test/pipelines` from existing to new project
  - Copy the requirements your project needs into `requirements.txt` and/or `requirements.in`.
  - Copy your project configuration from the `conf` folder. Take note of the new locations needed for modular pipeline configuration (move it from `conf/<env>/pipeline_name/catalog.yml` to `conf/<env>/catalog/pipeline_name.yml` and likewise for `parameters.yml`).
  - Copy from the `data/` folder of your existing project, if needed, into the same location in your new project.
  - Copy any Hooks from `src/<package_name>/hooks.py`.
- Update your new project's README and docs as necessary.
- Update `settings.py`: For example, if you specified additional Hook implementations in `hooks`, or listed plugins under `disable_hooks_by_plugin` in your `.kedro.yml`, you will need to move them to `settings.py` accordingly:

```python
from <package_name>.hooks import MyCustomHooks, ProjectHooks

HOOKS = (ProjectHooks(), MyCustomHooks())

DISABLE_HOOKS_FOR_PLUGINS = ("my_plugin1",)
```

- **Mig...
0.16.6
Major features and improvements
- Added documentation with a focus on single machine and distributed environment deployment; the series includes Docker, Argo, Prefect, Kubeflow, AWS Batch, AWS Sagemaker and extends our section on Databricks
- Added kedro-starter-spaceflights alias for generating a project: `kedro new --starter spaceflights`.
Bug fixes and other changes
- Fixed `TypeError` when converting dict inputs to a node made from a wrapped `partial` function.
- `PartitionedDataSet` improvements:
  - Supported passing arguments to the underlying filesystem.
- Improved handling of non-ASCII word characters in dataset names.
  - For example, a dataset named `jalapeño` will be accessible as `DataCatalog.datasets.jalapeño` rather than `DataCatalog.datasets.jalape__o`.
- Fixed `kedro install` for an Anaconda environment defined in `environment.yml`.
- Fixed backwards compatibility with templates generated with older Kedro versions <0.16.5. No longer need to update `.kedro.yml` to use `kedro lint` and `kedro jupyter notebook convert`.
- Improved documentation.
- Added documentation using MinIO with Kedro.
- Improved error messages for incorrect parameters passed into a node.
- Fixed issue with saving a `TensorFlowModelDataset` in the HDF5 format with versioning enabled.
- Added missing `run_result` argument in `after_pipeline_run` Hooks spec.
- Fixed a bug in IPython script that was causing context hooks to be registered twice. To apply this fix to a project generated with an older Kedro version, apply the same changes made in this PR to your `00-kedro-init.py` file.
Thanks for supporting contributions
Deepyaman Datta, Bhavya Merchant, Lovkush Agarwal, Varun Krishna S, Sebastian Bertoli, noklam, Daniel Petti, Waylon Walker
0.16.5
Major features and improvements
- Added the following new datasets.
| Type | Description | Location |
|---|---|---|
| `email.EmailMessageDataSet` | Manage email messages using the Python standard library | `kedro.extras.datasets.email` |
- Added support for `pyproject.toml` to configure Kedro. `pyproject.toml` is used if `.kedro.yml` doesn't exist (Kedro configuration should be under the `[tool.kedro]` section).
- Projects created with this version will have no `pipeline.py`, having been replaced by `hooks.py`.
- Added a set of registration hooks, as the new way of registering library components with a Kedro project:
  - `register_pipelines()`, to replace `_get_pipelines()`
  - `register_config_loader()`, to replace `_create_config_loader()`
  - `register_catalog()`, to replace `_create_catalog()`

  These can be defined in `src/<package-name>/hooks.py` and added to `.kedro.yml` (or `pyproject.toml`). The order of execution is: plugin hooks, `.kedro.yml` hooks, hooks in `ProjectContext.hooks`.
- Added ability to disable auto-registered Hooks using `.kedro.yml` (or `pyproject.toml`) configuration file.
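A sketch of what `src/<package-name>/hooks.py` might contain under this registration-hook scheme. This is illustrative only: in a real project each method carries Kedro's `@hook_impl` decorator and returns actual `Pipeline`, `ConfigLoader` and `DataCatalog` objects, whereas the bodies below are simplified placeholders:

```python
# Illustrative sketch of the registration hooks named above, not a
# working Kedro hooks module: real implementations are decorated with
# @hook_impl and return real Kedro objects.
class ProjectHooks:
    def register_pipelines(self):
        # Replaces ProjectContext._get_pipelines(): maps pipeline names
        # to Pipeline objects (an empty list stands in here).
        return {"__default__": []}

    def register_config_loader(self, conf_paths):
        # Replaces ProjectContext._create_config_loader(): would
        # normally return a ConfigLoader built from conf_paths.
        return list(conf_paths)


pipelines = ProjectHooks().register_pipelines()
print(sorted(pipelines))  # ['__default__']
```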
Bug fixes and other changes
- Added option to run asynchronously via the Kedro CLI.
- Absorbed `.isort.cfg` settings into `setup.cfg`.
- `project_name`, `project_version` and `package_name` now have to be defined in `.kedro.yml` for projects generated using Kedro 0.16.5+.
- Packaging a modular pipeline raises an error if the pipeline directory is empty or non-existent.
Thanks for supporting contributions
0.16.4
Release 0.16.4
Major features and improvements
- Enabled auto-discovery of hooks implementations coming from installed plugins.
Bug fixes and other changes
- Fixed a bug for using `ParallelRunner` on Windows.
- Modified `GBQTableDataSet` to load customised results using customised queries from Google Big Query tables.
- Documentation improvements.
Thanks for supporting contributions
Ajay Bisht, Vijay Sajjanar, Deepyaman Datta, Sebastian Bertoli, Shahil Mawjee, Louis Guitton, Emanuel Ferm
0.16.3
0.16.2
Major features and improvements
- Added the following new datasets.
| Type | Description | Location |
|---|---|---|
| `pandas.AppendableExcelDataSet` | Works with Excel file opened in append mode | `kedro.extras.datasets.pandas` |
| `tensorflow.TensorFlowModelDataset` | Works with TensorFlow models using TensorFlow 2.X | `kedro.extras.datasets.tensorflow` |
| `holoviews.HoloviewsWriter` | Works with Holoviews objects (saves as image file) | `kedro.extras.datasets.holoviews` |
- `kedro install` will now compile project dependencies (by running `kedro build-reqs` behind the scenes) before the installation if the `src/requirements.in` file doesn't exist.
- Added `only_nodes_with_namespace` in the `Pipeline` class to filter only nodes with a specified namespace.
- Added the `kedro pipeline delete` command to help delete unwanted or unused pipelines (it won't remove references to the pipeline in your `create_pipelines()` code).
- Added the `kedro pipeline package` command to help package up a modular pipeline. It will bundle up the pipeline source code, tests, and parameters configuration into a .whl file.
Bug fixes and other changes
- Improvement in `DataCatalog`:
  - Introduced regex filtering to the `DataCatalog.list()` method.
  - Non-alphanumeric characters (except underscore) in dataset names are replaced with `__` in `DataCatalog.datasets`, for ease of access to transcoded datasets.
- Improvement in Datasets:
  - Improved initialization speed of `spark.SparkHiveDataSet`.
  - Improved S3 cache in `spark.SparkDataSet`.
  - Added support of options for building `pyarrow` table in `pandas.ParquetDataSet`.
- Improvement in `kedro build-reqs` CLI command:
  - `kedro build-reqs` is now called with the `-q` option and will no longer print out compiled requirements to the console for security reasons.
  - All unrecognized CLI options in the `kedro build-reqs` command are now passed to the pip-compile call (e.g. `kedro build-reqs --generate-hashes`).
- Improvement in `kedro jupyter` CLI command:
  - Improved error message when running `kedro jupyter notebook`, `kedro jupyter lab` or `kedro ipython` with Jupyter/IPython dependencies not being installed.
  - Fixed `%run_viz` line magic for showing kedro viz inside a Jupyter notebook. For the fix to be applied on an existing Kedro project, please see the migration guide.
  - Fixed the bug in IPython startup script (issue 298).
- Documentation improvements:
  - Updated community-generated content in FAQ.
  - Added find-kedro and kedro-static-viz to the list of community plugins.
  - Added missing `pillow.ImageDataSet` entry to the documentation.
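The two `DataCatalog` improvements above can be illustrated with plain standard-library code. This is our own sketch of the described behaviour, not Kedro's implementation, and the dataset names are made up:

```python
import re

# Sketch of DataCatalog.list(regex_search=...) filtering behaviour.
def list_datasets(names, regex_search=None):
    if regex_search is None:
        return list(names)
    pattern = re.compile(regex_search)
    return [name for name in names if pattern.search(name)]


# Sketch of how non-alphanumeric characters (except underscore) become
# "__" when datasets are exposed as attributes on DataCatalog.datasets.
def attribute_name(dataset_name):
    return re.sub(r"[^0-9a-zA-Z_]", "__", dataset_name)


print(list_datasets(["cars", "cars@spark", "planes"], "^cars"))  # the two cars datasets
print(attribute_name("cars@spark"))  # 'cars__spark'
```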
Breaking changes to the API
Migration guide from Kedro 0.16.1 to 0.16.2
Guide to apply the fix for %run_viz line magic in existing project
Even though this release ships a fix for projects generated with kedro==0.16.2, after upgrading you will still need to make a change in your existing project if it was generated with kedro>=0.16.0,<=0.16.1 for the fix to take effect. Specifically, please replace the content of your project's IPython init script, located at .ipython/profile_default/startup/00-kedro-init.py, with the content of this file. You will also need kedro-viz>=3.3.1.
Thanks for supporting contributions
Miguel Rodriguez Gutierrez, Joel Schwarzmann, w0rdsm1th, Deepyaman Datta, Tam-Sanh Nguyen, Marcus Gawronsky
0.16.1
Bug fixes and other changes
- Fixed deprecation warnings from `kedro.cli` and `kedro.context` when running `kedro jupyter notebook`.
- Fixed a bug where `catalog` and `context` were not available in Jupyter Lab and Notebook.
- Fixed a bug where `kedro build-reqs` would fail if you didn't have your project dependencies installed.
0.16.0
Major features and improvements
CLI
- Added new CLI commands (only available for the projects created using Kedro 0.16.0 or later):
  - `kedro catalog list` to list datasets in your catalog
  - `kedro pipeline list` to list pipelines
  - `kedro pipeline describe` to describe a specific pipeline
  - `kedro pipeline create` to create a modular pipeline
- Improved the CLI speed by up to 50%.
- Improved error handling when making a typo on the CLI. We now suggest some of the possible commands you meant to type, in `git`-style.
Framework
- All modules in `kedro.cli` and `kedro.context` have been moved into `kedro.framework.cli` and `kedro.framework.context` respectively. `kedro.cli` and `kedro.context` will be removed in future releases.
- Added `Hooks`, which is a new mechanism for extending Kedro.
- Fixed `load_context` changing the user's current working directory.
- Allowed the source directory to be configurable in `.kedro.yml`.
- Added the ability to specify nested parameter values inside your node inputs, e.g. `node(func, "params:a.b", None)`
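Conceptually, a nested reference such as `"params:a.b"` walks the parameters dictionary key by key, so the node function receives only the nested value. The resolver below is our own sketch of that idea, not Kedro's code:

```python
# Sketch: resolve a nested parameter reference like "params:a.b"
# against a parameters dictionary (illustrative, not Kedro's code).
def resolve_param(ref, parameters):
    assert ref.startswith("params:")
    value = parameters
    for key in ref[len("params:"):].split("."):
        value = value[key]  # descend one level per dotted segment
    return value


params = {"a": {"b": 0.05, "c": "unused"}}
print(resolve_param("params:a.b", params))  # 0.05
```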
DataSets
- Added the following new datasets.
| Type | Description | Location |
|---|---|---|
| `pillow.ImageDataSet` | Work with image files using Pillow | `kedro.extras.datasets.pillow` |
| `geopandas.GeoJSONDataSet` | Work with geospatial data using GeoPandas | `kedro.extras.datasets.geopandas.GeoJSONDataSet` |
| `api.APIDataSet` | Work with data from HTTP(S) API requests | `kedro.extras.datasets.api.APIDataSet` |
- Added `joblib` backend support to `pickle.PickleDataSet`.
- Added versioning support to `MatplotlibWriter` dataset.
- Added the ability to install dependencies for a given dataset with more granularity, e.g. `pip install "kedro[pandas.ParquetDataSet]"`.
- Added the ability to specify extra arguments, e.g. `encoding` or `compression`, for `fsspec.spec.AbstractFileSystem.open()` calls when loading/saving a dataset. See Example 3 under docs.
Other
- Added `namespace` property on `Node`, related to the modular pipeline where the node belongs.
- Added an option to enable asynchronous loading of inputs and saving of outputs in both the `SequentialRunner(is_async=True)` and `ParallelRunner(is_async=True)` classes.
- Added `MemoryProfiler` transformer.
- Removed the requirement to have all dependencies for a dataset module to use only a subset of the datasets within.
- Added support for `pandas>=1.0`.
- Enabled Python 3.8 compatibility. Please note that a Spark workflow may be unreliable for this Python version as `pyspark` is not fully-compatible with 3.8 yet.
- Renamed "features" layer to "feature" layer to be consistent with (most) other layers and the relevant FAQ.
Bug fixes and other changes
- Fixed a bug where a new version created mid-run by an external system caused inconsistencies in the load versions used in the current run.
- Documentation improvements
  - Added instruction in the documentation on how to create a custom runner.
  - Updated contribution process in `CONTRIBUTING.md` - added Developer Workflow.
  - Documented installation of development version of Kedro in the FAQ section.
  - Added missing `_exists` method to the `MyOwnDataSet` example in 04_user_guide/08_advanced_io.
- Fixed a bug where `PartitionedDataSet` and `IncrementalDataSet` were not working with the `s3a` or `s3n` protocol.
- Added ability to read partitioned parquet file from a directory in `pandas.ParquetDataSet`.
- Replaced `functools.lru_cache` with `cachetools.cachedmethod` in `PartitionedDataSet` and `IncrementalDataSet` for per-instance cache invalidation.
- Implemented custom glob function for `SparkDataSet` when running on Databricks.
- Fixed a bug in `SparkDataSet` not allowing for loading data from DBFS in a Windows machine using Databricks-connect.
- Improved the error message for `DataSetNotFoundError` to suggest possible dataset names the user meant to type.
- Added the option for contributors to run Kedro tests locally without a Spark installation with `make test-no-spark`.
- Added option to lint the project without applying the formatting changes (`kedro lint --check-only`).
Breaking changes to the API
Datasets
- Deleted obsolete datasets from `kedro.io`.
- Deleted the `kedro.contrib` and `extras` folders.
- Deleted obsolete `CSVBlobDataSet` and `JSONBlobDataSet` dataset types.
- Made the `invalidate_cache` method on datasets private.
- `get_last_load_version` and `get_last_save_version` methods are no longer available on `AbstractDataSet`.
- `get_last_load_version` and `get_last_save_version` have been renamed to `resolve_load_version` and `resolve_save_version` on `AbstractVersionedDataSet`, the results of which are cached.
- The `release()` method on datasets extending `AbstractVersionedDataSet` clears the cached load and save version. All custom datasets must call `super()._release()` inside `_release()`.
- `TextDataSet` no longer has `load_args` and `save_args`. These can instead be specified under `open_args_load` or `open_args_save` in `fs_args`.
- `PartitionedDataSet` and `IncrementalDataSet` method `invalidate_cache` was made private: `_invalidate_caches`.
Other
- Removed `KEDRO_ENV_VAR` from `kedro.context` to speed up the CLI run time.
- `Pipeline.name` has been removed in favour of `Pipeline.tag()`.
- Dropped `Pipeline.transform()` in favour of the `kedro.pipeline.modular_pipeline.pipeline()` helper function.
- Made constant `PARAMETER_KEYWORDS` private, and moved it from `kedro.pipeline.pipeline` to `kedro.pipeline.modular_pipeline`.
- Layers are no longer part of the dataset object, as they've moved to the `DataCatalog`.
- Python 3.5 is no longer supported by the current and all future versions of Kedro.
Migration guide from Kedro 0.15.* to 0.16.*
Migration for datasets
Since all the datasets (from `kedro.io` and `kedro.contrib.io`) were moved to `kedro/extras/datasets`, you must update the type of all datasets in the `<project>/conf/base/catalog.yml` file.
Here is how it should be changed: `type: <SomeDataSet>` -> `type: <subfolder of kedro/extras/datasets>.<SomeDataSet>` (e.g. `type: CSVDataSet` -> `type: pandas.CSVDataSet`).
In addition, all the specific datasets like CSVLocalDataSet, CSVS3DataSet etc. were deprecated. Instead, you must use generalized datasets like CSVDataSet.
E.g. type: CSVS3DataSet -> type: pandas.CSVDataSet.
Note: No changes required if you are using your custom dataset.
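Put together, a catalog entry migrates like this. The dataset name and filepath below are illustrative, not taken from a real project:

```yaml
# Before (Kedro 0.15.*): location-specific dataset type
cars:
  type: CSVLocalDataSet
  filepath: data/01_raw/cars.csv
---
# After (Kedro 0.16.0): generalized dataset under its subfolder
cars:
  type: pandas.CSVDataSet
  filepath: data/01_raw/cars.csv
```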
Migration for Pipeline.transform()
Pipeline.transform() has been dropped in favour of the pipeline() constructor. The following changes apply:
- Remember to import `from kedro.pipeline import pipeline`
- The `prefix` argument has been renamed to `namespace`
- And `datasets` has been broken down into more granular arguments:
  - `inputs`: Independent inputs to the pipeline
  - `outputs`: Any output created in the pipeline, whether an intermediary dataset or a leaf output
  - `parameters`: `params:...` or `parameters`
As an example, code that used to look like this with the `Pipeline.transform()` constructor:

```python
result = my_pipeline.transform(
    datasets={"input": "new_input", "output": "new_output", "params:x": "params:y"},
    prefix="pre",
)
```

When used with the new `pipeline()` constructor, becomes:

```python
from kedro.pipeline import pipeline

result = pipeline(
    my_pipeline,
    inputs={"input": "new_input"},
    outputs={"output": "new_output"},
    parameters={"params:x": "params:y"},
    namespace="pre",
)
```

Migration for decorators, color logger, transformers etc.
Since some modules were moved to other locations you need to update import paths appropriately.
You can find the list of moved files in the 0.15.6 release notes under the section titled Files with a new location.
Migration for KEDRO_ENV_VAR, the environment variable
Note: If you haven't made significant changes to your `kedro_cli.py`, it may be easier to simply copy the updated `kedro_cli.py` and `.ipython/profile_default/startup/00-kedro-init.py` from GitHub or a newly generated project into your old project.
- We've removed `KEDRO_ENV_VAR` from `kedro.context`. To get your existing project template working, you'll need to remove all instances of `KEDRO_ENV_VAR` from your project template:
  - From the imports in `kedro_cli.py` and `.ipython/profile_default/startup/00-kedro-init.py`: `from kedro.context import KEDRO_ENV_VAR, load_context` -> `from kedro.framework.context import load_context`
  - Remove the `envvar=KEDRO_ENV_VAR` line from the click options in `run`, `jupyter_notebook` and `jupyter_lab` in `kedro_cli.py`
  - Replace `KEDRO_ENV_VAR` with `"KEDRO_ENV"` in `_build_jupyter_env`
  - Replace `context = load_context(path, env=os.getenv(KEDRO_ENV_VAR))` with `context = load_context(path)` in `.ipython/profile_default/startup/00-kedro-init.py`
Migration for kedro build-reqs
We have upgraded pip-tools which is used by kedro build-reqs to 5.x. This pip-tools version requires pip>=20.0. To upgrade pip, please refer to their documentation.
Thanks for supporting contributions
@foolsgold, [Mani ...