Backward incompatible changes in 1.0.0
As there will be a few backward-incompatible changes in AiiDA 1.0.0, let's use this page to list them so we don't forget them. Before the release we will convert them to a nice doc.
Some TODOs
- New class names, link names, link connection rules (provenance redesign). Probably mention main changes and point to new docs.
- New way of defining CalcJobs
- conversion of is_alloy and has_vacancies to properties, change of order of parameters in arrays, change of type of symbols (and where they are stored). Check carefully also other changes in #2310 and put in a linter also the changes that are only deprecated. See also #2374
- Most of the imports have been moved. See what is the simplest way of explaining this (probably explain the new structure of the aiida code and the fact that one should use only functions from aiida and aiida.xxx and not deeper down). See e.g. #2357
- Most of the command line -> mention renames, changes, ...
- Dropped legacy workflows
Conrad's Notes: The target content for the docs should live between the "Content below" and "Content above" tags. The final section is just a list of all PRs that didn't look relevant. In principle, all PRs from after v1.0.0a4 are on this page, either within the content section, because I thought they looked possibly relevant, or in the end section, because I thought they looked benign.
Content below
TODO: This section should probably cover any kind of change that alters the behaviour of something from 0.12.2, even if it is not strictly backwards incompatible.
- Restore the correct plugin type string based on the module path of the base `Data` types [#1192] [Advanced]
- Do not allow the `copy` or `deepcopy` of `Node`, except for `Data` [#1705](sphuber) [Users - new clone method]
- Fix 2181 node incoming outgoing [#2236](waychal) Fixes #2181
Two new methods are implemented:
- get_incoming(): to get all input nodes of the given node
- get_outgoing(): to get all output nodes of the given node
Deprecated:
- get_inputs()
- get_inputs_dict()
- get_outputs()
- get_outputs_dict()
All occurrences of `get_inputs()` and `get_inputs_dict()` have been replaced by the `get_incoming()` method, and those of `get_outputs()` and `get_outputs_dict()` by `get_outgoing()`.
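The migration pattern can be illustrated with a toy model. Note that `Node`, the in-memory link list and the deprecated alias below are simplified stand-ins: in AiiDA itself, `get_incoming()` returns a manager over link triples, not a plain list.

```python
import warnings


class Node:
    """Toy stand-in for an AiiDA node, illustrating the old-to-new API shim."""

    def __init__(self, label, inputs=None):
        self.label = label
        self._incoming = list(inputs or [])

    def get_incoming(self):
        """New API: return all nodes with a link going into this node."""
        return list(self._incoming)

    def get_inputs(self):
        """Old API, kept as a deprecated alias delegating to get_incoming()."""
        warnings.warn('get_inputs() is deprecated, use get_incoming()', DeprecationWarning)
        return self.get_incoming()


structure = Node('structure')
calc = Node('calc', inputs=[structure])
print([n.label for n in calc.get_incoming()])  # → ['structure']
```

The deprecated alias keeps old scripts working while steering users towards the new names.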
- Prevent storing base `Node`s in the DB; only their subclasses may be stored [#2301](giovannipizzi) This PR fixes #2300, which explains the reasoning behind it
- Reorganization of some top-level modules [#2357](sphuber) Fixes #2311
There were a few top-level modules whose purposes were not fully clear and partially overlapping, which caused their content to be quite unorganized and scattered. These modules have been reorganized after clearly defining their purpose:
- aiida.common: for generic utility functions and data structures
- aiida.manage: for code that manages an AiiDA instance
- aiida.tools: for code that interfaces with third-party libraries
The module `aiida.control` has been removed. Code that interacted with internal components has been moved into `cmdline` or `manage` where applicable. `aiida.control.postgres` has been moved to the `aiida.manage.external` module, which is reserved for code that has to interact with system-wide components, such as the postgres database.
The module `aiida.utils` has been removed. Generic functionality has been placed in `aiida.common` and more specific functionality has been placed in the appropriate sub-modules.
In order to define a clearer interface for users to the AiiDA entities, and to allow (in the future) swapping between different profiles, the underlying hierarchy of nodes and links has been reworked. A new paradigm has been implemented, with a "frontend" class that all users interact with, and a "backend" class that users should never use directly, which implements the interaction with the database through a common interface, independent of the underlying ORM. The reorganisation of nodes and link types is mostly invisible to the user, but existing import statements will need to be updated.
- ORM design refactor [#2190](muhrin) Ported `Computer`, `AuthInfo` and `User` to the new backend design, where the user interacts with concrete classes which themselves contain the abstract class corresponding to the backend type for that class.
Having the user interact with concrete classes means that they can, amongst other things, instantiate them using the constructor and have the correct backend type be created underneath (rather than using import magic as before).
There is also a corresponding `Collection` class for each object type to interact with the collection of those entities, e.g. `User.objects` resolves to the collection of users and can be used to do things like delete users.
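The constructor/collection split can be sketched in isolation. The `Collection` class and its in-memory list below are illustrative stand-ins for the pattern, not AiiDA's backend implementation, where the constructor additionally creates the correct backend instance underneath.

```python
class Collection:
    """Holds all instances of one entity type; a stand-in for e.g. User.objects."""

    def __init__(self):
        self._entries = []

    def add(self, entity):
        self._entries.append(entity)

    def all(self):
        return list(self._entries)

    def delete(self, entity):
        self._entries.remove(entity)


class User:
    objects = Collection()  # class-level collection, mimicking User.objects

    def __init__(self, email):
        # In AiiDA the constructor also creates the correct backend instance;
        # here we only register with the collection to mimic that behaviour.
        self.email = email
        User.objects.add(self)


alice = User('alice@example.com')
print(len(User.objects.all()))  # → 1
```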
- Rename node class for workfunctions [#2189](sphuber) The node class used by workfunctions changed from `FunctionCalculation` to `WorkFunctionNode`.
- All calculations now go through the `Process` layer, homogenizing the state of work and job calculations [#1125]
- Implement the new base ORM classes for process nodes [#2184](sphuber) Fixes #2167
This commit defines the new basic hierarchy of the ORM classes for process nodes. In addition, it already moves over the functionality of the current abstract calculation classes to their new corresponding counterparts.
- `AbstractCalculation` -> `ProcessNode` and `CalcJobNode`
- `AbstractJobCalculation` -> `CalcJobNode`
This should allow the refactoring of the code according to the renaming of the node ORM classes to begin.
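Put together, the renames in these PRs imply a node hierarchy along the following lines. This is a sketch reconstructed from the mappings described on this page, not the literal `aiida-core` source.

```python
# Empty stand-in classes expressing only the subclass relationships.
class Node: pass

class ProcessNode(Node): pass
class CalculationNode(ProcessNode): pass
class CalcJobNode(CalculationNode): pass
class CalcFunctionNode(CalculationNode): pass
class WorkflowNode(ProcessNode): pass
class WorkChainNode(WorkflowNode): pass
class WorkFunctionNode(WorkflowNode): pass

class Data(Node): pass

# Every process node is a Node, but Data is not a ProcessNode.
print(issubclass(CalcJobNode, ProcessNode), issubclass(Data, ProcessNode))  # → True False
```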
- Add temporary support for loading of new process node ORM classes [#2188](sphuber) Fixes #2187
The loading of the correct ORM class for a given entry in the `DbNode` table is based on the `type` column, but its interpretation is still in a half-way state. The plugin loader will first attempt to reverse engineer it into an entry point and, if that works, load the corresponding class. If that fails, it is interpreted as an internal class and the loader will attempt to load it directly as a module path.
The new process node ORM classes will need to be supported by the plugin loader, so here we add their entry points. We do this in the `aiida.node` group, but note that in the future the entire system will be changed to rely solely on entry points, at which point the format of the `type` column can be changed to a fully qualified entry point string, just as the `process_type` column already is.
- Rename node class for `WorkChain`s [#2192](sphuber) Fixes #2168
The node class used by `WorkChain`s changed from `WorkCalculation` to `WorkChainNode`.
- Rename node class for inline calculations [#2195](sphuber) Fixes #2170
The node class changed from `InlineCalculation` to `CalcFunctionNode`.
- Make `Code` a real subclass of `Data` [#2193](ConradJohnston) Fixes #2173. Subclasses `Code` from `Data` rather than `Node`, and simplifies places in the code where a code had to be treated as a 'special' data-like node.
- Rename node class for `JobCalculation` [#2201](sphuber) Fixes #2199
The node class changed from `JobCalculation` to `CalcJobNode`.
Since the `JobCalculation` class is heavily used by users, as it is the class that is subclassed to define a job calculation, we cannot quite get rid of it. Instead, we moved all the logic from the old `Calculation` and `JobCalculation` classes to their new equivalents `ProcessNode` and `CalcJobNode`, and made `JobCalculation` an alias for `CalcJobNode`. This last class can still be imported from `aiida.orm.calculation` and `aiida.orm.calculation.job`. Note that the paths that import directly from the `implementation.general` sub-module have been removed, because the job calculation classes are now fully concrete.
Furthermore, the old entry points for the base calculation classes `inline`, `job`, `work` and `function` have been removed, as they have been replaced by the entry points in the `aiida.node` group for the new node classes.
- Converted Group to new backend system [#2210](sphuber) Fixes #2080
- Implement new link types [#2220](sphuber) Fixes #2177
With the new ORM hierarchy in place, the corresponding link types can be defined:
- `CREATE`: `CalculationNode` -> `Data`
- `RETURN`: `WorkflowNode` -> `Data`
- `INPUT_CALC`: `Data` -> `CalculationNode`
- `INPUT_WORK`: `Data` -> `WorkflowNode`
- `CALL_CALC`: `WorkflowNode` -> `CalculationNode`
- `CALL_WORK`: `WorkflowNode` -> `WorkflowNode`

These rules are encoded in the `validate_link` function in `aiida.orm.utils.links`. This function is called from `add_incoming` (the renamed `add_link_from`) and validates the triple (source, link, target) representing the addition of a link from a source node to a target node.
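The six rules above can be sketched as a lookup table with a `validate_link`-style check. This is a simplified model: the real function in `aiida.orm.utils.links` also validates actual node types, storage state and link degrees.

```python
# Allowed (source, target) node types for each link type, per the rules above.
LINK_RULES = {
    'CREATE': ('CalculationNode', 'Data'),
    'RETURN': ('WorkflowNode', 'Data'),
    'INPUT_CALC': ('Data', 'CalculationNode'),
    'INPUT_WORK': ('Data', 'WorkflowNode'),
    'CALL_CALC': ('WorkflowNode', 'CalculationNode'),
    'CALL_WORK': ('WorkflowNode', 'WorkflowNode'),
}


def validate_link(source_type, link_type, target_type):
    """Raise ValueError if the (source, link, target) triple is not allowed."""
    expected = LINK_RULES.get(link_type)
    if expected is None:
        raise ValueError('unknown link type: {}'.format(link_type))
    if (source_type, target_type) != expected:
        raise ValueError('{} links must go {} -> {}'.format(link_type, *expected))


validate_link('Data', 'INPUT_CALC', 'CalculationNode')  # passes silently
```

In particular, there is no rule with a `CalculationNode` source for `CALL` links, which is what makes "calculations cannot call other processes" fall out of link validation.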
- Converted Log to new backend system [#2227](muhrin)
- Removed custom Log find methods. These are piped through to the QueryBuilder now
- Temporarily adding dblog dummy model (can be removed once spyros finishes with the aldjemy work)
- Move the `Comment` class to the new backend interface [#2225](sphuber) Fixes #2223
- Make Groups great again [#2329](yakutovicha) Fixes #2075 and fixes #160
- `aiida.orm.importexport`: while importing data from the export file, allow specifying a user-defined group in which to put all the imported data
- `aiida.common.utils`:
  - Remove the `get_group_type_mapping` function, which mapped machine-specific group names to user-friendly ones
  - Add `escape_for_sql_like`, which escapes `%` or `_` symbols provided by the user
- `aiida.orm.groups`:
  - Add the `GroupTypeString` enum, which contains all allowed group types: data.upf (was data.upf.family), auto.import (was aiida.import), auto.run (was autogroup.run), user (was the empty string)
  - Remove the `Group.query` and `Group.group_query` methods, as they are redundant
- `aiida.orm.data.upf`:
  - Set `UPFGROUP_TYPE` to `GroupTypeString.UPFGROUP_TYPE`
  - Replace the usage of `Group.query` by `QueryBuilder` in the `get_upf_groups` and `get_upf_family_names` methods
- `aiida.orm.autogroup`:
  - Set `VERDIAUTOGROUP_TYPE` to `GroupTypeString.VERDIAUTOGROUP_TYPE`
- `aiida.cmdline.commands.cmd_group`:
  - Add `verdi group copy`
  - Add an option to show all available group types
  - Add a default for the group_type option
  - Replace `Group.query` with `QueryBuilder` in `verdi group list`
  - Remove usage of the `get_group_type_mapping()` function
- `aiida.cmdline.commands.cmd_import`: add the possibility to define a group that will contain all imported nodes
- `aiida.cmdline.params.types.group`: add the possibility for the `GroupParamType` to create groups if they don't exist
- `aiida.backend*`: rename `type` and `name` to `type_string` and `label` for the database models
- Improve documentation for the django and sqla backends migration
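The new `GroupTypeString` enum could look roughly as follows. The member names here are guesses; only the mapped values are taken from the list above.

```python
from enum import Enum


class GroupTypeString(Enum):
    # Member names are illustrative; the values are the new group type strings.
    UPFGROUP_TYPE = 'data.upf'        # was 'data.upf.family'
    IMPORTGROUP_TYPE = 'auto.import'  # was 'aiida.import'
    VERDIAUTOGROUP_TYPE = 'auto.run'  # was 'autogroup.run'
    USER = 'user'                     # was the empty string


print(GroupTypeString.VERDIAUTOGROUP_TYPE.value)  # → auto.run
```

Using an enum means every allowed group type string is enumerable and typos fail loudly instead of silently creating a new group type.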
- Add `CalculationTools` base and entry point `aiida.tools.calculations` [#2331](sphuber) Fixes #2322
With the migration to the new provenance design, the type string of calculation nodes no longer refers to the actual subclass of `JobCalculation` that was run, but just to the base `CalcJobNode`. The actual class of the calculation process is stored in `process_type`, if it could be mapped onto a known entry point.
However, this change means that when a user now loads the node of a completed calculation, say a `PwCalculation`, the loaded node will be an instance of `CalcJobNode` and not `PwCalculation`, which means that any utility methods defined on the `PwCalculation` class are inaccessible.
We can restore this functionality through the concept of calculation tools, which will be exposed through the `CalcJobNode` class and will load a specifically registered entry point.
- Define correct `type` string for `CalcJobNode` instances [#2376](sphuber) The type string of a `CalcJobNode` should now be just that of the node class and not the subclass. Since users are currently still subclassing the node class and not a process class, we have to make a special exception when generating the type string for the node.
Conversely, this type string, stored in the `type` column of the node, should be used to load the `CalcJobNode` class when loading from the database. The only classes that can legally exist in a database, and therefore be loaded, are defined in `aiida-core`. Therefore, using the entry point system to map the type string onto an actual ORM class is no longer necessary. We rename the `aiida.plugins.loader.load_plugin` function to the more correct `load_node_class`, which, given a type string, will return the corresponding ORM node subclass.
Note that the whole machinery around generating type and query strings, and loading the nodes based on them, is still somewhat convoluted and contains hacks for two reasons:
- `Data` is not yet moved within the `aiida.orm.node` sub-module and as a result gets the `data.Data.` type string, which will not match the `node.Node.` type string when subclassing in queries.
- `CalcJob` processes are defined by subclassing `JobCalculation`. Until users directly define a `Process` subclass that uses the `CalcJobNode` as its node class, exceptions will have to be made.

If these two issues are addressed, a lot of the code around type strings can be simplified and cleaned up.
- Further simplification of node type string definition and loading [#2401](sphuber) Fixes #1603
With the recent introduction of the `CalcJob` process, which now ensures that also for job calculations it is the `CalcJobNode` that gets stored in the database, with a type string just like all the other node types, a lot of the old logic in `aiida.plugins.loader` to determine the correct `type` string for a `Node` (sub)class became obsolete: all node classes are now by definition internal to `aiida-core` and have a type string based directly on their module path.
The few functions that remain, `get_type_string_from_class`, which formats the correct type string for a given `Node` class, and `load_node_class`, which can reload that class from the string, have been moved to `aiida.orm.utils.node`, since they no longer have anything to do with a "plugin".
Finally, moving the `WorkflowNode`, `CalculationNode` and `ProcessNode` classes into the `__init__` files of their respective modules allows us to further simplify `load_node_class`, as well as making all these classes and their subclasses importable from the `aiida.orm.node` module. Eventually, when `aiida.orm.data` is also moved there, all classes can even be exposed directly on `aiida.orm`, which is the deepest a user should ever need, or be allowed, to import from.
- Move `aiida.orm.data` within the `aiida.orm.node` module [#2402](sphuber) Fixes #2200
This final move makes the hierarchy of all node modules and their subclasses consistent. The `Data` and `ProcessNode` classes, which are both subclasses of the `Node` base class and direct siblings, now live on the same level in the hierarchy, directly within the `aiida.orm.node` module.
Note that because the type string of all `Data` nodes now starts with `node.` instead of `data.`, a data migration had to be added.
One of the most significant changes in version 1.0.0 is the new workflow engine. The new engine aims to improve robustness and user-friendliness. In doing so, a few minor changes were introduced that break workchains written before 1.0.0. However, such workchains can be updated with the few small changes listed here:
- The free function `submit` in any `WorkChain` should be replaced with `self.submit`.
- The `_options` input for a `CalcJob` is now `options`; simply remove the leading underscore.
- The `label` and `description` inputs for a `CalcJob` or a `WorkChain` have also lost the leading underscore.
- The free functions from `aiida.work.run` have been moved to `aiida.work.launch`, although for the time being the old import will still work.
- The future returned by `submit` no longer has the `pid` attribute but rather `pk`.
- The `get_inputs_template` classmethod has been replaced by `get_builder`. See the section on the :ref:`process builder <process_builder>` on how to use it.
- The import `aiida.work.workfunction.workfunction` has been moved to `aiida.work.process_function.workfunction`.
- The `input_group` has been deprecated and replaced by namespaces. See the section on :ref:`port namespaces <ports_portnamespaces>` on how to use them.
- The use of a `.` (period) in output keys is not supported in `Process.out`, because it is now reserved to indicate namespaces.
- The method `ArrayData.iterarrays()` has been renamed to `ArrayData.get_iterarrays()`.
- The method `TrajectoryData._get_cif()` has been renamed to `TrajectoryData.get_cif()`.
- The method `TrajectoryData._get_aiida_structure()` has been renamed to `TrajectoryData.get_structure()`.
- The method `StructureData._get_cif()` has been renamed to `StructureData.get_cif()`.
- The method `Code.full_text_info()` has been renamed to `Code.get_full_text_info()`.
- The method `Code.is_hidden()` has been changed and is now accessed through the `Code.hidden` property.
- The method `RemoteData.is_empty()` has been changed and is now accessed through the `RemoteData.is_empty` property.
- The method `.is_alloy()` for the classes `StructureData` and `Kind` is now accessed through the `.is_alloy` property.
- The method `.has_vacancies()` for the classes `StructureData` and `Kind` is now accessed through the `.has_vacancies` property.
- The arguments `stepids` and `cells` of the :meth:`TrajectoryData.set_trajectory() <aiida.orm.node.data.array.trajectory.TrajectoryData.set_trajectory>` method are made optional, which has implications on the ordering of the arguments passed to this method.
- The list of atomic symbols for trajectories is no longer stored as array data but is now accessible through the `TrajectoryData.symbols` attribute.
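For migrating existing code, a quick helper can flag some of the renamed methods in a source file. This is a hypothetical script, not something shipped with AiiDA, and its rename table covers only a subset of the list above.

```python
import re

# Old method name -> new method name (subset of the renames listed above).
RENAMES = {
    'iterarrays': 'get_iterarrays',
    '_get_cif': 'get_cif',
    '_get_aiida_structure': 'get_structure',
    'full_text_info': 'get_full_text_info',
}


def find_renamed_calls(source):
    """Return [(old, new), ...] for every deprecated method call found in source."""
    hits = []
    for old, new in RENAMES.items():
        # Match '.old(' so plain identifiers with the same substring are skipped.
        if re.search(r'\.{}\('.format(re.escape(old)), source):
            hits.append((old, new))
    return hits


print(find_renamed_calls('cif = structure._get_cif()'))  # → [('_get_cif', 'get_cif')]
```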
- The convention of leading underscores for non-storable inputs has been replaced with a proper `non_db` attribute of the `Port` class [#1105](sphuber)
- @sphuber: we should replace this with the sub-namespace where we now have store_provenance etc.
- TODO: should we move the 'updating workchains from pre-v1.0.0' section from readthedocs to here, to keep only one version?
- TODO: add dropping of legacy workflows
- Implemented the `ProcessBuilder`, which simplifies the definition of `Process` inputs and the launching of a `Process` [#1116](sphuber) - Probably remove?
- Namespaces have been added to the port containers of the `ProcessSpec` class [#1099](sphuber)
- Remove implementation of legacy workflows [#2379](sphuber) Fixes #2378
Note that the database models are kept in place as they will be removed later, because it will require a migration that will drop the tables. However, we want to include some functionality that allows the user to dump the content for storage, before having the migration executed.
In the new workflow system, it is not possible to launch calculations by first creating an instance of the `Calculation` and then calling the `calculation.use_xxx` methods, as was common in early versions of AiiDA. Instead, you need to pass the correct `Calculation` class to the `run` or `submit` function, passing the nodes to link as inputs via keyword arguments. For the past few versions, we kept backward compatibility by supporting both ways of submitting. In version 1.0 we decided to keep only a single way of submitting calculations, for simplicity.
- TODO: add PRs with the fix above
- Implement the new `calcfunction` decorator [#2203](sphuber) Fixes #2172
This `calcfunction` decorator replaces the `make_inline` and `optional_inline` decorators, which are deprecated. The `calcfunction` operates identically to the `workfunction`, except the former gets a `CalcFunctionNode` to represent itself in the provenance graph, as opposed to the `WorkFunctionNode` of the latter.
The behavior of the `optional_inline` decorator is already intrinsically contained within the `calcfunction` and `workfunction` decorators. Since under the hood a `FunctionProcess` will be generated, and each process has an input port `store_provenance` that can be toggled to `False` to prevent the provenance from being stored, each `calcfunction` can be run without storing the provenance by simply passing `store_provenance=False` when calling the function.
- Refactor the function process decorator [#2246](sphuber) Fixes #2171
Note: I have not updated the documentation as that will require more work in any case and I leave it for the coding week.
The `calcfunction` and `workfunction` decorators essentially run the same code, the only difference being the type of node they use to represent their execution in the provenance graph. The code has been refactored into a generic `process_function` decorator which takes a node class as an argument. The `calcfunction` and `workfunction` decorators are then simply defined by wrapping `process_function` with the correct node class, `CalcFunctionNode` and `WorkFunctionNode` respectively.
The difference in link rules between the two concrete decorators is also completely defined by those node classes, so the logic of the `FunctionProcess` class that is dynamically generated by the process function decorator can be implemented once. The common functionality of defining and overriding inputs is tested in `test_process_functions`. A `process_function` should never actually be run directly, but for testing purposes we defined a dummy ORM class. Functionality that is specific to the two concrete decorators, such as which type of links can be created, is tested individually.
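The refactor described above follows a standard decorator-factory pattern, which can be sketched in isolation. The node classes and the wrapper below are minimal stand-ins: the real `process_function` dynamically builds a full `FunctionProcess` and runs it through the engine.

```python
import functools


class CalcFunctionNode: pass
class WorkFunctionNode: pass


def process_function(node_class):
    """Generic decorator factory, parametrized by the provenance node class."""
    def decorator(function):
        @functools.wraps(function)
        def wrapper(*args, **kwargs):
            node = node_class()  # record of the execution in the provenance graph
            result = function(*args, **kwargs)
            return result, node
        return wrapper
    return decorator


# The two concrete decorators differ only in the node class they wrap.
calcfunction = process_function(CalcFunctionNode)
workfunction = process_function(WorkFunctionNode)


@calcfunction
def add(x, y):
    return x + y


result, node = add(1, 2)
print(result, type(node).__name__)  # → 3 CalcFunctionNode
```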
- Put in an explicit check raising when calculations call another process [#2250](sphuber) Fixes #2248
By definition, calculation-type processes cannot call other processes. This was already indirectly guarded against by the link validation. If one attempted to add a `CALL` link from a `CalculationNode`, the link validation would raise a `ValueError`. However, it is more instructive for the user to catch this problem before attempting to add the link. Therefore, when setting up the database record for a process, if a parent process is available, we ensure that it is not a calculation-type process, or we raise an `InvalidOperation` exception. A test is added for a `calcfunction` calling another `calcfunction`.
- Implement `CalcJob` process class [#2389](sphuber) Fixes #2377, fixes #2280, fixes #2219 and fixes #2381
This commit can be summarized in three steps:
- Reimplementation of a job calculation as a `Process` called `CalcJob`
- Changing the job calculation node to be purely informational
- Removal of the old job calculation mechanics and business logic
The old way of creating a job calculation was to subclass the `JobCalculation` class, override the `_use_methods` class method to define the input nodes, and override `_prepare_for_submission` to set up the input files for the calculation. The problem was that these methods were implemented on the `Node` class, thus mixing the responsibilities of running a calculation and introspecting the results of a completed one.
Here we define the `CalcJob` class, a subclass of `Process`. This class replaces the old `JobCalculation` and allows a user to define the inputs and outputs through the `ProcessSpec`, just as they would do for a `WorkChain`. Except, instead of defining an outline, one should implement `prepare_for_submission`, which fulfills the exact same function as before, only it is now a public method of the `CalcJob` process class.
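The shape of the new API can be sketched with stand-in classes. The `ProcessSpec`/`Process` stubs and the `x`/`y` inputs below are illustrative assumptions, not AiiDA's actual signatures; they only show the split between declaring inputs on a spec and producing input files in a public `prepare_for_submission`.

```python
class ProcessSpec:
    """Stub spec collecting declared input ports."""

    def __init__(self):
        self.inputs = {}

    def input(self, name, valid_type=object):
        self.inputs[name] = valid_type


class Process:
    @classmethod
    def define(cls, spec):
        pass  # base class declares no ports in this sketch


class CalcJob(Process):
    @classmethod
    def define(cls, spec):
        # Inputs are declared on the spec, as for a WorkChain.
        super().define(spec)
        spec.input('x', valid_type=int)
        spec.input('y', valid_type=int)

    def prepare_for_submission(self, inputs):
        """Produce the 'input file' content for the scheduler job."""
        return '{x} {y}\n'.format(**inputs)


spec = ProcessSpec()
CalcJob.define(spec)
print(sorted(spec.inputs))  # → ['x', 'y']
```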
Finally, the role of the job calculation state, stored as an attribute with the key `state` on the `CalcJobNode`, has changed significantly. The original job calculations had a calculation state that controlled the logic during their lifetime. This was already superseded a long time ago by the process wrapper that now fully governs the progression of the calculation. Despite the calculation state no longer being authoritative during the calculation's lifetime, it was still present. Here we finally remove it and leave only a stripped-down version. The remaining state is stored as an attribute and is a sub-state while the `CalcJob` process is in an active state, serving as a more granular state that can be queried for. This is useful because the process status, which keeps similar information, is human-readable and does not allow for easy querying.
- Redesign the `Parser` class [#2397](sphuber) Fixes #2390
Now that job calculations are directly implemented as processes, by subclassing the `CalcJob` process, a lot of functionality has been removed from the `CalcJobNode` that did not belong there. However, quite a few attributes and methods remain there that are used by the `Parser` that parses the results attached to the node, such as the names of retrieved links. These should, just like the input and output nodes, be specified on the `Process` class.
We redesign the `Parser` class to get the information it needs from the `Process` class, which can be obtained through the `CalcJobNode` as a proxy. Subsequently, it can directly get information like default output node labels, as well as exit codes, from the process class. The base method that will be called by the engine to trigger the parsing of a retrieved job calculation is the `parse` method. The engine will pass the nodes that the parser needs for parsing as keyword arguments. The engine knows which nodes to pass, as they have been marked as such by the `CalcJob` process spec.
The `Parser` class exposes a convenience method `parse_from_node` that can be called outside of the engine, passing in a `CalcJobNode`. This will trigger the `parse` method to be called again, but it will be wrapped in a `calcfunction`, ensuring that the produced outputs will be connected to a `ProcessNode` in the provenance graph.
Finally, what was the `CalculationResultManager` has been renamed to `CalcJobResultManager` and has been moved to `aiida.orm.utils`. The interface has also been adapted: it no longer goes through the parser class to get the information it needs, but rather through the `CalcJob` class that can be retrieved from the `CalcJobNode` for which it is constructed.
- Implement the concept of an "exit status" for all calculations, allowing a programmatic definition of success or failure for all processes [#1189]
- Implement the `get_options` method for `JobCalculation` [#1961](sphuber)
- Fix 2183 Update QB Relationship Indicators [#2224](ConradJohnston) Fixes #2183.
Changes the existing QueryBuilder join types from `input_of` and `output_of` to `with_outgoing` and `with_incoming` respectively. For now, `ancestor_of` and `descendant_of` remain in use. The remaining join types are changed to `with_{entity}`, where the type of join performed depends contextually on the class passed in the `qb.append()` method. For example: `QueryBuilder().append(User, tag='u').append(Group, with_user='u').all()`. Throughout the code, the old relationships have been replaced.
The old methods, if used, now print a deprecation warning, provided by the new `AiidaDeprecationWarning` class from `aiida.common.warnings`. This warning has the advantage of working with PyCharm and not being swallowed like `DeprecationWarning`. All deprecation warnings in the code have now been replaced by this new warning, and the documentation has been updated.
- Change 'ancestor_of'/'descendant_of' to 'with_descendants'/'with_ancestors' [#2278](ConradJohnston) Fixes #2209.
Deprecates the QueryBuilder 'ancestor_of'/'descendant_of' join type specifications and replaces them with 'with_descendants'/'with_ancestors' respectively. All deprecated uses are updated and the docs changed to reflect this new convention.
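The trick behind a warning class that is not swallowed, like `AiidaDeprecationWarning`, is to subclass `Warning` directly rather than `DeprecationWarning`, since Python's default filters hide the latter outside of `__main__`. A minimal sketch (the function name below is illustrative):

```python
import warnings


class AiidaDeprecationWarning(Warning):
    """Subclasses Warning directly so it is shown by default, unlike DeprecationWarning."""


def input_of_deprecated():
    warnings.warn('input_of is deprecated, use with_incoming instead',
                  AiidaDeprecationWarning)


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    input_of_deprecated()

print(caught[0].category.__name__)  # → AiidaDeprecationWarning
```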
- Enforce the behavior of queries to resemble the backend, by disallowing deduplication of results by SQLAlchemy [#2281](lekah) This resolves #1600, by forcing SQLAlchemy to not de-duplicate rows. Therefore, the backend (psql) behavior is more closely reproduced, and we have the following equality: `qb.count() == len(qb.all())`. A test for this was added.
The verdi command line interface has been migrated over to a new system (called click), making the interface of all verdi commands consistent: now the way to specify a node (via a PK, a UUID or a label) is the same for all commands, and command-line options that have the same meaning use the same flags in all commands. To make this possible, the interface of various verdi commands has been changed to ensure consistency. Also the output of most commands has been homogenised (e.g. to print errors or warnings always in the same style). Moreover, some of the commands have been renamed to be
consistent with the new names of the classes in AiiDA.
- Migrate `verdi` to the click infrastructure [#1795]
- The output of `verdi calculation list` and `verdi work list` has been homogenized [#1197]
- Improve the grouping and ordering of the output of `verdi calculation show` [#1212]
- `verdi code show` no longer shows the number of calculations by default, to improve performance, with a `--verbose` flag to restore the old behavior [#1428]
- `verdi work tree` has been removed in favor of `verdi work status` [#1299]
- Homogenize the interface of `verdi quicksetup` and `verdi setup` [#1797]
- Synchronize the heuristics of `verdi work list` and `verdi calculation list` [#1819]
- ZipFile: set `allowZip64=True` as the default [#1619](yakutovicha) Solves issue #1617 (exporting large databases)
According to my tests, enabling the flag `allowZip64=True` does not change the time needed to create a zip archive. For more details see #1617.
- Renaming and reorganizing some `verdi` commands [#2204](sphuber) Fixes #2175
- Implement `verdi config` [#2354](sphuber) Fixes #2208
The new command `verdi config` replaces the various commands under `verdi devel` that were used to list, get, set and unset configuration options, which used to be called "properties". Since the term "property" is a reserved keyword, it was decided to rename them to "options", which is also the term used by git. The interface of `verdi config` also mirrors that of `git config`. That is to say, to get the value of an option simply call `verdi config <option_name>`, and to set it `verdi config <option_name> <option_value>`. To unset it, the `--unset` flag can be used. Finally, the getting, setting or unsetting can be applied to a certain "scope", meaning configuration-wide or profile-specific.
To make the implementation of this command simple and clean, the bulk of the work was in refactoring the definition, construction and operation on the configuration of an AiiDA instance. This is now represented by the `Config` class, through which these configuration options can be set, unset and retrieved.
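Following the `git config` analogy described above, the interface looks roughly like this (a sketch from the description; the exact placement of the `--unset` flag is an assumption):

```console
$ verdi config <option_name>                  # get the value of an option
$ verdi config <option_name> <option_value>   # set it
$ verdi config <option_name> --unset          # unset it
```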
- Add utility functions based on the `click` library to simplify writing command line interface scripts [#1194]
- `verdi export create`: set `allowZip64=True` as the default when exporting as a zip file [#1619]
- Fix the formatting of `verdi calculation logshow` [#1850]
As part of a number of new features and improvements, the underlying library used in the CifData object has been updated to use pymatgen rather than ase.
- The default library used in `_get_aiida_structure` to convert `CifData` to `StructureData` has been changed from `ase` to `pymatgen` [#1257]
- fix plugin test fixtures for unittest [#1622](ltalirz)
- unittest fixtures now use a custom testrunner
- testrunner allows to run multiple testcases (fix #1425)
- testrunner will set up aiida environment only once per run (fix #1451)
- Verdi run: don't inject future statements into the target script [#2218](borellim) Fix #2217
Content above
- Code has been made python 3 compatible. [#804][#2136][#2125][#2117][#2110][#2100][#2094][#2092]
- AiiDA now enforces UTF-8 encoding for text output in its files and databases. [#2107]
- Implementation of the `AuthInfo` class, which will allow custom configuration per configured computer [#1184]
- Implemented the `DbImporter` for the Materials Platform of Data Science API, which exposes the content of the Pauling file [#1238]
- Implement the `has_atomic_sites` and `has_unknown_species` properties for the `CifData` class [#1257]
- Added element `X` to the elements list in order to support unknown species [#1613]
- Enable use of tuples in `QueryBuilder.append` for all ORM classes [#1608], [#1607]
- Bump version of Django to `1.8.19` for py3 support [#1915]
- Each profile now has its own daemon that can be run completely independently in parallel [#1217]
- Polling based daemon has been replaced with a much faster event-based daemon [#1067]
- Replaced `Celery` with `Circus` as the daemonizer of the daemon [#1213]
- The daemon can now be stopped without loading the database, making it possible to stop it even if the database version does not match the code [#1231]
- Ported job calculation to use coroutines for tasks [#1827]
- Implement exponential backoff retry mechanism for transport tasks [#1837]
- Pause a `JobProcess` when a transport task falls through the exponential backoff [#1903]
- Each daemon worker now respects an optional minimum scheduler polling interval [#1929]
- `InlineCalculation`s have been ported to use the new `Process` infrastructure, while maintaining full backwards compatibility [#1124]
- Implemented a Sphinx extension for the `WorkChain` class to automatically generate documentation from the workchain definition [#1155]
- Added a new feature for a `WorkChain` to expose the inputs and outputs of another `WorkChain`, which is perfect for writing modular workflows [#1170]
- Add built-in support and API for exit codes in `WorkChain`s [#1640], [#1704], [#1681]
- Overload `PortNamespace` mutable properties upon exposing [#1635]
- Implement method for `JobCalculation` to create a restart builder [#1962]
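Conceptually, the exit code support mentioned above lets a workchain step return a status/message pair instead of raising an exception. A minimal stand-in for the concept (this is not the actual AiiDA API; the names here are invented for illustration):

```python
from collections import namedtuple

# Hypothetical stand-in for an exit code; AiiDA's real API differs.
ExitCode = namedtuple('ExitCode', ['status', 'message'])

ERROR_SUBPROCESS_FAILED = ExitCode(400, 'a subprocess failed')

def inspect_calculation(calculation_ok):
    """A workchain-like step: return a non-zero exit code to abort cleanly."""
    if not calculation_ok:
        return ERROR_SUBPROCESS_FAILED
    return None  # None (or a zero status) means success

result = inspect_calculation(calculation_ok=False)
print(result.status, result.message)  # 400 a subprocess failed
```

The benefit over raising exceptions is that the failure mode is recorded as machine-readable data on the process, so callers can branch on the status.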
- Added new command `verdi process` to interact with running processes [#1855]
- Added new command `verdi group rename` [#1224]
- Added new command `verdi code duplicate` [#1737]
- Added new command `verdi computer duplicate` [#1937]
- Added new command `verdi profile show` [#2028]
- Added new command `verdi work show` [#1816]
- Added new command `verdi export inspect` [#2128]
- Determine active nodes in `verdi calculation list` based on process state [#1873]
- Add the option `--version` to `verdi` to display current version [#1811]
- Improve error message for `verdi import` when archive version is incompatible [#1960]
- Allow PostgreSQL connections via unix sockets [#1721]
- Creating unique constraint and indexes at the `db_dbgroup_dbnodes` table for SqlAlchemy [#1680]
- Performance improvement for adding nodes to group [#1677]
- Renamed `aiida.daemon.execmanager.job_states` to `JOB_STATES`, conforming to python conventions [#1799]
- Abstract method `aiida.scheduler.Scheduler._get_detailed_jobinfo_command()` raises `aiida.common.exceptions.FeatureNotAvailable` (was `NotImplemented`).
- Added an importer class for the Materials Project API [#2097]
- Big reorganization of the documentation structure [#1299]
- Added section on the basics of workchains and workfunctions [#1384]
- Added section on how to launch workchains and workfunctions [#1385]
- Added section on how to monitor workchains and workfunctions [#1387]
- Added section on the concept of the `Process` [#1395]
- Added section on advanced concepts of the `WorkChain` class, as well as best practices on designing/writing workchains [#1459]
- Remove outdated or duplicated docs for legacy and new workflow system [#1718]
- Added entry of potential issues for transports that enter bash login shells that write spurious output [#2132]
- Fix leaking of SSH processes when using a proxy command for a computer using SSH transport [#2019]
- Fixed a problem with the temporary folder containing the files of the `retrieve_temporary_list` that could be cleaned before parsing finished [#1168]
- Fixed a bug in the `store` method of `CifData` which would raise an exception when called more than once [#1136]
- Restored a proper implementation of mutability for `Node` attributes [#1181]
- Fix bug in `verdi export create` when only exporting computers [#1448]
- Fix copying of the calculation raw input folder in caching [#1745]
- Fix sphinxext command by allowing whitespace in argument [#1644]
- Check for `test_` keyword in repo path only on last directory during profile setup [#1812]
- Fix bug in the `RemoteData._clean` method [#1847]
- Differentiate `quicksetup` profile settings based on project folder [#1901]
- Ensure `WorkChain` does not exit unless stepper returns non-zero value [#1945]
- Fix variable `virtual_memory_kb` in direct scheduler [#2050]
- Enable tab-completion for `verdi devel tests` [#1809]
- Add `-v/--verbose` flag to `verdi devel tests` [#1807]
- Remove unserializable data from metadata in `Log` records [#2469](sphuber) Fixes #2363
  If unserializable data, often found in the `exc_info` or `args` keys, is not removed from the metadata of a log record, a database exception will be thrown.
- Add the `verdi data upf content` command [#2468](sphuber) Fixes #2467
  This will print the bare contents of the file behind `UpfData` nodes
- Implement support for process functions in `verdi process report` [#2466](sphuber) Fixes #2465
  The nodes used by process functions, i.e. `CalcFunctionNode` and `WorkFunctionNode`, can also have log messages attached that should be displayable through `verdi process report`. This is implemented in a helper function `get_process_function_report`.
- Ensure nested `calcfunctions` work as long as `store_provenance=False` [#2464](sphuber) Fixes #2444
  In principle, `calcfunctions` are not allowed to call other `calcfunctions`, because a calculation is not allowed to call other calculations. However, `calcfunctions` can be called as normal functions by setting the metadata variable `store_provenance` to `False`. Therefore, as long as `store_provenance` is turned off for nested functions, a `calcfunction` is allowed to call other `calcfunctions`.
  To make this work, the `Process._setup_db_record` method had to be adapted to add a call link to the parent process if and only if `store_provenance` for the child node is enabled. Otherwise, the link validation would fail, since a `CALL` link outgoing from a `CalculationNode` is forbidden.
- Pin the version of dependency `pika==1.0.0b1` [#2463](sphuber) The second beta was just released, which breaks our implementation. This will have to be adapted later.
- Disable some SSH transport tests until Travis issue is solved [#2461](sphuber) Problems described in issue #2460
- Fix upper limit for dependency to `pg8000<1.13.0` [#2450](sphuber) Fixes #2449
  The library `pg8000`, which is used by `pgtest` to interface with Postgres and is used by us for testing purposes, released a new minor version `v1.13.0` on Feb 1 2019, in which it dropped py2 support, causing our tests to fail miserably. For the time being, we put an upper bound on that dependency.
- Corrects the Log migrations, improves import/export [#2447](szoupanos) This PR improves the DbLog migrations for Django & SQLA that were created for issues #1102 and #1759 with PR #2393.
  More specifically, the changes are described in issue #2423. Also, the `objuuid` was removed and `objpk` is used. Import/export was adapted by @CasperWA and logs are exported as Node records (with their pk, which is recreated during export/import to avoid pk collisions).
  We have to see what happens when a node is re-imported with more log entries: we should import the new log entries based on their UUID. A new ticket should be created for this.
- Catch `UnroutableError` for orphaned processes in `verdi process` calls [#2445](sphuber) Fixes #2443
  If the RabbitMQ task of a running process is lost before it is completed, it will never complete, but any `verdi process` commands that try to reach it, such as `pause`, `play` and `kill`, will throw an `UnroutableError`. This exception should be caught and an error message printed instead.
- Fix incorrect call in `StructuresCollection.find` of `MpdsImporter` [#2442](sphuber) Fixes #2441
  The `find` method requires a `query` as a first argument, which was not being passed.
- Fix bug in `delete_nodes` when passing pks of non-existing nodes [#2440](sphuber) Fixes #2439
  The `aiida.manage.database.delete.nodes.delete_nodes` function was not checking whether the pks that were passed correspond to existing nodes, which would cause the line that fetches the repository folders, after the graph traversal queries had been completed, to except. To fix this, we first verify that the passed pks exist and, if not, print a warning and discard them.
- Fix import error of `md5_file` in `CifData` and `UpfData` [#2438](sphuber) Fixes #2437
  The recent move of `md5_file` from `aiida.common.utils` to `aiida.common.files` was not caught in some of the methods of these classes, because they imported the whole module `aiida.common.utils` and used the function as `aiida.common.utils.md5_file`.
- Fix description of `contains` filter in QB docs [#2429](borellim) The `contains` filter operator in the `QueryBuilder` takes a list of values, not an individual value. This is confirmed by in-source documentation in `aiida/orm/implementation/sqlalchemy/querybuilder.py`.
- Add exception for `calculation.job.JobCalculation` process type inference [#2428](sphuber) Fixes #2406
  In the migration following the provenance redesign, the process type had to be inferred from the existing type string for `JobCalculation` nodes. The function `infer_calculation_entry_point` did not account for literal `calculation.job.JobCalculation.` type strings being present and would return `.JobCalculation` as the fallback process type. However, it makes more sense to just have an empty process type string in this case, as would be the case for a base `CalcJobNode` instance.
- Prevent `verdi` from failing on empty profile list [#2425](astamminger) Resolves #2424
  Added a simple check for an empty profile list to the `verdi` base command, so that it only attempts to load a profile if the config's profile list is not empty (the profile will be set to `None` otherwise, as is already done for the config if `get_config()` throws an exception). I added a small testcase as well to check for this issue in the future.
- Changes to `Data` class attributes and `TrajectoryData` data storage [#2422](asle85) Fixes #201
  Added migrations and tests for `TrajectoryData` symbols (moved from a numpy array to an attribute). In the SqlAlchemy migrations and tests I used `load_node`; to discuss whether we want to rewrite it to avoid using aiida functionality (e.g. to get and delete numpy arrays).
- Export/import of extras [#2416](yakutovicha) Fixes #1761
- Extras are now exported together with other data
- Extras are now fully imported if the corresponding node did not exist
- In case the imported node already exists in a database the following
logic may be chosen to import the extras:
- keep_existing (default): keep extras with different names (old and imported ones), keep old value in case of name collision
- update_existing: -/-/-, keep new value in case of name collision
- ask: -/-/-, ask what to do in case of the name collision
- mirror: completely overwrite the extras by the imported ones
- none: keep old extras
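The collision-handling modes listed above can be sketched with plain dicts. This is a simplified model of the documented behaviour, not the actual import implementation (which operates on node extras); the interactive `ask` mode is omitted:

```python
def merge_extras(old, imported, mode):
    """Merge imported extras into existing ones, per the documented modes.

    Simplified sketch over bare dicts; `ask` (interactive) is not modelled.
    """
    if mode == 'none':
        return dict(old)          # keep old extras untouched
    if mode == 'mirror':
        return dict(imported)     # completely overwrite with imported ones
    if mode == 'keep_existing':
        merged = dict(imported)
        merged.update(old)        # old value wins on name collision
        return merged
    if mode == 'update_existing':
        merged = dict(old)
        merged.update(imported)   # imported value wins on name collision
        return merged
    raise ValueError('unknown mode: {}'.format(mode))

old = {'a': 1, 'b': 2}
imported = {'b': 20, 'c': 30}
print(merge_extras(old, imported, 'keep_existing'))
print(merge_extras(old, imported, 'update_existing'))
```

In both `keep_existing` and `update_existing`, extras with non-colliding names from both sides survive; only the collision winner differs.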
- Freeze pip version to 18.1 in Travis [#2415](CasperWA) Hack-fix for issue #2414.
  Also add `.vscode` to `.gitignore`.
- Export/import of comments (#1760) [#2413](CasperWA) This fixes #1760.
  Comments can now be exported/imported using export-version 0.4.
  At this stage of the implementation, all comments pertaining to a node chosen for export will be exported. Comments cannot be exported by themselves, nor is a `Comment` a valid entity to export directly, i.e. comments can only be exported indirectly through nodes. There is a suggestion to add a flag to ex-/include comments when exporting.
  If multiple users add comments to a node, all users will be exported recursively.
  Examples of how data.json and metadata.json might look have been updated with `Comment` in the documentation.
  An additional pair of joins has been added to the QueryBuilder:
  - `with_comment` to join a `Comment` to a `User`
  - `with_user` to join a `User` to a `Comment`
  These, along with a similar pair of joins for `Node`, have been added to the table of joining entities in the documentation.
  Lastly, `Comment` can now be imported directly from `aiida.orm`.
- Update RabbitMQ docs for MacPorts [#2411](ltalirz)
- Fix blocking bug on new SQLAlchemy migration tests [#2405](giovannipizzi) The problem was that the session was not properly closed. I am also moving some logic around and adding a couple of useful methods that are reused in all tests.
  This fixes #2404
- Fix bug in the setup of a profile through `verdi setup/quicksetup` [#2395](sphuber) Fixes #2394
  When setting up a profile that already exists, `verdi` is supposed to ask the user to confirm whether they want to reuse the information of the existing profile or change it. However, instead of asking for confirmation, the command was prompting, causing the user to always have to re-enter the information.
- Necessary changes to properly export and import log records [#2393](szoupanos) This PR addresses issue #1759
- Add framework for migration tests in SQLAlchemy [#2392](giovannipizzi) This fixes #2391. In particular, it adds a framework for testing migrations in SQLAlchemy/Alembic, and ports one test from Django, also to act as an example and to verify that the migration testing is properly working.
  I've also removed old files that were running some tests, but were actually just running on a set of example migrations and not on those of AiiDA.
- Add a transaction context manager to backend [#2387](sphuber) This allows one to group operations that will be rolled back if the context is exited with an exception. This lays the groundwork for implementing `Node` as part of the new backend system, as links, caches, etc. will have to be done in a transaction.
- Ensure that configuration options are considered for logging config [#2375](sphuber) Fixes #2372
  The configuration knows various options to change the logging configuration; however, these were not respected, for two reasons:
  - Logging configuration was not lazily evaluated
  - Globally configured options were not agglomerated
  The first problem was caused by the fact that the logging configuration is evaluated upon loading the `aiida` module, at which point the profile is not necessarily loaded yet, causing the `get_config_option` functions to return the option defaults. The solution is to have the dictionary lazily evaluated by using lambdas, which are resolved when `configure_logging` is called. Finally, we make sure this function is called each time the profile is set.
  The second problem arose from the fact that if a profile is defined, `get_config_option` only returned the config value if it was explicitly set for that profile, and otherwise it would return the option default. This means that if the option was defined globally for the configuration, it was ignored. This is now corrected: if the current profile does not explicitly define a value for the option but it is globally defined, the global value is returned.
- Various bug and consistency fixes for `CifData` and `StructureData` [#2374](sphuber) Fixes #2373
  Bug fixes:
  - `CifData.has_partial_occupancies`
  - `CifData.has_unknown_species`
  Added properties:
  - `CifData.has_unknown_atomic_sites`
  Consistency changes:
  - `Kind.is_alloy` -> property
  - `Kind.has_vacancies` -> property
  - `StructureData.is_alloy` -> property
  - `StructureData.has_vacancies` -> property
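The method-to-property conversions above mean call sites drop the parentheses, e.g. `structure.is_alloy` instead of `structure.is_alloy()`. A generic illustration with a toy class (this is not AiiDA code; the alloy criterion shown is only a plausible stand-in):

```python
class Kind:
    """Toy stand-in: `is_alloy` is now a property, accessed without calling."""

    def __init__(self, symbols):
        self.symbols = symbols

    @property
    def is_alloy(self):
        # More than one symbol on a kind means an alloy site (illustrative).
        return len(self.symbols) > 1

kind = Kind(symbols=['Fe', 'Ni'])
print(kind.is_alloy)  # True; old-style `kind.is_alloy()` would now raise TypeError
```

Code written against 0.12.x that still calls these as methods will fail with `TypeError: 'bool' object is not callable` and must be updated.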
- Add support for indexing and slicing in `orm.Group.nodes` iterator [#2371](sphuber) Fixes #2370
  This allows one to directly get a single node, or a slice of all the nodes, contained within a `Group`, instead of having to iterate over it to get the nodes of interest.
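Index and slice access over a lazily evaluated node iterator can be implemented along these lines (a generic sketch using `itertools.islice`, not the actual `orm.Group.nodes` code):

```python
from itertools import islice

class LazyNodes:
    """Iterate lazily, but also support nodes[3] and nodes[1:3]."""

    def __init__(self, factory):
        self._factory = factory  # callable returning a fresh iterator

    def __iter__(self):
        return self._factory()

    def __getitem__(self, value):
        if isinstance(value, slice):
            # Materialize only the requested window of the iterator.
            return list(islice(self._factory(), value.start, value.stop, value.step))
        result = next(islice(self._factory(), value, None), None)
        if result is None:
            raise IndexError(value)
        return result

nodes = LazyNodes(lambda: iter(['n0', 'n1', 'n2', 'n3']))
print(nodes[2])    # n2
print(nodes[1:3])  # ['n1', 'n2']
```

Note that each access restarts the underlying iterator, so this trades repeated query cost for not loading the whole collection up front.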
- Add functionality to delete individual Log entries to collection [#2369](sphuber) Fixes #2368
  Also fixed a bug in the `delete` method of the `AuthInfo` and `Comment` collections that was not caught because there were no tests: if the entry that was asked to be deleted did not exist, no exception was raised, even though the implementation was expected to raise one.
- Add support for compound projection `state` in `verdi process list` [#2367](sphuber) Fix #2366
  Also set it as one of the default projections.
- Merge develop into provenance_redesign and update pre-commit dependencies [#2365](sphuber) The original plan was to just merge `develop` into `provenance_redesign`, since it contains some crucial bug fixes. However, tests stopped passing due to various issues with incompatible versions of pre-commit tools and python versions. I added these changes on top of the merge commit and fixed a whole slew of new linter warnings that came as a result of upgrading `pylint` to version 2.0, which only supports python 3.
- Reenable the Travis linter pre-commit test [#2364](sphuber) Fixes #2362
  The release of ruby `gem==3.0.0` in December 2018 broke the `pre-commit` package because it used the `--no-ri`/`--no-rdoc` flags that were deprecated in that gem release. These flags were replaced by the new `--no-document` flag in `pre-commit==1.13.0`, which allows us to reenable the Travis linter that had been temporarily disabled.
- Replace call to deprecated `set_` methods with `set_option` [#2361](sphuber) Fixes #2360
  The `Node._set_internal` method was still calling deprecated `set_` methods for options of the `CalcJobNode` sub class. These should now be set through the explicit `set_option` method. The `JobProcess` was likewise violating this rule when setting the options of the underlying `CalcJobNode` it is using as a storage record.
- Fix issue 1439: remove wrong path [#2359](zhubonan) Fix issue #1439, removing references to, or implication of, putting plugin files into the sub-packages of `aiida_core`. In addition, for both tutorials I included a brief example of the plugin project directory structure. @giovannipizzi Is there any other reference you know is left out?
- Fix bug in `CalcJobNode.get_desc` [#2353](sphuber) Fixes #2352
  The function still called `CalcJobNode.get_state` with the argument `from_attribute`, which has recently been removed. Originally, the state was stored in a separate database table and had a proxy as an attribute in the attribute table. The calc state table has been removed, leaving the attribute as the only source of the state.
- Fixed issue with wrong file mode for python3 [#2351](muhrin) The log file to report an integrity violation was not opened with the correct mode and would complain about expecting a bytes type instead of str. Fixed by setting the mode to `w+`.
- Docs: update the example provenance graph images in concepts section [#2350](sphuber) The original .svg files are also included. They were generated in Inkscape but saved as plain SVG files. After that they were exported as PNG files. With the `convert` Unix utility, the images were rescaled to 800 px as follows: `convert filename.png -resize 800 filename.png`
  This will rescale the image to 800 px wide, maintaining the aspect ratio and writing it to the same file.
- Refactor the management of the configuration file [#2349](sphuber) Fixes #2348
  The configuration file is now abstracted by the `Config` class. Loading the configuration through `aiida.manage.load_config` will now return a `Config` instance, which internally keeps the contents of the configuration file of the AiiDA instance. Altering the config will go through this object.
  All configuration related code, such as settings and methods to set up the configuration folder for a new AiiDA instance, is organized in the `aiida.manage.configuration` module.
- Fixed 1714: expose_inputs deepcopy default value [#2347](muhrin) Allowing deepcopy of a stored node to support this use case. See the bug report https://github.com/aiidateam/aiida_core/issues/1741 for details, but simply put: if you expose the inputs of a workchain with a stored default, plumpy will try to deepcopy that default value, which was excepting because of a ban on deepcopying stored nodes.
- Reorganised some SQL related commands [#2346](muhrin) Moved from `QueryManager` to the corresponding backend classes. The longer term view is probably that `QueryManager` gets dropped completely and more backend specific methods are moved into the `Backend` class itself. This way we have a single point where we know to find such methods.
- Use `ProfileParamType` in `verdi` and expose in `ctx.obj` [#2345](sphuber) Fixes #2344
  The `verdi` entry point now uses the `ProfileParamType` for its `-p` option, which enables tab-completion for the profile option. In addition, we add the loaded profile to the `obj` attribute of the click context `ctx`. This way, when a `verdi` sub command needs the active profile, one can simply use the `@click.pass_context` decorator to access the profile through `ctx.obj.profile`.
- Implement `verdi database integrity` endpoints for links and nodes [#2343](sphuber) Fixes #2326
  These two new endpoints will scan the database for invalid links and nodes by running prepared SQL statements. If any violations are found, they will be printed to the terminal. For now there are no options implemented to apply a patch to remove the violations.
- Update export schema version and temporarily disable export [#2341](sphuber) Fixes #2340
  The migration after the provenance redesign has been merged into `provenance_redesign`. To avoid people importing "dirty" export files into a cleanly migrated database, the export schema version is upped. Additionally, creating new export archives with the latest version is disabled, until the export procedure has been verified to abide by the new rules in place after the provenance redesign.
- Add migration after the provenance redesign [#2336](sphuber) Fixes #2178
  Tested on 5 production databases:
  - COFS (MaterialsCloud)
  - SSSP (MaterialsCloud)
  - 2D (MaterialsCloud)
  - 3DD (personal)
  - SDB (personal)
- Implement `verdi database migrate` [#2334](sphuber) Fixes #2332
  Up till now, the triggering of a database migration happened through separate interfaces for the two different backends. For Django, if the database schema did not match the code's, a warning message would be printed asking the user to execute a stand-alone python script that was a modified Django `manage.py` script. For SqlAlchemy, the user was automatically prompted with an Alembic migration.
  Here we unify the migration operation by making it go through `verdi`, through the endpoint `verdi database migrate`. The error message upon an outdated database schema and the commands to be executed are now identical for both backends.
- Add test for availability of sudo in quicksetup [#2333](ConradJohnston) Partially addresses #1382
  Quicksetup needs to access the Postgres database, but does not know anything about the system setup. It initially tries to see if it is the postgres superuser, but if it is not, it tries to become the Postgres superuser via sudo. This is the "backstop" position and relies on the user having a modern Linux-like system. If, however, the sudo command is not available to this user at all (for example, perhaps the user is on an HPC cluster with the command hidden), a system error will be raised and quicksetup will fail ungracefully.
  This commit adds a test to see if sudo can be found before attempting to use it, prints a warning if not, and then falls back to the 'Postgres not detected' behaviour.
- Docs reorganization - step 2 (installation section) [#2330](ltalirz) This is my go at reorganizing the installation section of the docs to make it more intuitive and efficient to use.
  - quick install now contains all necessary information to actually set up AiiDA
  - foldable sections with instructions for different operating systems keep the quick install page short
  - detailed instructions for different OS's retained under "prerequisites" section and linked directly from quick install
  - OS-specific sections on postgres and rabbitmq merged into corresponding "prerequisites" sections
  Note: I added the `provenance_redesign` branch to RTD, i.e. you'll be able to inspect the docs there (after the PR is merged).
Ensure QueryBuilder return frontend entity of DbLog and DbComment [#2325](muhrin) Fixes #2321
-
Process queues are now removed when a process terminates [#2324](muhrin) Fixes #2269
Change the way that validation of ports is done which now also shows which port a validation error came from
Also changed the process tests to use close() because it is illegal to have two active process instances that refer to the same underlying process instance (i.e. with the same id)
-
Document the Windows Subsystem for Linux [#2319](zhubonan) Fixes #2256
Added guide for installing AiiDA on WSL in the documentation. I have tested the installation and there are a 2 failures (file modes with ssh transport and profile detetion) and 7 error out of 1051 tests. Nevertheless, I think overall it is OK to install on WSL if the user just want to try out on a windows machine especially when they don't have enough RAM to support a virtual machine.
- Upgrade prospector [#2317](ltalirz) prospector 1.1.5 finally works with pylint <2
- Document how to enable tab completion in conda [#2316](ltalirz)
- Various speedups for verdi cmdline [#2315](giovannipizzi) Fixes #376 and fixes #2261
  In particular when completing tests. Thanks to @ltalirz. This fixes #376 (again...). Fixes include:
  - using a new instance of the reentry manager that prevents automatic reentry scans
  - removing unnecessary with_dbenv decorators from 'verdi devel tests' and from the TestModuleParamType of click
  - preventing unnecessary imports of plumpy, which is a bit slow to import
  Also, we fixed the place where AIIDADB_PROFILE is set in the main `verdi` click command, to set it also when a default profile is used (this was probably the reason for the unneeded with_dbenv calls).
- Add documentation for plugin fixtures [#2314](zhubonan) Added documentation about writing `pytest`-style tests using the `aiida.utils.fixtures` module (#822). I also included a guide with examples for migrating existing tests that are sub-classes of `AiidaTestCase` to run with pytest.
- Fixes #1249: Removed pika log level [#2312](muhrin) This was added before, see #1249, but this no longer happens, as we're using a separate thread for the pika event loop and, what's more, the issue has, I believe, been resolved with pika 1.0 which we now use.
- Remove `get_aiida_class` [#2309](muhrin) Fixes #2237
  This is part of the procedure that @szoupanos is leading to convert to automatically generated dummy models for the `QueryBuilder`. Replaced all instances with either get_backend_entity or get_orm_entity (as appropriate).
  Also changed the backend group to be independent of the ORM, because the iterator was getting the orm class and returning that (which required it to access up to the ORM level). Now it just returns backend entities, which are in turn converted to ORM entities by the orm.Group. This also means you can't call `len(group.nodes)`, because this is not allowed for iterators, but the same can be achieved with `len(list(group.nodes))`.
  Removal of `get_aiida_class` from QueryBuilder and addition of `DbModel`-to-`BackendEntity` and `BackendEntity`-to-`OrmEntity` convertors.
- Migrate the type and process type string of built in calculation plugins [#2308](sphuber) Fixes #2306
  The built in calculation entry points were recently moved and renamed, but no migration was put in. This requires renaming the node types, process types and the input plugin attribute of code instances.
- Move requirements from setup_requirements.py to setup.json [#2307](ltalirz) fix #2241
  This moves all dependency information from the `setup_requirements.py` python file to a `setup.json`.
  - this makes it easier for plugin developers to process dependencies of aiida-core
  - once merged, aiida-core can easily be registered on the aiida-plugin-registry. This will allow us to implement automatic checks on collisions of entry points
  - add pre-commit hook to update version number in `setup.json` (necessary to keep it there for the registration in the registry)
  - remove pylint blacklist from `.pylintrc`; the only blacklist is now in `.pre-commit-config.yaml`
  - remove `setup.py` from blacklist
  - remove `dev_sphinxext` extra (contained only pytest, made part of `testing`)
- Fixes structure visualisation end point [#2305](waychal) Fixes #1763
- Add provenance docs [#2302](asle85) Added provenance documentation file and figures
- Move practical developer docs to AiiDA wiki [#2297](ltalirz)
- Improve robustness of parsing versions and element names from UPF files [#2296](astamminger) Fixes #2228
  Changed parsing behavior such that regular expressions for elements and versions are now matched against the whole file instead of matching line by line. This makes the parsing process more robust and allows for parsing desired quantities independent of their actual position in the UPF file (see for instance the additional header line mentioned in issue #2228). Since re.search() returns the first match found in the string, there is no major performance impact expected compared to the previous implementation.
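The difference between line-by-line and whole-file matching can be seen with a short example: `re.search` over the full text finds a field even when unexpected header lines precede it. The snippet and patterns below are illustrative, not the actual UPF parser's regular expressions:

```python
import re

# A UPF-like snippet with an extra vendor header line before the element field.
upf_text = """<UPF version="2.0.1">
  extra vendor header line
  element="Si"
</UPF>"""

# Matching against the whole text: the field's position in the file is irrelevant.
element = re.search(r'element\s*=\s*"([A-Za-z]{1,2})"', upf_text)
print(element.group(1))  # Si

version = re.search(r'version\s*=\s*"([\d.]+)"', upf_text)
print(version.group(1))  # 2.0.1
```

A line-by-line parser that expected `element=` on a fixed line would have missed the field here, which is exactly the failure mode the PR addresses.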
- Document the purpose and use of `clean_value` [#2295](ConradJohnston) Fixes #2038 and #1498
  - Update the docs to explain the use of clean_value and its behaviour.
  - Update the clean_value doc string to better explain its purpose.
  - Change collection imports from, for example, `import collections.Iterable` to `from collections import Iterable` to comply with a forthcoming deprecation that will break these imports in Python 3.8.
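For reference, the long-term safe form imports the ABCs from `collections.abc`, available since Python 3.3; the plain `collections` aliases emitted deprecation warnings and were removed entirely in Python 3.10:

```python
# Deprecated and removed in Python 3.10:
#   from collections import Iterable
# Forward-compatible form:
from collections.abc import Iterable

print(isinstance([1, 2, 3], Iterable))  # True
print(isinstance(42, Iterable))         # False
```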
- Removed wsgi file/configuration/documentation for django application [#2293](waychal) Fixes #415
- Ensure correct types of `LinkTriples` returned by `Node.get_stored_link_triples` [#2292](sphuber) Fixes #2291
  The `Node.get_stored_link_triples` was using `LinkTriple` instances where the type of the `link_type` element was not `LinkType` but string, as it was directly using the string value returned by the query builder for the `type` column of the `DbLink` table. By wrapping this value in `LinkType`, the returned `LinkTriples` once again have the correct type for all their elements.
Temporarily lock versions of
kiwipyandpika[#2290](sphuber) Fixes #2284A new patch version v0.3.11 of
kiwipywas released that breaks functionality, so until that is fixed, we lock the version. We do the same forpikawhich requiresv1.0.0b1. -
Documented optional graphviz dependency and removed use of os.system() in draw_graph [#2287](ConradJohnston) Fixes #835 and #2285.
The use of verdi graph to generate a plot of part of the provenance graph requires the graphviz package, which was not documented other than in the source code. This pull request increases the visibility of this dependency in the docs and adds a more human-parsable error message suggest when it could be the case that graphviz is not installed.
Also replaces the use of os.system() in aiida.common.graph with subprocess.call to prevent shell script from being injected via verdi graph generate.
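Passing an argument list to `subprocess.call` avoids the shell entirely, so metacharacters in a file name cannot be executed as commands. A generic illustration of the fix (here the list is passed to the python interpreter rather than graphviz's `dot`, so the example is self-contained):

```python
import subprocess
import sys

# os.system('dot -Tpdf %s' % filename) goes through a shell, so a filename
# like "graph.dot; rm -rf ~" would execute the injected command.
filename = 'graph.dot; echo injected'

# With an argument list there is no shell: the whole string is delivered as
# one harmless argument to the child process.
returncode = subprocess.call(
    [sys.executable, '-c', 'import sys; print(sys.argv[1])', filename])
print(returncode)  # 0
```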
- Skip test for unicode folder names if file system encoding is not set to unicode [#2283](astamminger) Fixes #294
  Since all tests run as intended even though the file system encoding is not necessarily set to unicode (ran tests for locales `de_DE.UTF-8` and `de_DE.iso88591`), it should be safe to just skip this test if the file system encoding differs from unicode.
- Docs reorganization - step 1 [#2279](ltalirz) Reorganization of documentation top level as discussed with @sphuber
- Specify versions of python for aiida [#2277](ltalirz) Specify exactly which versions of python aiida is supposed to run on (see p.6 of the presentation by @dev-zero on python3 compatibility in aiida from Sep 13th 2018)
- Updated documentation about the usage of CodeInfo [#2276](zhubonan) Fixes #1000
  Added notes in the plugin development documentation that multiple `Code`s and hence `CodeInfo`s may be used for a single `JobCalculation` to allow pre/post-processing or packing multiple jobs in a single submission if necessary. We should be able to close issue #1000 now.
- Escape newline chars in strings used to label nodes in visual provenance graphs [#2275](astamminger) Fixes #1874
- Raise exception when exporting a node without `path` sub folder in repository [#2274](yakutovicha) Fixes #2247
  Raise an exception when exporting a node which does not contain a 'path' folder in the file repository.
- Remove preceding '$' chars from code-blocks [#2273](astamminger) Fixes #1561
- Improved setup of ipython line magic %aiida [#2272](zhubonan) Fixes #1562
  Instead of telling users to edit 'ipython_config.py', which does not exist by default, we now suggest simply adding a python file in the startup folder of ipython to register the line magic %aiida. A new file is added serving as an example.
- Check for existence of groups on import [#2270](lekah) This resolves #2074. Now, on import, the existence of groups with the same name is checked.
- Remove work around for postgres on Travis [#2266](sphuber) Fixes #1739
- Require recent pip version for pyproject.toml [#2264](ltalirz) fix #2262
- Changed type of uuids returned by the QueryBuilder to be unicode [#2259](lekah) Fixes #231 and fixes #1862
-
Add
--groupoption toverdi process list[#2254](sphuber) Fixes #2253This option allows a user to narrow down the query set to only those nodes that are within the specified group.
-
Retrieve computers through front end collection in
_presubmit[#2252](sphuber) Fixes #2251The implementation was using the backend collection to
geta given computer for the remote copy lists, but this method is not defined. Instead this operation should go through the front end collection. -
Allow existing profile to be overriden in
verdi setup[#2244](sphuber) Fixes #2243With the
--forceflag a user can suppress the exception that is raised when one tries to create an existing profile in non-interactive mode. With the flag the existing profile will be overridden with the specified values except for the profile UUID which will be kept the same as it is auto generate and can also not be specified by the user on the command line. -
Make
`verdi` callable as sub command [#2240](sphuber) Fixes #2239. Because the `verdi` group was not marked as callable without a subcommand, the `--version` option was broken as it would throw a `Missing command` error. Adding `invoke_without_command=True` to the `verdi` group declaration and making `--version` an eager option will ensure it works without specifying a sub command. A test has been added to test this functionality.
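The fix can be illustrated with a minimal click sketch; the group and command names below are illustrative, not AiiDA's actual CLI code:

```python
import click


@click.group(invoke_without_command=True)
@click.option('--version', is_flag=True, is_eager=True,
              help='Print the version and exit.')
@click.pass_context
def cli(ctx, version):
    """A group that, like `verdi`, must be callable without a sub command."""
    if version:
        click.echo('1.0.0')
    elif ctx.invoked_subcommand is None:
        # Without invoke_without_command=True, click would raise a
        # 'Missing command' usage error before this body ever ran.
        click.echo(ctx.get_help())


@cli.command()
def status():
    """An example sub command."""
    click.echo('all good')
```

Invoking `cli --version` now prints the version instead of erroring out, while `cli status` still dispatches to the sub command as before.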
-
Return correct ORM AuthInfo class for
`JobCalculation._get_authinfo` [#2235](sphuber) Fixes #2234. The method returned the backend AuthInfo class, which does not have the `get_transport` method that would be called for example in the `_get_transport` method of the `JobCalculation` class. The fix is to wrap the backend entity in the ORM class by calling `AuthInfo.from_backend_entity`. Added two regression tests for this bug.
-
Fix get_metadata name of AuthInfo class (#2221) [#2222](PhilippRue) Fixes #2221. Fixes this error when running `verdi computer test ...`:

```
Error: * The test raised an exception!
** Full traceback:
Traceback (most recent call last):
  File ".../aiida_core/aiida/cmdline/commands/cmd_computer.py", line 561, in computer_test
    succeeded = test(transport=trans, scheduler=sched, authinfo=authinfo)
  File ".../aiida_core/aiida/cmdline/commands/cmd_computer.py", line 145, in _computer_create_temp_file
    workdir = authinfo.get_workdir().format(username=remote_user)
  File ".../aiida_core/aiida/orm/authinfos.py", line 136, in get_workdir
    metadata = self.get_metadata()
  File ".../aiida_core/aiida/orm/authinfos.py", line 122, in get_metadata
    return self._backend_entity.get_metadata()
AttributeError: 'DjangoAuthInfo' object has no attribute 'get_metadata'
Some tests failed! (1 out of 4 failed)
```
-
Speed up creation of Nodes in the AiiDA ORM [#2214](ltalirz)
- move AiiDAManager from `aiida.work` to `aiida.manage`
- introduce caching of `.objects` in `aiida.orm.entities.Collection`
- cache default user in `aiida.orm.user.User.Collection` (fix #2216)
Benchmark on MacBook Pro, python3:
- before: 58.5s to create 10k nodes (57.2s spent inside `Node.__init__`)
- after: 2.8s to create 10k nodes (1.7s spent inside `Node.__init__`)
The timing of 58.5s is significantly worse than what we wrote down during the tests with @szoupanos and @giovannipizzi (which was ~10s for 10k nodes), i.e. probably someone broke the caching.
Anyhow, the target we set then was: 2s for 10k nodes, which we have reached now on python3 (it's the 1.7s that should scale linearly with the number of nodes).
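The kind of collection caching introduced here can be sketched generically. This is not AiiDA's actual implementation, just an illustration of memoizing one collection object per entity class so that repeated node construction stops paying the lookup cost:

```python
class Collection:
    """A per-entity-class collection, assumed expensive to construct."""

    _cache = {}  # class-level cache, keyed by the entity class

    def __init__(self, entity_class):
        self.entity_class = entity_class

    @classmethod
    def get_cached(cls, entity_class):
        # Reuse one Collection instance per entity class instead of
        # rebuilding it on every `.objects` access.
        try:
            return cls._cache[entity_class]
        except KeyError:
            collection = cls._cache[entity_class] = cls(entity_class)
            return collection


class Entity:
    @classmethod
    def objects(cls):
        return Collection.get_cached(cls)


class Node(Entity):
    pass
```

With this pattern, repeated calls like `Node.objects()` return the identical cached instance, which is where most of the per-node construction time was being lost.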
-
fix django 1.11 migrations [#2212](ltalirz)
- update django settings (cleanup + adaptation to django 1.11) together with @giovannipizzi
- fix migration of UUID column for databases created with previous schema versions
- make sure we give unicode strings to django (not byte strings)
- add test to check that uuid migration does not change uuids
- add test to check that no django schema migrations remain
Will look into adding sqlalchemy unique constraints on uuid column in a separate PR
-
Django 1.11 [#2206](ltalirz)
- update django from 1.8.19 to 1.11.16
- remove django-extensions dependency
- move to UUIDField of django 1.11 + add migration
Note: for the moment, I've replaced the uuid fields also in old migrations. One could try to use a `CharField(editable=False, blank=True, max_length=36)` there to be safe.
- add unique constraints on all uuid fields
- move `get_filter_expr_from_column` from django/sqla to interface
- delete django-specific `get_all_parents` function (was only used in tests)
-
psycopg2-binary 2.7.4 => psycopg2 2.7.6 [#2202](ltalirz). "The binary package is a practical choice for development and testing but in production it is advised to use the package built from sources." [1] https://pypi.org/project/psycopg2-binary/
-
Drop the
`DbCalcState` table [#2198](sphuber) Fix #2197. The `DbCalcState` table was introduced both to keep the current job state of `JobCalculations` and to function as a sort of locking mechanism, ensuring only one process was operating on a job at a time by enforcing progressive state changes on the database level. In the new system, this responsibility is taken care of by the `Runner` class that is running the task corresponding to the job calculation. The calculation state is now merely set on the node for informational purposes to the user. Therefore, the simpler solution is to set the job state as a simple attribute and remove the `DbCalcState` table. This also means that the `IMPORTED` calc state is now obsolete, as it was needed to prevent imported calculations from being picked up by the daemon, which used to determine which calculations to run based on their calculation state.
-
Merging
`develop` into `provenance_design` [#2191](sphuber)
Fix bug in
`parse_formula` for formulas with leading or trailing whitespace [#2186](sphuber) Fixes #2185. If a formula contains leading or trailing whitespace, one of the parts over which the function loops will be empty, and so the regex match `m` will be `None`. To prevent an `AttributeError` from being thrown when `group` is called on `None`, we check for it and continue.
-
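The guard can be sketched with a plain `re` example; the element-count pattern and function name below are hypothetical, not the actual `parse_formula` code:

```python
import re


def parse_parts(formula):
    """Count element symbols in a space-separated formula string."""
    counts = {}
    # Leading/trailing whitespace makes split() yield empty parts,
    # for which the regex match below is None.
    for part in formula.split(' '):
        m = re.match(r'([A-Z][a-z]*)([0-9]*)$', part)
        if m is None:
            # Empty or unparsable part: skip it instead of calling
            # m.group() on None and raising AttributeError.
            continue
        symbol, count = m.group(1), m.group(2)
        counts[symbol] = counts.get(symbol, 0) + int(count or 1)
    return counts
```

For example, `parse_parts(' Si O2 ')` handles the padded input gracefully instead of crashing on the empty leading and trailing parts.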
Compile the scripts before calling exec in
`verdi run` [#2166](sphuber) Fixes #2165. This extra step is necessary for Python modules like `inspect` to function normally. Without this change, running a script with `verdi run` that defines a workfunction and then runs it would fail, because `inspect` would fail to determine the path of the source file of the workfunction, i.e. the script that was passed to `verdi run`. This change fixes that.
-
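The difference can be sketched with plain `compile`/`exec`: passing the real file path to `compile` attaches it to the resulting code objects, so `inspect` can locate the source of functions defined in the script (with a bare `exec(string)` the filename would be an opaque `'<string>'`). This is a standalone illustration, not the `verdi run` code itself:

```python
import inspect
import os
import tempfile

# Write a small script to disk, standing in for the script path
# that `verdi run` receives on the command line.
with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as handle:
    handle.write('def my_function():\n    return 42\n')
    filepath = handle.name

with open(filepath) as handle:
    source = handle.read()

namespace = {}
# Compiling with the real file path is the crucial extra step.
exec(compile(source, filepath, 'exec'), namespace)

assert namespace['my_function']() == 42
# inspect can now resolve the source file of the executed function.
assert inspect.getsourcefile(namespace['my_function']) == filepath
os.unlink(filepath)
```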
Centralize the management of entities central to the loaded profile [#2164](sphuber) Fixes #2154 and fixes #2145
This PR contains a few commits that address the previous scattering of the creation and management of entities whose configuration depends on the loaded profile, which is now gathered in the `AiiDAManager` class. This ensures that entities like the RabbitMQ controllers and communicators, as well as runners, get the correct configuration based on the loaded profile, for example during testing.
-
Revert current database revision after migration unittests [#2162](sphuber) Fixes #2161
After running a migration unittest that changes the revision of the test database, one should ensure that after the test finishes the current revision is applied again.
-
Set default polling interval for Runners to 1 second [#2160](sphuber) In a recent change this parameter was corrected from 0 to 30 seconds to prevent the daemon workers from spinning the processors too much, but this really slows down the tests. Until resources like runners and communicators are created in a central place, that allows settings to be determined based on the active profile, e.g. with special settings for test profiles, we need to revert this to 1 second. This will allow the tests to run reasonably fast, without overloading processors too much.
-
Engine stability and communications improvements [#2158](sphuber) This PR will update
`plumpy` to version `0.11.4`, which underneath runs on `kiwipy` version `0.3.*`. The biggest change is that the communications of a `Runner` now happen on a separate thread, while all the real operations and callbacks are scheduled on the main thread. This should ensure that even under heavy load, a (daemon) runner remains responsive to heartbeats and remote procedure calls (RPCs) from RabbitMQ. The missing of heartbeats and the subsequent cutting of the connection by RabbitMQ spawned a whole host of other problems that will now no longer surface. Fixes #1994 Fixes #2088 Fixes #1748 Fixes #1426 Fixes #1821 Fixes #1822 Fixes #2157
-
Fix incorrect model attribute deref in SqlAlchemy implementation of Log [#2156](sphuber) Fixes #2155
The
`id` property was piping through to the `objpk` of the underlying model instead of the `id`.
-
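This class of bug is easy to illustrate: a front-end wrapper property must dereference the matching attribute of the backend model. The sketch below is illustrative, not AiiDA's actual `Log` implementation:

```python
class BackendLog:
    """Stand-in for the backend model, which has both its own `id`
    and a separate `objpk` column pointing at the logged node."""

    def __init__(self, id, objpk):
        self.id = id
        self.objpk = objpk


class Log:
    """Front-end wrapper around a backend entity."""

    def __init__(self, backend_entity):
        self._backend_entity = backend_entity

    @property
    def id(self):
        # The buggy version returned self._backend_entity.objpk here,
        # silently handing out the wrong identifier.
        return self._backend_entity.id

    @property
    def objpk(self):
        return self._backend_entity.objpk
```

Because both attributes are integers, the mix-up produces no type error, only wrong values, which is why it went unnoticed until #2155.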
Modularize
`verdi profile delete` [#2151](ltalirz) fix #2121
- add 3 options to `verdi profile delete`:
  - `--include-db`/`--skip-db`
  - `--include-repository`/`--skip-repository`
  - `--include-config`/`--skip-config`
- split into `delete_repository`, `delete_db`, `delete_from_config`
- move functions into `aiida.control.profile`
To do:
- skip deletion of DB in test (problem: cannot reproduce #2121 on my mac)
-
Upgrade the dependency requirement of SqlAlchemy [#2113](lekah) Fixes #465
This fixes the long-standing issue of upgrading SQLAlchemy. The way attributes stored in JSON are queried had to be chained. Additional tests were added to catch these kinds of errors earlier. In essence, the issue was that the type check and the value check were performed in arbitrary order. Now a case statement enforces the type check before the value check.
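The ordering issue can be sketched in plain Python: when filtering heterogeneous JSON values, comparing before checking the type can blow up (or, on the database side, produce a cast error), so the type check must be guaranteed to run first, which is what the SQL `CASE` construct provides. This is an analogy, not the actual SQLAlchemy query code:

```python
def matches(value, threshold):
    """Return True if `value` is numeric and greater than `threshold`.

    The isinstance check runs first; thanks to short-circuit `and`
    (playing the role of the SQL CASE statement) the comparison is
    never evaluated for values of the wrong type, such as strings
    or None, which would raise TypeError in Python 3.
    """
    return isinstance(value, (int, float)) and value > threshold


# Heterogeneous attribute values, as one finds in a JSON column.
attributes = [3, 'three', None, 7.5, [1, 2]]
selected = [a for a in attributes if matches(a, 5)]
# selected == [7.5]
```

Swapping the two operands of the `and` would crash on `'three' > 5`, which is precisely the arbitrary-ordering bug the case statement eliminates on the SQL side.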