Dataset schema (column, type, value range):

| column | type | range |
|---|---|---|
| instance_id | string | lengths 21 to 53 |
| repo | string | 188 classes |
| language | string | 1 class |
| pull_number | int64 | 20 to 148k |
| title | string | lengths 6 to 144 |
| body | string | lengths 0 to 83.4k |
| created_at | string (date) | 2015-09-25 03:17:17 to 2025-07-10 16:50:35 |
| problem_statement | string | lengths 188 to 240k |
| hints_text | string | lengths 0 to 145k |
| resolved_issues | list | lengths 1 to 6 |
| base_commit | string | lengths 40 to 40 |
| commit_to_review | dict | |
| reference_review_comments | list | lengths 1 to 62 |
| merged_commit | string | lengths 40 to 40 |
| merged_patch | string | lengths 297 to 9.87M |
| metadata | dict | |
instance_id: conan-io__conan-8985@145eaf3
repo: conan-io/conan
language: Python
pull_number: 8985
title: CMakeDeps build_requires full support
body:
Changelog: Feature: Introduced new options for the `CMakeDeps` generator to manage `build_requires`, even when the same package is declared as both a `require` and a `build_require`, avoiding the collision of the `config` CMake files, and to specify which `build_modules` should be included (e.g. the protobuf issue)
Docs: https://github.com/conan-io/docs/pull/2104
Closes #8303
Closes #7719
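
A minimal sketch of how a consumer recipe might use the new attributes described above (the attribute names come from this PR's tests; the package name, version, and suffix are purely illustrative):

```python
from conans import ConanFile
from conan.tools.cmake import CMakeDeps


class Consumer(ConanFile):
    settings = "os", "compiler", "build_type", "arch"
    # Same package in both contexts: library (host) and code-generator tool (build)
    requires = "protobuf/3.17.1"        # version illustrative
    build_requires = "protobuf/3.17.1"  # version illustrative

    def generate(self):
        deps = CMakeDeps(self)
        # Rename the files/targets generated for the build context to avoid
        # colliding with the host-context config files (suffix illustrative)
        deps.build_context_suffix = {"protobuf": "_BUILD"}
        # Include the build_modules coming from the build-context package
        # (e.g. the protoc helper macros) instead of the host-context ones
        deps.build_context_build_modules = ["protobuf"]
        deps.generate()
```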
created_at: 2021-05-21T11:10:04Z
problem_statement:
[question] 'build_modules' of `build_requires` are not included when building consumers with build/host profiles
I'm experimenting with the build and host context profiles and many of my builds are breaking due to the changes in how `build_requires` are handled. So I'm just trying to understand if this is intentional and whether I need to update my recipes.
For example, I have a tool which is used to generate source code from .xml files in the consumer's repository. The sources are then compiled into a static library and packaged. Since consumers only need this tool when building from sources, they have it defined as a `build_requires`. (Though since the code generator version can have an effect on the code, I'm wondering if it should actually be a normal `requires`?)
To simplify its usage for consumers, the tool package provides functions to consumers through a `build_module`. But now it seems that `build_module`s of `build_requires` dependencies are not included by _conan_basic_setup()_.
My current workaround is to `force_host_context=true` for these dependencies, but I'm not sure if this is the correct way to go about it.
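
For reference, a minimal sketch of how such a tool package typically exposes a build_module in Conan 1.x (package name, file paths, and the CLI name are hypothetical):

```python
import os
from conans import ConanFile


class CodegenToolConan(ConanFile):
    name = "codegen-tool"  # hypothetical tool package
    version = "1.0"
    exports_sources = "bin/*", "cmake/*"

    def package(self):
        self.copy("*", dst="bin", src="bin")
        self.copy("codegen-macros.cmake", dst="cmake", src="cmake")

    def package_info(self):
        # The generator includes this module in the consumer's CMake setup,
        # exposing helper functions that wrap the code generator.
        self.cpp_info.build_modules.append(os.path.join("cmake", "codegen-macros.cmake"))
        # Make the executable available on PATH when used as a build_requires
        self.env_info.PATH.append(os.path.join(self.package_folder, "bin"))
```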
[bug] cmake generator does not write build requirements info when using dual profiles
### Environment Details (include every applicable attribute)
* Operating System+version: macOS Big Sur
* Conan version: develop (1.33.0-dev)
* Python version: 3.8.5
### Steps to reproduce (Include if Applicable)
With this `conanfile.py`:
```python
from conans import ConanFile, CMake, tools


class BugConan(ConanFile):
    name = "bug"
    version = "0.1"
    license = "<Put the package license here>"
    author = "<Put your name here> <And your email here>"
    url = "<Package recipe repository url here, for issues about the package>"
    description = "<Description of Bug here>"
    topics = ("<Put some tag here>", "<here>", "<and here>")
    settings = "os", "compiler", "build_type", "arch"
    options = {"shared": [True, False], "fPIC": [True, False]}
    default_options = {"shared": False, "fPIC": True}
    generators = "cmake"

    def config_options(self):
        if self.settings.os == "Windows":
            del self.options.fPIC

    def build_requirements(self):
        self.build_requires("doctest/2.3.8")

    def source(self):
        pass

    def build(self):
        pass

    def package(self):
        pass

    def package_info(self):
        pass
```
Run `conan install . --profile:host=default --profile:build=default`
The generated `conanbuildinfo.cmake` does not contain `doctest` targets, but it does when `conan install . --profile=default` is used.
hints_text:
Hi @GordonJess
In theory, if that tool is an executable that is running in the current build machine, then it should be in the build context, not the host context.
You are right, it seems you hit some missing functionality. So far, the ``build_modules`` feature only works without contexts. When using contexts, the ``cpp_info`` is propagated only through the "host" context, but not the "build" one.
We already separated ``settings_build`` and ``settings_target``; it seems that we should keep extending the model for this use case. Having a first look, it doesn't look immediate or obvious, so we need to investigate what the best solution could be.
Hi @GordonJess
We are still polishing this feature and we are learning a lot from all these issues and your use-cases, so nothing is written in stone. That said, this is how I see this scenario:
* `build_modules` are included by default in the files that are consumed by the build-system; these `build_modules` usually contain information about the package being included in the build: paths to libraries, includes, definitions,... so it makes sense that only those from the _host_ context are available and included by default.
* your tool, like any other source generator, is a perfect example for a `build_requires`, it runs in the _build_ machine and it needs to take the settings from the `--profile:build` while generating the consumer's library. It will be executed in the `build()` method of the Conan recipe, using `self.run("your-tool")` or inside the build-system scripts. Conan will add the path to your executable to the `PATH` and the tool should run without any problem.
And now there are two important things that deserve more attention:
* **how your tool affects the packageID**: I would say that the XML files your tool uses to generate source files should be contained in the library itself; it is like a `.proto` definition (from Protocol Buffers): the file that describes the message is inside the library and the `protoc` tool used to generate the actual `.h/.cpp` files is a `build_requires`.
In your case, if those XML files are stored together with the rest of the `.cpp/.h` files from your library, then it is clear how to manage them, like any other source file.
If those XML files are stored together with the tool, then it belongs to a broader discussion about `build_requires`: whether they should affect the packageID, whether it is configurable,...
* let's say **your tool provides something** that the build-system is going to consume (probably the most typical scenario here is a compiler that provides a toolchain). This is still an open question and there are different approaches:
+ some packages take advantage of an environment variable Conan uses: `CONAN_CMAKE_EXECUTABLE`; they point this variable to their own CMake wrapper and use it to inject the toolchain. You can find this approach working in the [Android NDK recipe by bincrafters](https://github.com/bincrafters/conan-android_ndk_installer). I really don't like this approach: it couples the tool with the environment variable and it doesn't scale (it works only for CMake).
+ other packages have moved this file/functions to a `requires`, like the `protobuf` package in ConanCenter. This approach works because the recipe consuming it will **always** require and build-require the `protobuf` package. You need the tool `protoc` and you need to link with the `libprotobuf` library, both of them are inside the `protobuf` package.
+ a more flexible alternative is provided via the `user_info` attribute. The tool can add the path to the assets/resources/files it is providing in the [`package_info()` method](https://docs.conan.io/en/latest/reference/conanfile/attributes.html?highlight=user_info#user-info), and consumers can take that information from the [`self.user_info_build` attribute](https://docs.conan.io/en/latest/reference/conanfile/attributes.html?highlight=user_info#user-info-build).
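
A minimal sketch of the `user_info` / `user_info_build` approach mentioned in that last alternative (two recipes are shown together for brevity; the package, variable, and CLI names are hypothetical):

```python
import os
from conans import ConanFile


# In the tool's recipe (hypothetical "mytool" package)
class MyToolConan(ConanFile):
    name = "mytool"
    version = "1.0"

    def package_info(self):
        # Publish the path to the assets/templates the tool provides
        self.user_info.templates_dir = os.path.join(self.package_folder, "res", "templates")


# In the consumer's recipe, with mytool declared as a build_requires
class ConsumerConan(ConanFile):
    build_requires = "mytool/1.0"

    def build(self):
        # user_info_build holds the user_info of build-context requirements
        templates = self.user_info_build["mytool"].templates_dir
        self.run("mytool --templates {}".format(templates))  # illustrative invocation
```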
----
I know this is not an answer, I'm just enumerating alternatives, but without further information about your exact use-case I really don't know which one fits better with your libraries and your tool. Feel free to ask for more details about any of these approaches, and share the challenges or alternatives you face along the way, I'll be pleased to help (...be patient, I won't be available next week).
Thanks!
The thing I most liked about `build_modules` is that consumers automatically got access to any variables/functions provided by their `build_requires` dependencies without having to worry about checking certain attributes and passing them into cmake myself. I'm not sure that I agree that these should only be for dependencies in the `host` context. Something like the [FindDoxygen](https://cmake.org/cmake/help/latest/module/FindDoxygen.html) module for example is only providing helper-functions for a tool run on the build machine - much like my use case.
> Feel free to ask more details about any of these approaches, and share the challenges or alternatives you face in your way
Thanks! I've opened some issues (#7737, #7742) for the other problems I've come across so far using build/host profile. I've not yet found any workaround for these yet to be able to have something usable, but feels like it's almost there!
`FindDoxygen` is a really good example where CMake relies on the functionality of the `find_package` to get information about tools, basically, any code-generator tool is inside this category and we are suffering the problem with all of them: `protobuf (protoc)` probably is the other big player here.
Here are my thoughts, and I really don't know how to solve the situation without transferring all the responsibility to the user. For example, `doxygen` depends on `zlib`, we agree that the project you are generating can depend on `zlib` too. Conan generates all the `FindXXXX.cmake` files for all the dependencies in your _host_ context, so your project can find the `zlib` packages and link with it.
Your `CMakeLists.txt` contains something like this:
```cmake
# Get components for some tools
find_package(Doxygen)
# These are your actual dependencies and project
find_package(zlib)
...
target_link_libraries(yourlib PUBLIC zlib::zlib)
```
...but in the filesystem we can have only one file named `FindZLIB.cmake`; if we were running the generators for the packages in the _build_ context, we would override the `zlib` one.
Can we generate only `FindDoxygen.cmake` and forget about the dependencies? It could be a solution, but it won't work with the `protobuf` use case: here the package is the same for the library and the executable, both would generate `FindProtobuf.cmake`, each of them with different/incompatible content.
---
I really think this is a limitation in CMake, and there are some questions asking how they can work around this situation. CMake needs some kind of `find_tool` macro to disambiguate this scenario, that macro would return only tools like `tool::doxygen`, `tool::protoc` and not components you can link with.... and it should use a different `FindToolXXXX.cmake` file (very naïve proposal).
Being a limitation of CMake, I wonder if we should totally forget about it and generate all the files; some will override others and the build will fail, but it is the consumer's responsibility to take it into account and take the actions needed to bypass the problem.
---
For `protobuf` we were able to work around the limitation because the package in the _host_ context contains the functionality to detect `protoc` and return the executable that is already available in the PATH. But this situation doesn't hold for Doxygen: here we are not generating any `FindDoxygen.cmake` file (no `doxygen` package in the _host_ context) that could do the trick.
Hi, I think I'm struggling with the same issue when trying to cross-compile. I want to use the x86 `protoc` binary; my `conanfile.txt` looks something along the lines of:
```
[build_requires]
protobuf/3.11.4
[requires]
protobuf/3.11.4
[generators]
make
[options]
```
From what I can tell, there is no way to access the `protoc` binary programmatically after running:
`conan install . --profile:host .conan/profiles/arm --profile:build .conan/profiles/x86_64`.
I tried adding this to the conanfile, but the copied `protoc` is the arm version from the `requires` section and not from the `build_requires` one:
```
[imports]
bin, * -> ./bin
```
Any thoughts? Am I missing here something?
Hi, @tomerlevv
Build requires in your example (with a `conanfile.txt` and using two profiles) should be considered as tools provided by the environment, like `cmake`, `gcc`,... that you can already run from the command line (they are already in the PATH).
Conan provides some generators to get the information from those build-requires and give you a couple of files to activate and deactivate them; use `-g virtualenv` (or add it to your generators):
```
conan install . --profile:host .conan/profiles/arm --profile:build .conan/profiles/x86_64 -g virtualenv
```
Conan will generate files `activate.sh/bat/ps1` and `deactivate.sh/bat/ps1` that will _activate_ an environment with all the environment variables (and PATH) provided by your build-requires.
With this, you can add to your environment the executables for `protoc` (build machine) or any other build-require you might want to use in your build process, like a different CMake version or the compiler itself.
Thank you @jgsogo, it seems to be the direction I should take. Currently I am invoking `conan install` from my Makefile, generating a `conan.mak` file. In order to make the suggested solution work I would have to write my own generator, which I think I'll pass on for now.
Thanks anyway!
Back to the original question in this issue. Copying here the comment in https://github.com/conan-io/conan/issues/8303#issuecomment-757894304
> We've been talking about this issue (and related ones) and the path will be as follows:
>
> * Conan will (1.34?) generate a file with all the information from `cpp_info` in plain CMake format. This file will contain the information as plain as possible. It will be hard to define the name of all the variables, but the file will contain information from all the packages in _host_ and _build_ context that are needed to build the package. (Note.- Maybe it is not only one file, but one per requirement, TBD).
> * This file will be accessible only if using generator `CMakeDeps` (in fact, this generator will use that file/s to populate the targets). Generator `CMakeDeps` will be the recommended (only?) one in Conan 2.0 and we encourage everyone to try it and start using it to smooth the migration.
> * User can `include` that file from their `CMakeLists.txt` and will have access to all these variables.
> * Eventually, Conan will provide some abstraction layers on top of this file to provide targets for the components coming from the _build_ context (protoc, doxygen,...). And some convenient CMake functions (`find_package_build`) or overrides (new definition of `find_package`) that will help with these scenarios and make it possible some transparent integration of Conan with existing CMake files.
With this approach, build-modules of packages coming from the _build_ context will be available in that file. Those build-modules from the _build_ context probably won't be automatically included, but the paths to those build-modules will be available in a CMake variable.
In our case we avoided the above protobuf issue by splitting protobuf into two packages: `libprotobuf/<version>@company/stable`, containing only the static libraries (libprotobuf), and `protoc/<version>@company/stable`, containing only the `protoc` executable and a CMake helper that consumers can use to run the `protoc` executable to convert their `.proto` files into source/header pairs and compile them.
Then, we add `libprotobuf` to the `requires` and `protoc` to `build_requires`.
This works pretty well - the only issue is that the `protoc` package still needs to use the old `arch_build/os_build` mechanism in order for the CMake helper to be available in a cross-compile scenario. I've reported this bug [here](https://github.com/conan-io/conan/issues/8488) and also provided a simple example that reproduces the issue.
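A minimal sketch of the consumer side of that split (package names mirror the company-internal ones described above, so versions and references are purely illustrative):

```python
from conans import ConanFile


class ConsumerConan(ConanFile):
    settings = "os", "compiler", "build_type", "arch"

    def requirements(self):
        # Library part: linked into the host binaries
        self.requires("libprotobuf/3.11.4@company/stable")

    def build_requirements(self):
        # Tool part: runs on the build machine to generate sources
        self.build_requires("protoc/3.11.4@company/stable")
```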
Regarding whether `build_requires` should affect the `package_id` - maybe the package should tell for itself whether it can influence other packages' IDs?
For example, a Conan package containing tools like `cmake` or `doxygen` would not alter the ID of the package that `build_requires` it.
On the other hand, a Conan package containing a toolchain, like `android-ndk` or `emscripten`, would alter the ID of the package that `build_requires` it (it's not the same if you build your package with NDK r22 or NDK r21). However, such packages would also need the ability to impose settings on the build graph and maybe influence the package ID in that way.
This is exactly what I proposed [here](https://github.com/conan-io/conan/issues/8274).
This is because of the line `all_flags = cmake_dependencies(dependencies=self.deps_build_info.deps)`.
`self.deps_build_info` is empty when using dual profiles.
I've added logs in `model/conan_generator.py`:
With `--profile=default`:
>self._deps_build_info: <conans.model.env_info.DepsEnvInfo object at 0x105c68b50>
>self._deps_env_info <conans.model.env_info.DepsEnvInfo object at 0x105c68b50>
>self._env_info None
>self._deps_user_info DepsUserInfo(<class 'conans.model.user_info.UserInfo'>, {'doctest': {}})
>self._user_info_build None
With dual profiles:
>self._deps_build_info: <conans.model.env_info.DepsEnvInfo object at 0x1061e58b0>
>self._deps_env_info <conans.model.env_info.DepsEnvInfo object at 0x1061e58b0>
>self._env_info None
>self._deps_user_info DepsUserInfo(<class 'conans.model.user_info.UserInfo'>, {})
>self._user_info_build DepsUserInfo(<class 'conans.model.user_info.UserInfo'>, {'doctest': {}})
It seems that generators do not support `user_info_build`.
Hi, @theodelrieu
This is probably a limitation of the `cmake` generator. With the current syntax there is no way to differentiate between a package coming from _host_ context and the same package from _build_ context if both have the same name (i.e.: `protobuf`). It would require variables like `CONAN_USER_<PKG-NAME>_<VAR-NAME>` and `CONAN_USER_<CONTEXT>_<PKG-NAME>_<VAR-NAME>`.
Doable, but not sure if we want to go that way. If the preferred generator `cmake_find_package[_multi]` doesn't provide this functionality, IMHO we shouldn't add it to a generator that might be removed in Conan v2.0.... or at least we shouldn't add it before implementing it in the recommended generator.
But we totally need to unblock this issue, there are many related to this same problem or closer ones. I really think there is something missing in CMake itself, but we need to bet big and offer a way to work around current limitations. I'm thinking about opening an RFC with some risky proposal.
> If the preferred generator cmake_find_package[_multi] doesn't provide this functionality
Indeed, the `Finddoctest.cmake` is only generated when `--profile default` is used.
I don't know how I can work around this issue; I don't want to revert to `os_build/arch_build` but I might not have a choice here...
@theodelrieu
Why do you add doctest as a build requirement?
doctest is a header-only library. Shouldn't it be a requirement?
Hi, @theodelrieu
We've been talking about this issue (and related ones) and the path will be as follows:
* Conan will (1.34?) generate a file with all the information from `cpp_info` in plain CMake format. This file will contain the information as plain as possible. It will be hard to define the name of all the variables, but the file will contain information from all the packages in _host_ and _build_ context that are needed to build the package. (Note.- Maybe it is not only one file, but one per requirement, TBD).
* This file will be accessible only if using generator `CMakeDeps` (in fact, this generator will use that file/s to populate the targets). Generator `CMakeDeps` will be the recommended (only?) one in Conan 2.0 and we encourage everyone to try it and start using it to smooth the migration.
* User can `include` that file from their `CMakeLists.txt` and will have access to all these variables.
* Eventually, Conan will provide some abstraction layers on top of this file to provide targets for the components coming from the _build_ context (protoc, doxygen,...). And some convenient CMake functions (`find_package_build`) or overrides (new definition of `find_package`) that will help with these scenarios and make it possible some transparent integration of Conan with existing CMake files.
In my case I need [CapnProto](https://conan.io/center/capnp?tab=overview) to be compiled for the build system, generate sources, and then be available as `find_package` for building the host binaries.
Is there any workaround for cross compilation right now?
EDIT: maybe a two-step `conan install`, so instead of
`conan install .. --profile:build build_profile --profile:host host_profile`
we split it up and call
`conan install package_A package_B --profile build_profile`
--> copy away the files you need...
`conan install package_C package_D --profile host_profile`
--> ... and restore them again
but for that I would need to evaluate the `conanfile.py` manually I guess 😞
As seen below, `--profile:build` just completely skips the generation of any `*.cmake` files for packages listed as `build_requirements`. For `cmake_paths`, `conan_paths.cmake` does not contain any info about them either.
For this `conanfile.py`:
```python
generators = "cmake_find_package"
...

def build_requirements(self):
    self.build_requires("capnproto/0.8.0")

def requirements(self):
    self.requires("catch2/2.13.3")
```
`conan install .. --profile:build default`:
```
conanfile.py (LumPDK/None): Applying build-requirement: capnproto/0.8.0
conanfile.py (LumPDK/None): Generator cmake_find_package created FindCatch2.cmake
```
`conan install .. --profile:host default`:
```
conanfile.py (LumPDK/None): Applying build-requirement: capnproto/0.8.0
conanfile.py (LumPDK/None): Generator cmake_find_package created FindCapnProto.cmake
conanfile.py (LumPDK/None): Generator cmake_find_package created FindCatch2.cmake
```
`conan install .. --profile default`:
```
conanfile.py (LumPDK/None): Applying build-requirement: capnproto/0.8.0
conanfile.py (LumPDK/None): Generator cmake_find_package created FindCapnProto.cmake
conanfile.py (LumPDK/None): Generator cmake_find_package created FindCatch2.cmake
```
@blackliner The way to go is to add `capnproto` to requires and build-requires:
```python
generators = "cmake_find_package"
...

def build_requirements(self):
    self.build_requires("capnproto/0.8.0")

def requirements(self):
    self.requires("capnproto/0.8.0")
    self.requires("catch2/2.13.3")

def build(self):
    # All the environment provided by `capnproto` from the build context is available here.
    ...
```
and then use the two-profiles approach:
```
conan create conanfile.py --profile:host=host --profile:build=default
```
Conan will populate the environment before entering the `build()` method with the information provided in the `env_info` of the `capnproto` recipe.
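A hedged sketch of what that looks like from the consumer's `build()` method, under the two-profiles setup described above (the exact `capnp` command line is illustrative):

```python
from conans import ConanFile, CMake


class ConsumerConan(ConanFile):
    settings = "os", "compiler", "build_type", "arch"
    generators = "cmake", "cmake_find_package"
    requires = "capnproto/0.8.0"
    build_requires = "capnproto/0.8.0"

    def build(self):
        # The capnp compiler from the build context is already on PATH here,
        # because Conan applies the env_info of build requirements before build()
        self.run("capnp compile -oc++ schema.capnp")  # illustrative invocation
        cmake = CMake(self)
        cmake.configure()
        cmake.build()
```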
Unfortunately, we do not use Conan to build our own package; we just use it to consume dependencies. I went with this workaround for now:
```python
import logging
import subprocess

# BASE_DIR, CALLER_DIR, BUILD_DIR and export_recipes() are defined elsewhere in the script.


def conan(args):
    if not (BASE_DIR / "conanfile.py").is_file():
        logging.info("Skipping conan step")
        return
    export_recipes(args)
    conan_command = [
        "conan",
        "install",
        "..",
        "--settings",
        "build_type=" + args.build_type,
        "--build=missing",
        "--build=acado",
    ]
    if args.conan_profile:
        conan_command.append(f"--profile={CALLER_DIR / args.conan_profile}")
    else:
        if args.conan_build_profile:
            # First install with the build profile only, then back up its cmake files
            conan_command_build_profile = conan_command.copy()
            conan_command_build_profile.append(f"--profile={CALLER_DIR / args.conan_build_profile}")
            subprocess.check_call(conan_command_build_profile, cwd=BUILD_DIR)
            for cmake_file in BUILD_DIR.glob("*.cmake"):
                cmake_file.rename(cmake_file.with_suffix(".build_bak"))
            conan_command.append(f"--profile:build={CALLER_DIR / args.conan_build_profile}")
        if args.conan_host_profile:
            conan_command.append(f"--profile:host={CALLER_DIR / args.conan_host_profile}")
    logging.info("Conan command:")
    logging.info(" ".join(conan_command))
    subprocess.check_call(conan_command, cwd=BUILD_DIR)
    # Restore the backed-up build-context cmake files if they were not regenerated
    for cmake_bak_file in BUILD_DIR.glob("*.build_bak"):
        new_name = cmake_bak_file.with_suffix(".cmake")
        if not new_name.is_file():
            cmake_bak_file.rename(new_name)
```
So for now I save all the `FindXXX.cmake` from the build context and restore them later 🤷
resolved_issues:
[
{
"body": "I'm experimenting with the build and host context profiles and many of my builds are breaking due to the changes in how `build_requires` are handled. So I'm just trying to understand if this is intentional and whether I need to update my recipes.\r\n\r\nFor example, I have a tool which is used to generate source code from .xml files in the consumer's repository. The sources are then compiled into a static library and packaged. Since consumers only need this tool when building from sources, they have it defined as a `build_requires`. (Though since the code generator version can have an affect on the code, I'm wondering if it should actually be a normal `requires`? )\r\n\r\nTo simplify it's usage for consumers, the tool package provides functions to consumers through a `build_module`. But now it seems that `build_module`s of `build_requires` dependencies are not included by _conan_basic_setup()_.\r\n\r\nMy current workaround is to `force_host_context=true` for these dependencies, but I'm not sure if this is the correct way to go about it.",
"number": 7719,
"title": "[question] 'build_modules' of `build_requires` are not included when building consumers with build/host profiles"
},
{
"body": "### Environment Details (include every applicable attribute)\r\n * Operating System+version: macOS Big Sur\r\n * Conan version: develop (1.33.0-dev)\r\n * Python version: 3.8.5\r\n\r\n### Steps to reproduce (Include if Applicable)\r\n\r\nWith this `conanfile.py`:\r\n\r\n```python\r\nfrom conans import ConanFile, CMake, tools\r\n\r\n\r\nclass BugConan(ConanFile):\r\n name = \"bug\"\r\n version = \"0.1\"\r\n license = \"<Put the package license here>\"\r\n author = \"<Put your name here> <And your email here>\"\r\n url = \"<Package recipe repository url here, for issues about the package>\"\r\n description = \"<Description of Bug here>\"\r\n topics = (\"<Put some tag here>\", \"<here>\", \"<and here>\")\r\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\r\n options = {\"shared\": [True, False], \"fPIC\": [True, False]}\r\n default_options = {\"shared\": False, \"fPIC\": True}\r\n generators = \"cmake\"\r\n\r\n def config_options(self):\r\n if self.settings.os == \"Windows\":\r\n del self.options.fPIC\r\n\r\n def build_requirements(self):\r\n self.build_requires(\"doctest/2.3.8\")\r\n\r\n def source(self):\r\n pass\r\n\r\n def build(self):\r\n pass\r\n\r\n def package(self):\r\n pass\r\n\r\n def package_info(self):\r\n pass\r\n```\r\n\r\nRun `conan install . --profile:host=default --profile:build=default`\r\n\r\nThe generated `conanbuildinfo.cmake` does not contain `doctest` targets, but it does when `conan install . --profile=default` is used.",
"number": 8303,
"title": "[bug] cmake generator does not write build requirements info when using dual profiles"
}
]
base_commit: 8de0c3315fa9a16da66a143ba68b3936b88a435a
commit_to_review:
{
"head_commit": "145eaf32fad38d465ff34d38a1d8190970133d4d",
"head_commit_message": "fix win",
"patch_to_review": "diff --git a/conan/tools/cmake/cmakedeps/cmakedeps.py b/conan/tools/cmake/cmakedeps/cmakedeps.py\nindex a1619a68a4d..1deb5553237 100644\n--- a/conan/tools/cmake/cmakedeps/cmakedeps.py\n+++ b/conan/tools/cmake/cmakedeps/cmakedeps.py\n@@ -6,6 +6,7 @@\n from conan.tools.cmake.cmakedeps.templates.target_configuration import TargetConfigurationTemplate\n from conan.tools.cmake.cmakedeps.templates.target_data import ConfigDataTemplate\n from conan.tools.cmake.cmakedeps.templates.targets import TargetsTemplate\n+from conans.errors import ConanException\n from conans.util.files import save\n \n \n@@ -16,6 +17,11 @@ def __init__(self, conanfile):\n self.arch = self._conanfile.settings.get_safe(\"arch\")\n self.configuration = str(self._conanfile.settings.build_type)\n self.configurations = [v for v in conanfile.settings.build_type.values_range if v != \"None\"]\n+ # By default, the build modules are generated for host context only\n+ self.build_context_build_modules = []\n+ # If specified, the files/targets/variables for the build context will be renamed appeding\n+ # a suffix. It is necessary in case of same require and build_require and will cause an error\n+ self.build_context_suffix = {}\n \n def generate(self):\n # Current directory is the generators_folder\n@@ -28,10 +34,20 @@ def content(self):\n macros = MacrosTemplate()\n ret = {macros.filename: macros.render()}\n \n- host_requires = {r.ref.name: r for r in\n- self._conanfile.dependencies.transitive_host_requires}\n+ host_req = self._conanfile.dependencies.transitive_host_requires\n+ build_req = self._conanfile.dependencies.build_requires_build_context\n+\n+ # Check if the same package is at host and build and the same time\n+ common = {r.ref.name for r in host_req}.intersection({r.ref.name for r in build_req})\n+ for name in common:\n+ if name not in self.build_context_suffix:\n+ raise ConanException(\"The package '{}' exists both as 'require' and as \"\n+ \"'build require'. 
You need to specify a suffix using the \"\n+ \"'build_context_suffix' attribute at the CMakeDeps \"\n+ \"generator.\".format(name))\n+\n # Iterate all the transitive requires\n- for req in host_requires.values():\n+ for req in host_req + build_req:\n \n config_version = ConfigVersionTemplate(self, req)\n ret[config_version.filename] = config_version.render()\ndiff --git a/conan/tools/cmake/cmakedeps/templates/__init__.py b/conan/tools/cmake/cmakedeps/templates/__init__.py\nindex e6de41bffaf..bc8c0fb5a97 100644\n--- a/conan/tools/cmake/cmakedeps/templates/__init__.py\n+++ b/conan/tools/cmake/cmakedeps/templates/__init__.py\n@@ -8,13 +8,37 @@ class CMakeDepsFileTemplate(object):\n \n def __init__(self, cmakedeps, req):\n self.cmakedeps = cmakedeps\n- if req is not None:\n- self.conanfile = req\n- self.pkg_name = req.ref.name\n- self.package_folder = req.package_folder.\\\n- replace('\\\\', '/').replace('$', '\\\\$').replace('\"', '\\\\\"')\n- self.target_namespace = get_target_namespace(self.conanfile)\n- self.file_name = get_file_name(self.conanfile)\n+ self.conanfile = req\n+\n+ @property\n+ def pkg_name(self):\n+ return self.conanfile.ref.name + self.suffix\n+\n+ @property\n+ def package_folder(self):\n+ return self.conanfile.package_folder.\\\n+ replace('\\\\', '/').replace('$', '\\\\$').replace('\"', '\\\\\"')\n+\n+ @property\n+ def target_namespace(self):\n+ return get_target_namespace(self.conanfile) + self.suffix\n+\n+ @property\n+ def file_name(self):\n+ return get_file_name(self.conanfile) + self.suffix\n+\n+ @property\n+ def suffix(self):\n+ if not self.conanfile.is_build_context:\n+ return \"\"\n+ return self.cmakedeps.build_context_suffix.get(self.conanfile.ref.name, \"\")\n+\n+ @property\n+ def build_modules_activated(self):\n+ if self.conanfile.is_build_context:\n+ return self.conanfile.ref.name in self.cmakedeps.build_context_build_modules\n+ else:\n+ return self.conanfile.ref.name not in self.cmakedeps.build_context_build_modules\n \n def render(self):\n context = self.context\ndiff --git a/conan/tools/cmake/cmakedeps/templates/target_configuration.py b/conan/tools/cmake/cmakedeps/templates/target_configuration.py\nindex 9fb54afa966..90ab8f4d203 100644\n--- a/conan/tools/cmake/cmakedeps/templates/target_configuration.py\n+++ b/conan/tools/cmake/cmakedeps/templates/target_configuration.py\n@@ -14,7 +14,8 @@ class TargetConfigurationTemplate(CMakeDepsFileTemplate):\n \n @property\n def filename(self):\n- return \"{}Target-{}.cmake\".format(self.file_name, self.cmakedeps.configuration.lower())\n+ return \"{}Target-{}.cmake\".format(self.file_name,\n+ self.cmakedeps.configuration.lower())\n \n @property\n def context(self):\ndiff --git a/conan/tools/cmake/cmakedeps/templates/target_data.py b/conan/tools/cmake/cmakedeps/templates/target_data.py\nindex 1b28e6e61ac..fd309a19aa7 100644\n--- a/conan/tools/cmake/cmakedeps/templates/target_data.py\n+++ b/conan/tools/cmake/cmakedeps/templates/target_data.py\n@@ -24,6 +24,9 @@ def filename(self):\n @property\n def context(self):\n global_cpp = self.get_global_cpp_cmake()\n+ if not self.build_modules_activated:\n+ global_cpp.build_modules_paths = \"\"\n+\n components_cpp = self.get_required_components_cpp()\n components_renames = \" \".join([component_rename for component_rename, _ in\n reversed(components_cpp)])\ndiff --git a/conans/client/graph/conanfile_dependencies.py b/conans/client/graph/conanfile_dependencies.py\nindex e1a3f952b51..c9379acd2bd 100644\n--- a/conans/client/graph/conanfile_dependencies.py\n+++ 
b/conans/client/graph/conanfile_dependencies.py\n@@ -1,4 +1,4 @@\n-from conans.client.graph.graph import CONTEXT_HOST\n+from conans.client.graph.graph import CONTEXT_HOST, CONTEXT_BUILD\n from conans.errors import ConanException\n from conans.model.conanfile_interface import ConanFileInterface\n \n@@ -26,6 +26,9 @@ def __getitem__(self, name):\n raise ConanException(\"No dependency found\")\n return result[0]\n \n+ def __add__(self, other):\n+ return DependencyOrderedSet(self._deps + other._deps)\n+\n \n class ConanFileDependencies:\n \n@@ -40,6 +43,18 @@ def build_requires(self):\n return DependencyOrderedSet([ConanFileInterface(edge.dst.conanfile)\n for edge in self._node.dependencies if edge.build_require])\n \n+ @property\n+ def build_requires_build_context(self):\n+ \"\"\"\n+ :return: list of immediate direct build_requires, on build context.\n+ FIXME: Why this method? To overcome the legacy use case without 2 profiles where everthing\n+ is host, otherwise we can receive the same build require twice, one in\n+ .transitive_host_requires and one in .build_requires\n+ \"\"\"\n+ return DependencyOrderedSet([ConanFileInterface(edge.dst.conanfile)\n+ for edge in self._node.dependencies if edge.build_require and\n+ edge.dst.context == CONTEXT_BUILD])\n+\n @property\n def requires(self):\n \"\"\"\ndiff --git a/conans/model/conanfile_interface.py b/conans/model/conanfile_interface.py\nindex 7521ed4f01c..94f157f67a5 100644\n--- a/conans/model/conanfile_interface.py\n+++ b/conans/model/conanfile_interface.py\n@@ -1,3 +1,4 @@\n+from conans.client.graph.graph import CONTEXT_BUILD\n \n \n class ConanFileInterface:\n@@ -60,3 +61,7 @@ def context(self):\n @property\n def dependencies(self):\n return self._conanfile.dependencies\n+\n+ @property\n+ def is_build_context(self):\n+ return self._conanfile.context == CONTEXT_BUILD\ndiff --git a/conans/test/functional/toolchains/cmake/cmakedeps/test_build_context_protobuf.py b/conans/test/functional/toolchains/cmake/cmakedeps/test_build_context_protobuf.py\nnew file mode 100644\nindex 00000000000..aafbf0feb2a\n--- /dev/null\n+++ b/conans/test/functional/toolchains/cmake/cmakedeps/test_build_context_protobuf.py\n@@ -0,0 +1,233 @@\n+import textwrap\n+\n+import pytest\n+\n+from conans.test.utils.tools import TestClient\n+\n+\[email protected]\n+def client():\n+ c = TestClient()\n+ conanfile = textwrap.dedent('''\n+ from conans import ConanFile\n+ from conans.tools import save, chdir\n+ import os\n+\n+ class Protobuf(ConanFile):\n+ settings = \"build_type\", \"os\", \"arch\", \"compiler\"\n+\n+ def package(self):\n+ my_cmake_module = \"\"\"\n+ function(foo_generate)\n+ write_file(foo_generated.h \"int from_context = %s;\")\n+ endfunction()\n+ \"\"\"\n+\n+ with chdir(self.package_folder):\n+ save(\"include_build/protobuff.h\", \"int protubuff_stuff(){ return 1; }\")\n+ save(\"include_host/protobuff.h\", \"int protubuff_stuff(){ return 2; }\")\n+ save(\"build/my_tools_build.cmake\", my_cmake_module % \"1\")\n+ save(\"build/my_tools_host.cmake\", my_cmake_module % \"2\")\n+\n+ def package_info(self):\n+ # This info depends on self.context !!\n+ self.cpp_info.includedirs = [\"include_{}\".format(self.context)]\n+ path_build_modules = os.path.join(\"build\", \"my_tools_{}.cmake\".format(self.context))\n+ self.cpp_info.set_property(\"cmake_build_modules\", [path_build_modules])\n+\n+ ''')\n+ c.save({\"conanfile.py\": conanfile})\n+ c.run(\"create . 
protobuff/1.0@\")\n+ return c\n+\n+\n+main = textwrap.dedent(\"\"\"\n+ #include <iostream>\n+ #include \"protobuff.h\"\n+ #include \"foo_generated.h\"\n+\n+\n+ int main(){\n+ int ret = protubuff_stuff();\n+\n+ if(ret == 1){\n+ std::cout << \" Library from build context!\" << std::endl;\n+ }\n+ else if(ret == 2){\n+ std::cout << \" Library from host context!\" << std::endl;\n+ }\n+\n+ // Variable declared at the foo_generated\n+ if(from_context == 1){\n+ std::cout << \" Generated code in build context!\" << std::endl;\n+ }\n+ else if(from_context == 2){\n+ std::cout << \" Generated code in host context!\" << std::endl;\n+ }\n+ return 0;\n+ }\n+ \"\"\")\n+\n+consumer_conanfile = textwrap.dedent(\"\"\"\n+ import os\n+ from conans import ConanFile\n+ from conan.tools.cmake import CMake, CMakeToolchain, CMakeDeps\n+\n+ class Consumer(ConanFile):\n+ settings = \"build_type\", \"os\", \"arch\", \"compiler\"\n+ exports_sources = \"CMakeLists.txt\", \"main.cpp\"\n+ requires = \"protobuff/1.0\"\n+ build_requires = \"protobuff/1.0\"\n+\n+ def generate(self):\n+ toolchain = CMakeToolchain(self)\n+ toolchain.generate()\n+\n+ deps = CMakeDeps(self)\n+ {}\n+ deps.generate()\n+\n+ def build(self):\n+ cmake = CMake(self)\n+ cmake.configure()\n+ cmake.build()\n+ folder = str(self.settings.build_type) if self.settings.os == \"Windows\" else \".\"\n+ self.run(os.sep.join([folder, \"app\"]))\n+ \"\"\")\n+\n+\n+def test_build_modules_from_build_context(client):\n+ consumer_cmake = textwrap.dedent(\"\"\"\n+ set(CMAKE_CXX_COMPILER_WORKS 1)\n+ set(CMAKE_CXX_ABI_COMPILED 1)\n+ cmake_minimum_required(VERSION 3.15)\n+ project(MyApp CXX)\n+\n+ find_package(protobuff)\n+ find_package(protobuff_KK)\n+ add_executable(app main.cpp)\n+ foo_generate()\n+ target_link_libraries(app protobuff::protobuff)\n+ \"\"\")\n+\n+ cmake_deps_conf = \"\"\"\n+ deps.build_context_build_modules = [\"protobuff\"]\n+ deps.build_context_suffix = {\"protobuff\": \"_KK\"}\n+ \"\"\"\n+\n+ client.save({\"conanfile.py\": consumer_conanfile.format(cmake_deps_conf),\n+ \"CMakeLists.txt\": consumer_cmake.format(cmake_deps_conf),\n+ \"main.cpp\": main})\n+\n+ client.run(\"create . app/1.0@ -pr:b default -pr:h default\")\n+ assert \"Library from host context!\" in client.out\n+ assert \"Generated code in build context!\" in client.out\n+\n+\n+def test_build_modules_and_target_from_build_context(client):\n+ consumer_cmake = textwrap.dedent(\"\"\"\n+ set(CMAKE_CXX_COMPILER_WORKS 1)\n+ set(CMAKE_CXX_ABI_COMPILED 1)\n+ cmake_minimum_required(VERSION 3.15)\n+ project(MyApp CXX)\n+\n+ find_package(protobuff)\n+ find_package(protobuff_KK)\n+ add_executable(app main.cpp)\n+ foo_generate()\n+ target_link_libraries(app protobuff_KK::protobuff_KK)\n+ \"\"\")\n+\n+ cmake_deps_conf = \"\"\"\n+ deps.build_context_build_modules = [\"protobuff\"]\n+ deps.build_context_suffix = {\"protobuff\": \"_KK\"}\n+ \"\"\"\n+\n+ client.save({\"conanfile.py\": consumer_conanfile.format(cmake_deps_conf),\n+ \"CMakeLists.txt\": consumer_cmake.format(cmake_deps_conf),\n+ \"main.cpp\": main})\n+\n+ client.run(\"create . 
app/1.0@ -pr:b default -pr:h default\")\n+ assert \"Library from build context!\" in client.out\n+ assert \"Generated code in build context!\" in client.out\n+\n+\n+def test_build_modules_from_host_and_target_from_build_context(client):\n+ consumer_cmake = textwrap.dedent(\"\"\"\n+ set(CMAKE_CXX_COMPILER_WORKS 1)\n+ set(CMAKE_CXX_ABI_COMPILED 1)\n+ cmake_minimum_required(VERSION 3.15)\n+ project(MyApp CXX)\n+\n+ find_package(protobuff)\n+ find_package(protobuff_KK)\n+ add_executable(app main.cpp)\n+ foo_generate()\n+ target_link_libraries(app protobuff_KK::protobuff_KK)\n+ \"\"\")\n+\n+ cmake_deps_conf = \"\"\"\n+ deps.build_context_suffix = {\"protobuff\": \"_KK\"}\n+ \"\"\"\n+\n+ client.save({\"conanfile.py\": consumer_conanfile.format(cmake_deps_conf),\n+ \"CMakeLists.txt\": consumer_cmake.format(cmake_deps_conf),\n+ \"main.cpp\": main})\n+\n+ client.run(\"create . app/1.0@ -pr:b default -pr:h default\")\n+ assert \"Library from build context!\" in client.out\n+ assert \"Generated code in host context!\" in client.out\n+\n+\n+def test_build_modules_and_target_from_host_context(client):\n+ consumer_cmake = textwrap.dedent(\"\"\"\n+ set(CMAKE_CXX_COMPILER_WORKS 1)\n+ set(CMAKE_CXX_ABI_COMPILED 1)\n+ cmake_minimum_required(VERSION 3.15)\n+ project(MyApp CXX)\n+\n+ find_package(protobuff)\n+ find_package(protobuff_KK)\n+ add_executable(app main.cpp)\n+ foo_generate()\n+ target_link_libraries(app protobuff::protobuff)\n+ \"\"\")\n+\n+ cmake_deps_conf = \"\"\"\n+ deps.build_context_build_modules = []\n+ deps.build_context_suffix = {\"protobuff\": \"_KK\"}\n+ \"\"\"\n+\n+ client.save({\"conanfile.py\": consumer_conanfile.format(cmake_deps_conf),\n+ \"CMakeLists.txt\": consumer_cmake.format(cmake_deps_conf),\n+ \"main.cpp\": main})\n+\n+ client.run(\"create . app/1.0@ -pr:b default -pr:h default\")\n+ assert \"Library from host context!\" in client.out\n+ assert \"Generated code in host context!\" in client.out\n+\n+\n+def test_exception_when_not_prefix_specified(client):\n+ consumer_cmake = textwrap.dedent(\"\"\"\n+ set(CMAKE_CXX_COMPILER_WORKS 1)\n+ set(CMAKE_CXX_ABI_COMPILED 1)\n+ cmake_minimum_required(VERSION 3.15)\n+ project(MyApp CXX)\n+\n+ find_package(protobuff)\n+ add_executable(app main.cpp)\n+ foo_generate()\n+ target_link_libraries(app protobuff::protobuff)\n+ \"\"\")\n+\n+ cmake_deps_conf = \"\"\"\n+ \"\"\"\n+\n+ client.save({\"conanfile.py\": consumer_conanfile.format(cmake_deps_conf),\n+ \"CMakeLists.txt\": consumer_cmake.format(cmake_deps_conf),\n+ \"main.cpp\": main})\n+\n+ client.run(\"create . app/1.0@ -pr:b default -pr:h default\", assert_error=True)\n+ assert \"The package 'protobuff' exists both as 'require' and as 'build require'. 
\" \\\n+ \"You need to specify a suffix using the 'build_context_suffix' attribute at the \" \\\n+ \"CMakeDeps generator.\" in client.out\ndiff --git a/conans/test/unittests/tools/cmake/test_cmakedeps.py b/conans/test/unittests/tools/cmake/test_cmakedeps.py\nindex ad9249998f8..c897ff36f1a 100644\n--- a/conans/test/unittests/tools/cmake/test_cmakedeps.py\n+++ b/conans/test/unittests/tools/cmake/test_cmakedeps.py\n@@ -13,6 +13,8 @@\n @pytest.mark.parametrize(\"using_properties\", [True, False])\n def test_cpp_info_name_cmakedeps(using_properties):\n conanfile = ConanFile(Mock(), None)\n+ conanfile._conan_node = Mock()\n+ conanfile._conan_node.context = \"host\"\n conanfile.settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n conanfile.initialize(Settings({\"os\": [\"Windows\"],\n \"compiler\": [\"gcc\"],\n@@ -31,6 +33,8 @@ def test_cpp_info_name_cmakedeps(using_properties):\n \n conanfile_dep = ConanFile(Mock(), None)\n conanfile_dep.cpp_info = cpp_info\n+ conanfile_dep._conan_node = Mock()\n+ conanfile_dep._conan_node.context = \"host\"\n \n with mock.patch('conans.ConanFile.ref', new_callable=mock.PropertyMock) as mock_ref:\n with mock.patch('conans.ConanFile.dependencies', new_callable=mock.PropertyMock) as mock_deps:\n@@ -40,6 +44,7 @@ def test_cpp_info_name_cmakedeps(using_properties):\n conanfile_dep.package_folder = \"/path/to/folder_dep\"\n conanfile.dependencies.transitive_host_requires = [ConanFileInterface(conanfile_dep)]\n conanfile.dependencies.host_requires = [ConanFileInterface(conanfile_dep)]\n+ conanfile.dependencies.build_requires_build_context = []\n \n cmakedeps = CMakeDeps(conanfile)\n files = cmakedeps.content\n@@ -52,6 +57,8 @@ def test_cpp_info_name_cmakedeps(using_properties):\n @pytest.mark.parametrize(\"using_properties\", [True, False])\n def test_cpp_info_name_cmakedeps_components(using_properties):\n conanfile = ConanFile(Mock(), None)\n+ conanfile._conan_node = Mock()\n+ conanfile._conan_node.context = \"host\"\n conanfile.settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n conanfile.initialize(Settings({\"os\": [\"Windows\"],\n \"compiler\": [\"gcc\"],\n@@ -72,6 +79,8 @@ def test_cpp_info_name_cmakedeps_components(using_properties):\n \n conanfile_dep = ConanFile(Mock(), None)\n conanfile_dep.cpp_info = cpp_info\n+ conanfile_dep._conan_node = Mock()\n+ conanfile_dep._conan_node.context = \"host\"\n \n with mock.patch('conans.ConanFile.ref', new_callable=mock.PropertyMock) as mock_ref:\n with mock.patch('conans.ConanFile.dependencies', new_callable=mock.PropertyMock) as mock_deps:\n@@ -81,6 +90,7 @@ def test_cpp_info_name_cmakedeps_components(using_properties):\n conanfile_dep.package_folder = \"/path/to/folder_dep\"\n conanfile.dependencies.transitive_host_requires = [ConanFileInterface(conanfile_dep)]\n conanfile.dependencies.host_requires = [ConanFileInterface(conanfile_dep)]\n+ conanfile.dependencies.build_requires_build_context = []\n \n cmakedeps = CMakeDeps(conanfile)\n files = cmakedeps.content\n@@ -96,6 +106,8 @@ def test_cpp_info_name_cmakedeps_components(using_properties):\n def test_cmake_deps_links_flags():\n # https://github.com/conan-io/conan/issues/8703\n conanfile = ConanFile(Mock(), None)\n+ conanfile._conan_node = Mock()\n+ conanfile._conan_node.context = \"host\"\n conanfile.settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n conanfile.initialize(Settings({\"os\": [\"Windows\"],\n \"compiler\": [\"gcc\"],\n@@ -110,6 +122,8 @@ def test_cmake_deps_links_flags():\n cpp_info.exelinkflags = [\"-OPT:NOICF\"]\n conanfile_dep 
= ConanFile(Mock(), None)\n conanfile_dep.cpp_info = cpp_info\n+ conanfile_dep._conan_node = Mock()\n+ conanfile_dep._conan_node.context = \"host\"\n \n with mock.patch('conans.ConanFile.ref', new_callable=mock.PropertyMock) as mock_ref:\n with mock.patch('conans.ConanFile.dependencies',\n@@ -122,6 +136,7 @@ def test_cmake_deps_links_flags():\n conanfile_dep.package_folder = \"/path/to/folder_dep\"\n conanfile.dependencies.transitive_host_requires = [ConanFileInterface(conanfile_dep)]\n conanfile.dependencies.host_requires = [ConanFileInterface(conanfile_dep)]\n+ conanfile.dependencies.build_requires_build_context = []\n \n cmakedeps = CMakeDeps(conanfile)\n files = cmakedeps.content\n"
}
reference_review_comments:
[
{
"diff_hunk": "@@ -8,13 +8,37 @@ class CMakeDepsFileTemplate(object):\n \n def __init__(self, cmakedeps, req):\n self.cmakedeps = cmakedeps\n- if req is not None:\n- self.conanfile = req\n- self.pkg_name = req.ref.name\n- self.package_folder = req.package_folder.\\\n- replace('\\\\', '/').replace('$', '\\\\$').replace('\"', '\\\\\"')\n- self.target_namespace = get_target_namespace(self.conanfile)\n- self.file_name = get_file_name(self.conanfile)\n+ self.conanfile = req\n+\n+ @property\n+ def pkg_name(self):\n+ return self.conanfile.ref.name + self.suffix\n+\n+ @property\n+ def package_folder(self):\n+ return self.conanfile.package_folder.\\",
"line": null,
"original_line": 19,
"original_start_line": null,
"path": "conan/tools/cmake/cmakedeps/templates/__init__.py",
"start_line": null,
"text": "@user1:\nThis name is a bit confusing, it will return a escaped package_folder"
}
]
merged_commit: 24212f4ab1bc03aa04ed2a890318c51714371914
merged_patch:
diff --git a/conan/tools/cmake/cmakedeps/cmakedeps.py b/conan/tools/cmake/cmakedeps/cmakedeps.py
index a1619a68a4d..1deb5553237 100644
--- a/conan/tools/cmake/cmakedeps/cmakedeps.py
+++ b/conan/tools/cmake/cmakedeps/cmakedeps.py
@@ -6,6 +6,7 @@
from conan.tools.cmake.cmakedeps.templates.target_configuration import TargetConfigurationTemplate
from conan.tools.cmake.cmakedeps.templates.target_data import ConfigDataTemplate
from conan.tools.cmake.cmakedeps.templates.targets import TargetsTemplate
+from conans.errors import ConanException
from conans.util.files import save
@@ -16,6 +17,11 @@ def __init__(self, conanfile):
self.arch = self._conanfile.settings.get_safe("arch")
self.configuration = str(self._conanfile.settings.build_type)
self.configurations = [v for v in conanfile.settings.build_type.values_range if v != "None"]
+ # By default, the build modules are generated for host context only
+ self.build_context_build_modules = []
+ # If specified, the files/targets/variables for the build context will be renamed appeding
+ # a suffix. It is necessary in case of same require and build_require and will cause an error
+ self.build_context_suffix = {}
def generate(self):
# Current directory is the generators_folder
@@ -28,10 +34,20 @@ def content(self):
macros = MacrosTemplate()
ret = {macros.filename: macros.render()}
- host_requires = {r.ref.name: r for r in
- self._conanfile.dependencies.transitive_host_requires}
+ host_req = self._conanfile.dependencies.transitive_host_requires
+ build_req = self._conanfile.dependencies.build_requires_build_context
+
+ # Check if the same package is at host and build and the same time
+ common = {r.ref.name for r in host_req}.intersection({r.ref.name for r in build_req})
+ for name in common:
+ if name not in self.build_context_suffix:
+ raise ConanException("The package '{}' exists both as 'require' and as "
+ "'build require'. You need to specify a suffix using the "
+ "'build_context_suffix' attribute at the CMakeDeps "
+ "generator.".format(name))
+
# Iterate all the transitive requires
- for req in host_requires.values():
+ for req in host_req + build_req:
config_version = ConfigVersionTemplate(self, req)
ret[config_version.filename] = config_version.render()
diff --git a/conan/tools/cmake/cmakedeps/templates/__init__.py b/conan/tools/cmake/cmakedeps/templates/__init__.py
index e6de41bffaf..5944d0894ec 100644
--- a/conan/tools/cmake/cmakedeps/templates/__init__.py
+++ b/conan/tools/cmake/cmakedeps/templates/__init__.py
@@ -8,13 +8,32 @@ class CMakeDepsFileTemplate(object):
def __init__(self, cmakedeps, req):
self.cmakedeps = cmakedeps
- if req is not None:
- self.conanfile = req
- self.pkg_name = req.ref.name
- self.package_folder = req.package_folder.\
- replace('\\', '/').replace('$', '\\$').replace('"', '\\"')
- self.target_namespace = get_target_namespace(self.conanfile)
- self.file_name = get_file_name(self.conanfile)
+ self.conanfile = req
+
+ @property
+ def pkg_name(self):
+ return self.conanfile.ref.name + self.suffix
+
+ @property
+ def target_namespace(self):
+ return get_target_namespace(self.conanfile) + self.suffix
+
+ @property
+ def file_name(self):
+ return get_file_name(self.conanfile) + self.suffix
+
+ @property
+ def suffix(self):
+ if not self.conanfile.is_build_context:
+ return ""
+ return self.cmakedeps.build_context_suffix.get(self.conanfile.ref.name, "")
+
+ @property
+ def build_modules_activated(self):
+ if self.conanfile.is_build_context:
+ return self.conanfile.ref.name in self.cmakedeps.build_context_build_modules
+ else:
+ return self.conanfile.ref.name not in self.cmakedeps.build_context_build_modules
def render(self):
context = self.context
diff --git a/conan/tools/cmake/cmakedeps/templates/target_configuration.py b/conan/tools/cmake/cmakedeps/templates/target_configuration.py
index 9fb54afa966..90ab8f4d203 100644
--- a/conan/tools/cmake/cmakedeps/templates/target_configuration.py
+++ b/conan/tools/cmake/cmakedeps/templates/target_configuration.py
@@ -14,7 +14,8 @@ class TargetConfigurationTemplate(CMakeDepsFileTemplate):
@property
def filename(self):
- return "{}Target-{}.cmake".format(self.file_name, self.cmakedeps.configuration.lower())
+ return "{}Target-{}.cmake".format(self.file_name,
+ self.cmakedeps.configuration.lower())
@property
def context(self):
diff --git a/conan/tools/cmake/cmakedeps/templates/target_data.py b/conan/tools/cmake/cmakedeps/templates/target_data.py
index 1b28e6e61ac..d74551571f6 100644
--- a/conan/tools/cmake/cmakedeps/templates/target_data.py
+++ b/conan/tools/cmake/cmakedeps/templates/target_data.py
@@ -24,13 +24,18 @@ def filename(self):
@property
def context(self):
global_cpp = self.get_global_cpp_cmake()
+ if not self.build_modules_activated:
+ global_cpp.build_modules_paths = ""
+
components_cpp = self.get_required_components_cpp()
components_renames = " ".join([component_rename for component_rename, _ in
reversed(components_cpp)])
dependency_filenames = self.get_dependency_filenames()
+ package_folder = self.conanfile.package_folder.replace('\\', '/')\
+ .replace('$', '\\$').replace('"', '\\"')
return {"global_cpp": global_cpp,
"pkg_name": self.pkg_name,
- "package_folder": self.package_folder,
+ "package_folder": package_folder,
"config_suffix": self.config_suffix,
"components_renames": components_renames,
"components_cpp": components_cpp,
diff --git a/conans/client/graph/conanfile_dependencies.py b/conans/client/graph/conanfile_dependencies.py
index e1a3f952b51..c9379acd2bd 100644
--- a/conans/client/graph/conanfile_dependencies.py
+++ b/conans/client/graph/conanfile_dependencies.py
@@ -1,4 +1,4 @@
-from conans.client.graph.graph import CONTEXT_HOST
+from conans.client.graph.graph import CONTEXT_HOST, CONTEXT_BUILD
from conans.errors import ConanException
from conans.model.conanfile_interface import ConanFileInterface
@@ -26,6 +26,9 @@ def __getitem__(self, name):
raise ConanException("No dependency found")
return result[0]
+ def __add__(self, other):
+ return DependencyOrderedSet(self._deps + other._deps)
+
class ConanFileDependencies:
@@ -40,6 +43,18 @@ def build_requires(self):
return DependencyOrderedSet([ConanFileInterface(edge.dst.conanfile)
for edge in self._node.dependencies if edge.build_require])
+ @property
+ def build_requires_build_context(self):
+ """
+ :return: list of immediate direct build_requires, on build context.
+ FIXME: Why this method? To overcome the legacy use case without 2 profiles where everthing
+ is host, otherwise we can receive the same build require twice, one in
+ .transitive_host_requires and one in .build_requires
+ """
+ return DependencyOrderedSet([ConanFileInterface(edge.dst.conanfile)
+ for edge in self._node.dependencies if edge.build_require and
+ edge.dst.context == CONTEXT_BUILD])
+
@property
def requires(self):
"""
diff --git a/conans/model/conanfile_interface.py b/conans/model/conanfile_interface.py
index 7521ed4f01c..94f157f67a5 100644
--- a/conans/model/conanfile_interface.py
+++ b/conans/model/conanfile_interface.py
@@ -1,3 +1,4 @@
+from conans.client.graph.graph import CONTEXT_BUILD
class ConanFileInterface:
@@ -60,3 +61,7 @@ def context(self):
@property
def dependencies(self):
return self._conanfile.dependencies
+
+ @property
+ def is_build_context(self):
+ return self._conanfile.context == CONTEXT_BUILD
diff --git a/conans/test/functional/toolchains/cmake/cmakedeps/test_build_context_protobuf.py b/conans/test/functional/toolchains/cmake/cmakedeps/test_build_context_protobuf.py
new file mode 100644
index 00000000000..52cc5719800
--- /dev/null
+++ b/conans/test/functional/toolchains/cmake/cmakedeps/test_build_context_protobuf.py
@@ -0,0 +1,233 @@
+import textwrap
+
+import pytest
+
+from conans.test.utils.tools import TestClient
+
+
[email protected]
+def client():
+ c = TestClient()
+ conanfile = textwrap.dedent('''
+ from conans import ConanFile
+ from conans.tools import save, chdir
+ import os
+
+ class Protobuf(ConanFile):
+ settings = "build_type", "os", "arch", "compiler"
+
+ def package(self):
+ my_cmake_module = """
+ function(foo_generate)
+ write_file(foo_generated.h "int from_context = %s;")
+ endfunction()
+ """
+
+ with chdir(self.package_folder):
+ save("include_build/protobuff.h", "int protubuff_stuff(){ return 1; }")
+ save("include_host/protobuff.h", "int protubuff_stuff(){ return 2; }")
+ save("build/my_tools_build.cmake", my_cmake_module % "1")
+ save("build/my_tools_host.cmake", my_cmake_module % "2")
+
+ def package_info(self):
+ # This info depends on self.context !!
+ self.cpp_info.includedirs = ["include_{}".format(self.context)]
+ path_build_modules = os.path.join("build", "my_tools_{}.cmake".format(self.context))
+ self.cpp_info.set_property("cmake_build_modules", [path_build_modules])
+
+ ''')
+ c.save({"conanfile.py": conanfile})
+ c.run("create . protobuff/1.0@")
+ return c
+
+
+main = textwrap.dedent("""
+ #include <iostream>
+ #include "protobuff.h"
+ #include "foo_generated.h"
+
+
+ int main(){
+ int ret = protubuff_stuff();
+
+ if(ret == 1){
+ std::cout << " Library from build context!" << std::endl;
+ }
+ else if(ret == 2){
+ std::cout << " Library from host context!" << std::endl;
+ }
+
+ // Variable declared at the foo_generated
+ if(from_context == 1){
+ std::cout << " Generated code in build context!" << std::endl;
+ }
+ else if(from_context == 2){
+ std::cout << " Generated code in host context!" << std::endl;
+ }
+ return 0;
+ }
+ """)
+
+consumer_conanfile = textwrap.dedent("""
+ import os
+ from conans import ConanFile
+ from conan.tools.cmake import CMake, CMakeToolchain, CMakeDeps
+
+ class Consumer(ConanFile):
+ settings = "build_type", "os", "arch", "compiler"
+ exports_sources = "CMakeLists.txt", "main.cpp"
+ requires = "protobuff/1.0"
+ build_requires = "protobuff/1.0"
+
+ def generate(self):
+ toolchain = CMakeToolchain(self)
+ toolchain.generate()
+
+ deps = CMakeDeps(self)
+ {}
+ deps.generate()
+
+ def build(self):
+ cmake = CMake(self)
+ cmake.configure()
+ cmake.build()
+ folder = str(self.settings.build_type) if self.settings.os == "Windows" else "."
+ self.run(os.sep.join([folder, "app"]))
+ """)
+
+
+def test_build_modules_from_build_context(client):
+ consumer_cmake = textwrap.dedent("""
+ set(CMAKE_CXX_COMPILER_WORKS 1)
+ set(CMAKE_CXX_ABI_COMPILED 1)
+ cmake_minimum_required(VERSION 3.15)
+ project(MyApp CXX)
+
+ find_package(protobuff)
+ find_package(protobuff_BUILD)
+ add_executable(app main.cpp)
+ foo_generate()
+ target_link_libraries(app protobuff::protobuff)
+ """)
+
+ cmake_deps_conf = """
+ deps.build_context_build_modules = ["protobuff"]
+ deps.build_context_suffix = {"protobuff": "_BUILD"}
+ """
+
+ client.save({"conanfile.py": consumer_conanfile.format(cmake_deps_conf),
+ "CMakeLists.txt": consumer_cmake.format(cmake_deps_conf),
+ "main.cpp": main})
+
+ client.run("create . app/1.0@ -pr:b default -pr:h default")
+ assert "Library from host context!" in client.out
+ assert "Generated code in build context!" in client.out
+
+
+def test_build_modules_and_target_from_build_context(client):
+ consumer_cmake = textwrap.dedent("""
+ set(CMAKE_CXX_COMPILER_WORKS 1)
+ set(CMAKE_CXX_ABI_COMPILED 1)
+ cmake_minimum_required(VERSION 3.15)
+ project(MyApp CXX)
+
+ find_package(protobuff)
+ find_package(protobuff_BUILD)
+ add_executable(app main.cpp)
+ foo_generate()
+ target_link_libraries(app protobuff_BUILD::protobuff_BUILD)
+ """)
+
+ cmake_deps_conf = """
+ deps.build_context_build_modules = ["protobuff"]
+ deps.build_context_suffix = {"protobuff": "_BUILD"}
+ """
+
+ client.save({"conanfile.py": consumer_conanfile.format(cmake_deps_conf),
+ "CMakeLists.txt": consumer_cmake.format(cmake_deps_conf),
+ "main.cpp": main})
+
+ client.run("create . app/1.0@ -pr:b default -pr:h default")
+ assert "Library from build context!" in client.out
+ assert "Generated code in build context!" in client.out
+
+
+def test_build_modules_from_host_and_target_from_build_context(client):
+ consumer_cmake = textwrap.dedent("""
+ set(CMAKE_CXX_COMPILER_WORKS 1)
+ set(CMAKE_CXX_ABI_COMPILED 1)
+ cmake_minimum_required(VERSION 3.15)
+ project(MyApp CXX)
+
+ find_package(protobuff)
+ find_package(protobuff_BUILD)
+ add_executable(app main.cpp)
+ foo_generate()
+ target_link_libraries(app protobuff_BUILD::protobuff_BUILD)
+ """)
+
+ cmake_deps_conf = """
+ deps.build_context_suffix = {"protobuff": "_BUILD"}
+ """
+
+ client.save({"conanfile.py": consumer_conanfile.format(cmake_deps_conf),
+ "CMakeLists.txt": consumer_cmake.format(cmake_deps_conf),
+ "main.cpp": main})
+
+ client.run("create . app/1.0@ -pr:b default -pr:h default")
+ assert "Library from build context!" in client.out
+ assert "Generated code in host context!" in client.out
+
+
+def test_build_modules_and_target_from_host_context(client):
+ consumer_cmake = textwrap.dedent("""
+ set(CMAKE_CXX_COMPILER_WORKS 1)
+ set(CMAKE_CXX_ABI_COMPILED 1)
+ cmake_minimum_required(VERSION 3.15)
+ project(MyApp CXX)
+
+ find_package(protobuff)
+ find_package(protobuff_BUILD)
+ add_executable(app main.cpp)
+ foo_generate()
+ target_link_libraries(app protobuff::protobuff)
+ """)
+
+ cmake_deps_conf = """
+ deps.build_context_build_modules = []
+ deps.build_context_suffix = {"protobuff": "_BUILD"}
+ """
+
+ client.save({"conanfile.py": consumer_conanfile.format(cmake_deps_conf),
+ "CMakeLists.txt": consumer_cmake.format(cmake_deps_conf),
+ "main.cpp": main})
+
+ client.run("create . app/1.0@ -pr:b default -pr:h default")
+ assert "Library from host context!" in client.out
+ assert "Generated code in host context!" in client.out
+
+
+def test_exception_when_not_prefix_specified(client):
+ consumer_cmake = textwrap.dedent("""
+ set(CMAKE_CXX_COMPILER_WORKS 1)
+ set(CMAKE_CXX_ABI_COMPILED 1)
+ cmake_minimum_required(VERSION 3.15)
+ project(MyApp CXX)
+
+ find_package(protobuff)
+ add_executable(app main.cpp)
+ foo_generate()
+ target_link_libraries(app protobuff::protobuff)
+ """)
+
+ cmake_deps_conf = """
+ """
+
+ client.save({"conanfile.py": consumer_conanfile.format(cmake_deps_conf),
+ "CMakeLists.txt": consumer_cmake.format(cmake_deps_conf),
+ "main.cpp": main})
+
+ client.run("create . app/1.0@ -pr:b default -pr:h default", assert_error=True)
+ assert "The package 'protobuff' exists both as 'require' and as 'build require'. " \
+ "You need to specify a suffix using the 'build_context_suffix' attribute at the " \
+ "CMakeDeps generator." in client.out
diff --git a/conans/test/unittests/tools/cmake/test_cmakedeps.py b/conans/test/unittests/tools/cmake/test_cmakedeps.py
index ad9249998f8..c897ff36f1a 100644
--- a/conans/test/unittests/tools/cmake/test_cmakedeps.py
+++ b/conans/test/unittests/tools/cmake/test_cmakedeps.py
@@ -13,6 +13,8 @@
@pytest.mark.parametrize("using_properties", [True, False])
def test_cpp_info_name_cmakedeps(using_properties):
conanfile = ConanFile(Mock(), None)
+ conanfile._conan_node = Mock()
+ conanfile._conan_node.context = "host"
conanfile.settings = "os", "compiler", "build_type", "arch"
conanfile.initialize(Settings({"os": ["Windows"],
"compiler": ["gcc"],
@@ -31,6 +33,8 @@ def test_cpp_info_name_cmakedeps(using_properties):
conanfile_dep = ConanFile(Mock(), None)
conanfile_dep.cpp_info = cpp_info
+ conanfile_dep._conan_node = Mock()
+ conanfile_dep._conan_node.context = "host"
with mock.patch('conans.ConanFile.ref', new_callable=mock.PropertyMock) as mock_ref:
with mock.patch('conans.ConanFile.dependencies', new_callable=mock.PropertyMock) as mock_deps:
@@ -40,6 +44,7 @@ def test_cpp_info_name_cmakedeps(using_properties):
conanfile_dep.package_folder = "/path/to/folder_dep"
conanfile.dependencies.transitive_host_requires = [ConanFileInterface(conanfile_dep)]
conanfile.dependencies.host_requires = [ConanFileInterface(conanfile_dep)]
+ conanfile.dependencies.build_requires_build_context = []
cmakedeps = CMakeDeps(conanfile)
files = cmakedeps.content
@@ -52,6 +57,8 @@ def test_cpp_info_name_cmakedeps(using_properties):
@pytest.mark.parametrize("using_properties", [True, False])
def test_cpp_info_name_cmakedeps_components(using_properties):
conanfile = ConanFile(Mock(), None)
+ conanfile._conan_node = Mock()
+ conanfile._conan_node.context = "host"
conanfile.settings = "os", "compiler", "build_type", "arch"
conanfile.initialize(Settings({"os": ["Windows"],
"compiler": ["gcc"],
@@ -72,6 +79,8 @@ def test_cpp_info_name_cmakedeps_components(using_properties):
conanfile_dep = ConanFile(Mock(), None)
conanfile_dep.cpp_info = cpp_info
+ conanfile_dep._conan_node = Mock()
+ conanfile_dep._conan_node.context = "host"
with mock.patch('conans.ConanFile.ref', new_callable=mock.PropertyMock) as mock_ref:
with mock.patch('conans.ConanFile.dependencies', new_callable=mock.PropertyMock) as mock_deps:
@@ -81,6 +90,7 @@ def test_cpp_info_name_cmakedeps_components(using_properties):
conanfile_dep.package_folder = "/path/to/folder_dep"
conanfile.dependencies.transitive_host_requires = [ConanFileInterface(conanfile_dep)]
conanfile.dependencies.host_requires = [ConanFileInterface(conanfile_dep)]
+ conanfile.dependencies.build_requires_build_context = []
cmakedeps = CMakeDeps(conanfile)
files = cmakedeps.content
@@ -96,6 +106,8 @@ def test_cpp_info_name_cmakedeps_components(using_properties):
def test_cmake_deps_links_flags():
# https://github.com/conan-io/conan/issues/8703
conanfile = ConanFile(Mock(), None)
+ conanfile._conan_node = Mock()
+ conanfile._conan_node.context = "host"
conanfile.settings = "os", "compiler", "build_type", "arch"
conanfile.initialize(Settings({"os": ["Windows"],
"compiler": ["gcc"],
@@ -110,6 +122,8 @@ def test_cmake_deps_links_flags():
cpp_info.exelinkflags = ["-OPT:NOICF"]
conanfile_dep = ConanFile(Mock(), None)
conanfile_dep.cpp_info = cpp_info
+ conanfile_dep._conan_node = Mock()
+ conanfile_dep._conan_node.context = "host"
with mock.patch('conans.ConanFile.ref', new_callable=mock.PropertyMock) as mock_ref:
with mock.patch('conans.ConanFile.dependencies',
@@ -122,6 +136,7 @@ def test_cmake_deps_links_flags():
conanfile_dep.package_folder = "/path/to/folder_dep"
conanfile.dependencies.transitive_host_requires = [ConanFileInterface(conanfile_dep)]
conanfile.dependencies.host_requires = [ConanFileInterface(conanfile_dep)]
+ conanfile.dependencies.build_requires_build_context = []
cmakedeps = CMakeDeps(conanfile)
files = cmakedeps.content
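For reference, the new CMakeDeps knobs exercised by the tests above are plain attributes set on the generator before calling generate(); a minimal consumer sketch (the "protobuff" package name and the "_BUILD" suffix are simply the values used in the functional test):

```
from conans import ConanFile
from conan.tools.cmake import CMakeDeps


class Consumer(ConanFile):
    settings = "os", "arch", "compiler", "build_type"
    requires = "protobuff/1.0"
    build_requires = "protobuff/1.0"

    def generate(self):
        deps = CMakeDeps(self)
        # Config files of the build-context copy get the "_BUILD" suffix, so
        # find_package(protobuff) and find_package(protobuff_BUILD) can coexist.
        deps.build_context_suffix = {"protobuff": "_BUILD"}
        # Take the cmake_build_modules declared by the build-context copy.
        deps.build_context_build_modules = ["protobuff"]
        deps.generate()
```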
|
{
"difficulty": "high",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-8927@0e045ac
|
conan-io/conan
|
Python
| 8,927
|
[fix] Respect order of declared directories when using components
|
Changelog: BugFix: Respect order of declared directories when using components.
Docs: omit
- [x] Refer to the issue that supports this Pull Request: close #8904
- [x] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.
- [x] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've followed the PEP8 style guides for Python code.
- [x] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.
<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>
|
2021-05-06T16:09:56Z
|
[question] How to specify 'includedirs'?
This refers to Windows and Visual Studio.
I try to change the default includedirs of a component on Windows, in the package_info() method.
If I say
`self.cpp_info.components["BASE"].includedirs = ['include/Release', 'include']`
I get CONAN_INCLUDE_DIRS: ['include', 'include/Release'] in conanbuildinfo.cmake - reverse order!
If I say
`self.cpp_info.components["BASE"].includedirs = ['include', 'include/Release']`
I get CONAN_INCLUDE_DIRS: ['include', 'include/Release'] - correct, but not what I want.
What I really want is
`self.cpp_info.components["BASE"].includedirs = ['include/$(Configuration)', 'include']`
but in this case I get CONAN_INCLUDE_DIRS: ['include'] - which is totally wrong.
|
Hi @gouriano,
What generator are you using to check the result of CONAN_INCLUDE_DIRS?
It is CMake
I am looking at the contents of conanbuildinfo.cmake.
I am not sure what might be happening here, but I guess it is something related to repeated directories across the dependency graph. The CONAN_INCLUDE_DIRS variable aggregates the include directories of all the dependencies in the graph you are installing. As "include" is the default value, it might be first in the list because it is coming from an upstream dependency.
Another option could be that the generator is picking the value from ``cpp_info.includedirs`` and ``cpp_info.components["BASE"].includedirs``, but I think doing this is not allowed:
```
self.cpp_info.includedirs = []
self.cpp_info.components["BASE"].includedirs = ['include/Release', 'include']
```
Could you give it a go and let me know? thank you
What you suggest cannot work, because:
ConanException: CxxTest/0.1.6 package_info(): self.cpp_info.components cannot be used with self.cpp_info global values at the same time
(Yes, I have verified it)
I have tried a few more variants:
When I specify, for example (note that all these directories exist in my package root)
`self.cpp_info.components["BASE"].includedirs = ['cmake', 'include/common', 'include']`
I get CONAN_INCLUDE_DIRS = ['include', 'cmake', 'include/common']
'include' always goes first.
When I specify a directory which does not exist in the package root:
`self.cpp_info.components["BASE"].includedirs = ['notfound', 'include']`
it is simply dropped.
That is not good, I believe. A nonexistent directory is my problem (I write the recipe), not yours.
I mean I still want to be able to specify
self.cpp_info.components["BASE"].includedirs = ['include/$(Configuration)', 'include']
Thanks
I have figured out how you can reproduce it.
Create a new Conan package:
`conan new hello/0.1`
Change package_info into the following:

    def package_info(self):
        self.cpp_info.libs = ["hello"]
        self.cpp_info.includedirs = ['lib', 'include']

run "conan create ."
Then, in a separate directory, create conanfile.txt:

    [requires]
    hello/0.1
    [generators]
    cmake

run "conan install ."
In the generated conanbuildinfo.cmake, CONAN_INCLUDE_DIRS is in the correct order: ['lib', 'include']
Now change package_info:

    def package_info(self):
        self.cpp_info.components["BASE"].defines.append("_UNICODE")
        self.cpp_info.components["BASE"].includedirs = ['lib', 'include']
        self.cpp_info.components["hellolib"].libs = ["hello"]
        self.cpp_info.components["hellolib"].requires = ["BASE"]

again "conan create .", then "conan install .", then look into conanbuildinfo.cmake.
CONAN_INCLUDE_DIRS is now in the wrong order: ['include', 'lib']
So, is it a bug? Or am I doing something wrong?
I use Conan v1.35.2
Thanks for the quick response and the clear steps to reproduce the error.
I tried it and I think this is a bug in the way the directories are aggregated when components are used. I have created a tentative PR with the fix at #8927 to see if it does not break other use cases that might be relying on this behavior.
Thanks a lot for reporting again!
Maybe, while we are at it, it would be possible to allow "include/$(Configuration)"?
That would be useful.
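A minimal sketch of the ordering being discussed, mirroring the unit test added with the fix below (CppInfo, DepCppInfo and filter_empty are Conan-internal details, so treat this as illustration only, not a public API):

```
import os
from conans.model.build_info import CppInfo, DepCppInfo

info = CppInfo("my_lib", "root")
info.components["component"].includedirs = ["include1", "include2"]
info.components["component"].filter_empty = False  # keep dirs even if they do not exist on disk

dep_info = DepCppInfo(info)
# Expected after the fix: declaration order is preserved
assert list(dep_info.include_paths) == [os.path.join("root", "include1"),
                                        os.path.join("root", "include2")]
```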
|
[
{
"body": "This refers to Windows and Visual Studio. \r\nI try to change default includedirs of a components on Windows, in package_info() method\r\n\r\nI say\r\nself.cpp_info.components[\"BASE\"].includedirs = ['include/Release', 'include']\r\nand get in CONAN_INCLUDE_DIRS: ['include', 'include/Release'] - reverse order!\r\nin conanbuildinfo.cmake, I mean\r\n\r\nI say\r\nself.cpp_info.components[\"BASE\"].includedirs = ['include', 'include/Release']\r\nand get in CONAN_INCLUDE_DIRS: ['include', 'include/Release'] - correct, but not what I want\r\n\r\nWhat I really want is\r\nself.cpp_info.components[\"BASE\"].includedirs = ['include/$(Configuration)', 'include']\r\nbut in this case I get CONAN_INCLUDE_DIRS: ['include'] - which is totally wrong\r\n\r\n\r\n\r\n\r\n\r\n",
"number": 8904,
"title": "[question] How to specify 'includedirs'?"
}
] |
810a11aa84827bad61e8221e7016ded9614a4506
|
{
"head_commit": "0e045ac0beac29637ce795b436ac705500e4eb73",
"head_commit_message": "separate functions for aggregating lists and dicts",
"patch_to_review": "diff --git a/conans/model/build_info.py b/conans/model/build_info.py\nindex 514f44c8b62..39f8cbaff68 100644\n--- a/conans/model/build_info.py\n+++ b/conans/model/build_info.py\n@@ -556,14 +556,29 @@ def __getattr__(self, item):\n attr = self._cpp_info.__getattr__(item)\n return attr\n \n- def _aggregated_values(self, item, agg_func=merge_lists):\n+ def _aggregated_dict_values(self, item):\n values = getattr(self, \"_%s\" % item)\n if values is not None:\n return values\n- values = getattr(self._cpp_info, item)\n if self._cpp_info.components:\n+ values = {}\n for component in self._get_sorted_components().values():\n- values = agg_func(values, getattr(component, item))\n+ values = merge_dicts(values, getattr(component, item))\n+ else:\n+ values = getattr(self._cpp_info, item)\n+ setattr(self, \"_%s\" % item, values)\n+ return values\n+\n+ def _aggregated_list_values(self, item):\n+ values = getattr(self, \"_%s\" % item)\n+ if values is not None:\n+ return values\n+ if self._cpp_info.components:\n+ values = []\n+ for component in self._get_sorted_components().values():\n+ values = merge_lists(values, getattr(component, item))\n+ else:\n+ values = getattr(self._cpp_info, item)\n setattr(self, \"_%s\" % item, values)\n return values\n \n@@ -614,71 +629,71 @@ def _get_sorted_components(self):\n \n @property\n def build_modules_paths(self):\n- return self._aggregated_values(\"build_modules_paths\", agg_func=merge_dicts)\n+ return self._aggregated_dict_values(\"build_modules_paths\")\n \n @property\n def include_paths(self):\n- return self._aggregated_values(\"include_paths\")\n+ return self._aggregated_list_values(\"include_paths\")\n \n @property\n def lib_paths(self):\n- return self._aggregated_values(\"lib_paths\")\n+ return self._aggregated_list_values(\"lib_paths\")\n \n @property\n def src_paths(self):\n- return self._aggregated_values(\"src_paths\")\n+ return self._aggregated_list_values(\"src_paths\")\n \n @property\n def bin_paths(self):\n- return self._aggregated_values(\"bin_paths\")\n+ return self._aggregated_list_values(\"bin_paths\")\n \n @property\n def build_paths(self):\n- return self._aggregated_values(\"build_paths\")\n+ return self._aggregated_list_values(\"build_paths\")\n \n @property\n def res_paths(self):\n- return self._aggregated_values(\"res_paths\")\n+ return self._aggregated_list_values(\"res_paths\")\n \n @property\n def framework_paths(self):\n- return self._aggregated_values(\"framework_paths\")\n+ return self._aggregated_list_values(\"framework_paths\")\n \n @property\n def libs(self):\n- return self._aggregated_values(\"libs\")\n+ return self._aggregated_list_values(\"libs\")\n \n @property\n def system_libs(self):\n- return self._aggregated_values(\"system_libs\")\n+ return self._aggregated_list_values(\"system_libs\")\n \n @property\n def frameworks(self):\n- return self._aggregated_values(\"frameworks\")\n+ return self._aggregated_list_values(\"frameworks\")\n \n @property\n def defines(self):\n- return self._aggregated_values(\"defines\")\n+ return self._aggregated_list_values(\"defines\")\n \n @property\n def cxxflags(self):\n- return self._aggregated_values(\"cxxflags\")\n+ return self._aggregated_list_values(\"cxxflags\")\n \n @property\n def cflags(self):\n- return self._aggregated_values(\"cflags\")\n+ return self._aggregated_list_values(\"cflags\")\n \n @property\n def sharedlinkflags(self):\n- return self._aggregated_values(\"sharedlinkflags\")\n+ return self._aggregated_list_values(\"sharedlinkflags\")\n \n @property\n def 
exelinkflags(self):\n- return self._aggregated_values(\"exelinkflags\")\n+ return self._aggregated_list_values(\"exelinkflags\")\n \n @property\n def requires(self):\n- return self._aggregated_values(\"requires\")\n+ return self._aggregated_list_values(\"requires\")\n \n \n class DepsCppInfo(_BaseDepsCppInfo):\ndiff --git a/conans/test/unittests/model/build_info/components_test.py b/conans/test/unittests/model/build_info/components_test.py\nindex 01134759322..acb0e5d6c0b 100644\n--- a/conans/test/unittests/model/build_info/components_test.py\n+++ b/conans/test/unittests/model/build_info/components_test.py\n@@ -414,3 +414,18 @@ def test_deps_cpp_info_components(self):\n self.assertListEqual([os.path.join(folder, \"include\")], list(deps_cpp_info.include_paths))\n self.assertListEqual([os.path.join(folder, \"include\")],\n list(deps_cpp_info[\"my_lib\"].components[\"Component\"].include_paths))\n+\n+ def test_deps_cpp_info_components_includedirs(self):\n+ folder = temp_folder()\n+ info = CppInfo(\"my_lib\", folder)\n+ # Create files so paths are not cleared\n+ save(os.path.join(folder, \"lib\", \"kk.lib\"), \"\")\n+ save(os.path.join(folder, \"include\", \"kk.h\"), \"\")\n+ info.components[\"component\"].includedirs = [\"lib\", \"include\"]\n+ dep_info = DepCppInfo(info)\n+ expected = [os.path.join(folder, \"lib\"), os.path.join(folder, \"include\")]\n+ self.assertListEqual(expected, list(dep_info.include_paths))\n+ deps_cpp_info = DepsCppInfo()\n+ deps_cpp_info.add(\"my_lib\", dep_info)\n+ expected = [os.path.join(folder, \"lib\"), os.path.join(folder, \"include\")]\n+ self.assertListEqual(expected, list(deps_cpp_info.includedirs))\n"
}
|
[
{
"diff_hunk": "@@ -414,3 +414,18 @@ def test_deps_cpp_info_components(self):\n self.assertListEqual([os.path.join(folder, \"include\")], list(deps_cpp_info.include_paths))\n self.assertListEqual([os.path.join(folder, \"include\")],\n list(deps_cpp_info[\"my_lib\"].components[\"Component\"].include_paths))\n+\n+ def test_deps_cpp_info_components_includedirs(self):\n+ folder = temp_folder()\n+ info = CppInfo(\"my_lib\", folder)\n+ # Create files so paths are not cleared\n+ save(os.path.join(folder, \"lib\", \"kk.lib\"), \"\")",
"line": null,
"original_line": 422,
"original_start_line": null,
"path": "conans/test/unittests/model/build_info/components_test.py",
"start_line": null,
"text": "@user1:\nYou can use `info.filter_empty=False`"
}
] |
fd04d94a6a93d22b0aa976096c71eb2eea987011
|
diff --git a/conans/model/build_info.py b/conans/model/build_info.py
index 514f44c8b62..39f8cbaff68 100644
--- a/conans/model/build_info.py
+++ b/conans/model/build_info.py
@@ -556,14 +556,29 @@ def __getattr__(self, item):
attr = self._cpp_info.__getattr__(item)
return attr
- def _aggregated_values(self, item, agg_func=merge_lists):
+ def _aggregated_dict_values(self, item):
values = getattr(self, "_%s" % item)
if values is not None:
return values
- values = getattr(self._cpp_info, item)
if self._cpp_info.components:
+ values = {}
for component in self._get_sorted_components().values():
- values = agg_func(values, getattr(component, item))
+ values = merge_dicts(values, getattr(component, item))
+ else:
+ values = getattr(self._cpp_info, item)
+ setattr(self, "_%s" % item, values)
+ return values
+
+ def _aggregated_list_values(self, item):
+ values = getattr(self, "_%s" % item)
+ if values is not None:
+ return values
+ if self._cpp_info.components:
+ values = []
+ for component in self._get_sorted_components().values():
+ values = merge_lists(values, getattr(component, item))
+ else:
+ values = getattr(self._cpp_info, item)
setattr(self, "_%s" % item, values)
return values
@@ -614,71 +629,71 @@ def _get_sorted_components(self):
@property
def build_modules_paths(self):
- return self._aggregated_values("build_modules_paths", agg_func=merge_dicts)
+ return self._aggregated_dict_values("build_modules_paths")
@property
def include_paths(self):
- return self._aggregated_values("include_paths")
+ return self._aggregated_list_values("include_paths")
@property
def lib_paths(self):
- return self._aggregated_values("lib_paths")
+ return self._aggregated_list_values("lib_paths")
@property
def src_paths(self):
- return self._aggregated_values("src_paths")
+ return self._aggregated_list_values("src_paths")
@property
def bin_paths(self):
- return self._aggregated_values("bin_paths")
+ return self._aggregated_list_values("bin_paths")
@property
def build_paths(self):
- return self._aggregated_values("build_paths")
+ return self._aggregated_list_values("build_paths")
@property
def res_paths(self):
- return self._aggregated_values("res_paths")
+ return self._aggregated_list_values("res_paths")
@property
def framework_paths(self):
- return self._aggregated_values("framework_paths")
+ return self._aggregated_list_values("framework_paths")
@property
def libs(self):
- return self._aggregated_values("libs")
+ return self._aggregated_list_values("libs")
@property
def system_libs(self):
- return self._aggregated_values("system_libs")
+ return self._aggregated_list_values("system_libs")
@property
def frameworks(self):
- return self._aggregated_values("frameworks")
+ return self._aggregated_list_values("frameworks")
@property
def defines(self):
- return self._aggregated_values("defines")
+ return self._aggregated_list_values("defines")
@property
def cxxflags(self):
- return self._aggregated_values("cxxflags")
+ return self._aggregated_list_values("cxxflags")
@property
def cflags(self):
- return self._aggregated_values("cflags")
+ return self._aggregated_list_values("cflags")
@property
def sharedlinkflags(self):
- return self._aggregated_values("sharedlinkflags")
+ return self._aggregated_list_values("sharedlinkflags")
@property
def exelinkflags(self):
- return self._aggregated_values("exelinkflags")
+ return self._aggregated_list_values("exelinkflags")
@property
def requires(self):
- return self._aggregated_values("requires")
+ return self._aggregated_list_values("requires")
class DepsCppInfo(_BaseDepsCppInfo):
diff --git a/conans/test/unittests/model/build_info/components_test.py b/conans/test/unittests/model/build_info/components_test.py
index 01134759322..7fb758d9c4d 100644
--- a/conans/test/unittests/model/build_info/components_test.py
+++ b/conans/test/unittests/model/build_info/components_test.py
@@ -414,3 +414,14 @@ def test_deps_cpp_info_components(self):
self.assertListEqual([os.path.join(folder, "include")], list(deps_cpp_info.include_paths))
self.assertListEqual([os.path.join(folder, "include")],
list(deps_cpp_info["my_lib"].components["Component"].include_paths))
+
+ def test_deps_cpp_info_components_includedirs(self):
+ info = CppInfo("my_lib", "root")
+ info.components["component"].includedirs = ["include1", "include2"]
+ info.components["component"].filter_empty = False
+ dep_info = DepCppInfo(info)
+ expected = [os.path.join("root", "include1"), os.path.join("root", "include2")]
+ self.assertListEqual(expected, list(dep_info.include_paths))
+ deps_cpp_info = DepsCppInfo()
+ deps_cpp_info.add("my_lib", dep_info)
+ self.assertListEqual(expected, list(deps_cpp_info.includedirs))
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Documentation Updates"
}
|
conan-io__conan-8819@8adcb8e
|
conan-io/conan
|
Python
| 8,819
|
Feature : cross-building tests for new AutoTools build helper
|
closes: #8762
Changelog: Feature: Add cross-building tests for new AutoTools build helper.
Docs: omit
- [x] Refer to the issue that supports this Pull Request.
- [x] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.
- [x] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've followed the PEP8 style guides for Python code.
- [ ] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.
<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>
|
2021-04-16T12:56:56Z
|
[feature] Implementation of cross-build triplets in new conan.tools.gnu AutotoolsToolchain
- Triplets must be defined at toolchain time (``generate()``), and stored in a file or any mechanism that makes it as easy as possible for users to consume. In the worst case, yes, copy&paste from the file to avoid retyping.
- The ``Autotools()`` helper in the ``build()`` method will just use this from the result of ``generate()`` (a usage sketch follows this list).
- Use exclusively the build/host contexts and profiles. No usage of any detection or current platform, everything must come from profiles.
- If necessary use the new [conf] to define extra complements, but try to avoid it as much as possible and draft a minimalistic approach.
- It is fine if it only starts with build/host but not the target one.
- Try to include a functional test that really cross-builds, even if it doesn't pass CI now (we can talk to @czoido to install something, whichever cross-compiler is easiest to install). You can use @pytest.mark.xfail at the moment.
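A minimal recipe sketch of the flow described above (assuming AutotoolsToolchain and Autotools are importable from conan.tools.gnu, as in the patch below; the toolchain computes and persists the triplets at generate() time and the build helper only reads them back):

```
from conans import ConanFile
from conan.tools.gnu import AutotoolsToolchain, Autotools


class PkgConan(ConanFile):
    settings = "os", "compiler", "arch", "build_type"

    def generate(self):
        # Triplets come exclusively from the build/host profiles, no detection.
        tc = AutotoolsToolchain(self)
        tc.generate()

    def build(self):
        # Picks up the persisted --build/--host triplets and passes them to ./configure
        autotools = Autotools(self)
        autotools.configure()
        autotools.make()
```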
|
Please @SSE4 use the new ``CONAN_TOOLCHAIN_ARGS_FILE = "conanbuild.json"`` file to gather variables for ``Autotools`` instead of separate files, and let's use that across build systems. Otherwise it is becoming a bit cluttered.
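For reference, a sketch of what that shared args file carries and how the build-helper side reads it back (key names and the example values are taken from the merged patch and its iOS functional test below):

```
import json
import os

CONAN_TOOLCHAIN_ARGS_FILE = "conanbuild.json"  # shared across build system integrations

# Written at generate() time; keys with None values are dropped, e.g.
# {"build": "x86_64-apple-darwin", "host": "aarch64-apple-ios"}
if os.path.isfile(CONAN_TOOLCHAIN_ARGS_FILE):
    with open(CONAN_TOOLCHAIN_ARGS_FILE) as f:
        args = json.load(f)
    build = args.get("build")
    host = args.get("host")
    target = args.get("target")
```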
|
[
{
"body": "- Triplets must be defined at toolchain time (``generate()``), and stored in a file or any mechanism that makes as easy as possible to be used by users. In the worst case, yes, copy&paste from file to avoid retyping.\r\n- The ``Autootools()`` helper in the ``build()`` method will just use this from the result of ``generate()``\r\n- Use exclusively the build/host contexts and profiles. No usage of any detection or current platform, everything must come from profiles.\r\n- If necessary use the new [conf] to define extra complements, but try to avoid as much as possible and draft a minimalistic approach\r\n- It is fine if it only starts with build/host but not the target one\r\n- Try to include a functional test that really cross-build, even if it doesn't pass CI now (we can talk to @czoido to install something, should be the easiest to install cross-compiler), you can use an @pytest.mark.xfail at the moment\r\n",
"number": 8762,
"title": "[feature] Implementation of cross-build triplets in new conan.tools.gnu AutotoolsToolchain"
}
] |
4d00a3e51e8d1b6ac1431831b146b34a1d8829c7
|
{
"head_commit": "8adcb8ed8cb9aac41ff1b768331d5f1365668ab3",
"head_commit_message": "- cross-building tests for new AutoTools build helper\n\nSigned-off-by: SSE4 <[email protected]>",
"patch_to_review": "diff --git a/conan/tools/gnu/autotools.py b/conan/tools/gnu/autotools.py\nindex 6c912487b43..fd0329e3d1e 100644\n--- a/conan/tools/gnu/autotools.py\n+++ b/conan/tools/gnu/autotools.py\n@@ -1,6 +1,9 @@\n-import platform\n+import json\n+import os\n \n+from conan.tools import CONAN_TOOLCHAIN_ARGS_FILE\n from conan.tools._compilers import use_win_mingw\n+from conans.util.files import load\n \n \n class Autotools(object):\n@@ -20,9 +23,14 @@ def __init__(self, conanfile):\n self._build_type = conanfile.settings.get_safe(\"build_type\")\n self._compiler = conanfile.settings.get_safe(\"compiler\")\n self._compiler_version = conanfile.settings.get_safe(\"compiler.version\")\n-\n- # Precalculate build, host, target triplets\n- # TODO self.build, self.host, self.target = self._get_host_build_target_flags()\n+ self._build = None\n+ self._host = None\n+ self._target = None\n+ if os.path.isfile(CONAN_TOOLCHAIN_ARGS_FILE):\n+ args = json.loads(load(CONAN_TOOLCHAIN_ARGS_FILE))\n+ self._build = args[\"build\"] if \"build\" in args else None\n+ self._host = args[\"host\"] if \"host\" in args else None\n+ self._target = args[\"target\"] if \"target\" in args else None\n \n def configure(self):\n \"\"\"\n@@ -33,11 +41,13 @@ def configure(self):\n return\n configure_dir = \".\"\n \n- # TODO: Management of build, host, target triplet\n # TODO: Management of PKG_CONFIG_PATH\n # TODO: Implement management of --prefix, bindir, sbindir, libexecdir, libdir, includedir\n \n cmd = \"%s/configure\" % configure_dir\n+ cmd += ' --host=%s' % self._host if self._host else ''\n+ cmd += ' --build=%s' % self._build if self._build else ''\n+ cmd += ' --target=%s' % self._target if self._target else ''\n self._conanfile.output.info(\"Calling:\\n > %s\" % cmd)\n self._conanfile.run(cmd)\n \ndiff --git a/conan/tools/gnu/autotoolsgen.py b/conan/tools/gnu/autotoolsgen.py\nindex ac0b523fefc..ebe2d95b8be 100644\n--- a/conan/tools/gnu/autotoolsgen.py\n+++ b/conan/tools/gnu/autotoolsgen.py\n@@ -33,3 +33,5 @@ def generate(self):\n else:\n build_env.save_sh(\"conanbuildenv.sh\")\n run_env.save_sh(\"conanrunenv.sh\")\n+\n+ self.toolchain.generate()\ndiff --git a/conan/tools/gnu/autotoolstoolchain.py b/conan/tools/gnu/autotoolstoolchain.py\nindex c9358d85f70..d76704d8d5b 100644\n--- a/conan/tools/gnu/autotoolstoolchain.py\n+++ b/conan/tools/gnu/autotoolstoolchain.py\n@@ -1,7 +1,12 @@\n+import json\n+\n+from conan.tools import CONAN_TOOLCHAIN_ARGS_FILE\n from conan.tools._compilers import architecture_flag, build_type_flags\n from conan.tools.env import Environment\n # FIXME: need to refactor this import and bring to conan.tools\n from conans.client.build.cppstd_flags import cppstd_flag_new\n+from conans.client.tools.oss import cross_building, get_cross_building_settings, get_gnu_triplet\n+from conans.util.files import save\n \n \n class AutotoolsToolchain:\n@@ -30,6 +35,15 @@ def __init__(self, conanfile):\n # TODO: This is also covering compilers like Visual Studio, necessary to test it (&remove?)\n self.build_type_flags = build_type_flags(self._conanfile.settings)\n \n+ self._host = None\n+ self._build = None\n+ self._target = None\n+\n+ if cross_building(self._conanfile):\n+ os_build, arch_build, os_host, arch_host = get_cross_building_settings(self._conanfile)\n+ self._host = get_gnu_triplet(os_host, arch_host)\n+ self._build = get_gnu_triplet(os_build, arch_build)\n+\n def _rpaths_link(self):\n # TODO: Not implemented yet\n pass\n@@ -109,3 +123,9 @@ def generate(self):\n env = self.environment()\n 
env.save_sh(\"conanautotoolstoolchain.sh\")\n env.save_bat(\"conanautotoolstoolchain.bat\")\n+\n+ args = {\"build\": self._build,\n+ \"host\": self._host,\n+ \"target\": self._target}\n+ args = {k: v for k, v in args.items() if v is not None}\n+ save(CONAN_TOOLCHAIN_ARGS_FILE, json.dumps(args))\ndiff --git a/conans/test/assets/autotools.py b/conans/test/assets/autotools.py\nindex d074cf11cb9..473b291d252 100644\n--- a/conans/test/assets/autotools.py\n+++ b/conans/test/assets/autotools.py\n@@ -14,7 +14,9 @@\n {% if main and lib %}\n {{main}}_LDADD = {{ lib }}\n {% endif %}\n+\n \"\"\"\n+# newline at the end is important: m4: INTERNAL ERROR: recursive push_string!\n \n \n def gen_makefile_am(**context):\n@@ -30,7 +32,9 @@ def gen_makefile_am(**context):\n AM_PROG_AR\n AC_CONFIG_FILES([Makefile])\n AC_OUTPUT\n+\n \"\"\"\n+# newline at the end is important: m4: INTERNAL ERROR: recursive push_string!\n \n \n def gen_configure_ac(**context):\ndiff --git a/conans/test/functional/toolchains/gnu/test_autotools.py b/conans/test/functional/toolchains/gnu/test_autotools.py\nindex 59f7afeb06d..8bf518023d3 100644\n--- a/conans/test/functional/toolchains/gnu/test_autotools.py\n+++ b/conans/test/functional/toolchains/gnu/test_autotools.py\n@@ -13,7 +13,7 @@\n from conans.util.files import touch\n \n \[email protected](platform.system() != \"Linux\", reason=\"Requires Autotools\")\[email protected](platform.system() not in [\"Linux\", \"Darwin\"], reason=\"Requires Autotools\")\n @pytest.mark.tool_autotools()\n def test_autotools():\n client = TestClient(path_with_spaces=False)\n@@ -50,7 +50,8 @@ def build(self):\n client.run(\"install .\")\n client.run(\"build .\")\n client.run_command(\"./main\")\n- check_exe_run(client.out, \"main\", \"gcc\", None, \"Release\", \"x86_64\", None, cxx11_abi=0)\n+ cxx11_abi = 0 if platform.system() == \"Linux\" else None\n+ check_exe_run(client.out, \"main\", \"gcc\", None, \"Release\", \"x86_64\", None, cxx11_abi=cxx11_abi)\n assert \"hello/0.1: Hello World Release!\" in client.out\n \n \ndiff --git a/conans/test/functional/toolchains/gnu/test_ios.py b/conans/test/functional/toolchains/gnu/test_ios.py\nnew file mode 100644\nindex 00000000000..bb5191ff3ff\n--- /dev/null\n+++ b/conans/test/functional/toolchains/gnu/test_ios.py\n@@ -0,0 +1,77 @@\n+import platform\n+import textwrap\n+\n+import pytest\n+\n+from conans.client.tools.apple import XCRun, to_apple_arch\n+from conans.test.assets.autotools import gen_makefile_am, gen_configure_ac\n+from conans.test.assets.sources import gen_function_cpp\n+from conans.test.utils.tools import TestClient\n+\n+\[email protected](platform.system() != \"Darwin\", reason=\"Requires Xcode\")\n+def test_ios():\n+ xcrun = XCRun(None, sdk='iphoneos')\n+ cflags = \"\"\n+ cflags += \" -isysroot \" + xcrun.sdk_path\n+ cflags += \" -arch \" + to_apple_arch('armv8')\n+ cxxflags = cflags\n+ ldflags = cflags\n+\n+ profile = textwrap.dedent(\"\"\"\n+ include(default)\n+ [settings]\n+ os=iOS\n+ os.version=12.0\n+ arch=armv8\n+ [env]\n+ CC={cc}\n+ CXX={cxx}\n+ CFLAGS={cflags}\n+ CXXFLAGS={cxxflags}\n+ LDFLAGS={ldflags}\n+ \"\"\").format(cc=xcrun.cc, cxx=xcrun.cxx, cflags=cflags, cxxflags=cxxflags, ldflags=ldflags)\n+\n+ client = TestClient(path_with_spaces=False)\n+ client.save({\"m1\": profile}, clean_first=True)\n+ client.run(\"new hello/0.1 --template=v2_cmake\")\n+ client.run(\"create . 
--profile:build=default --profile:host=m1 -tf None\")\n+\n+ main = gen_function_cpp(name=\"main\", includes=[\"hello\"], calls=[\"hello\"])\n+ makefile_am = gen_makefile_am(main=\"main\", main_srcs=\"main.cpp\")\n+ configure_ac = gen_configure_ac()\n+\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ from conan.tools.gnu import Autotools\n+\n+ class TestConan(ConanFile):\n+ requires = \"hello/0.1\"\n+ settings = \"os\", \"compiler\", \"arch\", \"build_type\"\n+ exports_sources = \"configure.ac\", \"Makefile.am\", \"main.cpp\"\n+ generators = \"AutotoolsGen\"\n+\n+ def build(self):\n+ self.run(\"aclocal\")\n+ self.run(\"autoconf\")\n+ self.run(\"automake --add-missing --foreign\")\n+ autotools = Autotools(self)\n+ autotools.configure()\n+ autotools.make()\n+ \"\"\")\n+\n+ client.save({\"conanfile.py\": conanfile,\n+ \"configure.ac\": configure_ac,\n+ \"Makefile.am\": makefile_am,\n+ \"main.cpp\": main,\n+ \"m1\": profile}, clean_first=True)\n+ client.run(\"install . --profile:build=default --profile:host=m1\")\n+ client.run(\"build .\")\n+ client.run_command(\"./main\", assert_error=True)\n+ assert \"Bad CPU type in executable\" in client.out\n+ client.run_command(\"lipo -info main\")\n+ assert \"Non-fat file: main is architecture: arm64\" in client.out\n+\n+ js = client.load(\"conanbuild.json\")\n+ assert '{\"build\": \"x86_64-apple-darwin\", \"host\": \"aarch64-apple-ios\"}' in js\n+\ndiff --git a/conans/test/functional/utils.py b/conans/test/functional/utils.py\nindex ca38ff5e7d0..448055743a5 100644\n--- a/conans/test/functional/utils.py\n+++ b/conans/test/functional/utils.py\n@@ -46,20 +46,33 @@ def check_exe_run(output, names, compiler, version, build_type, arch, cppstd, de\n assert \"{} _MSC_VER{}\".format(name, version.replace(\".\", \"\")) in output\n assert \"{} _MSVC_LANG20{}\".format(name, cppstd) in output\n \n- elif compiler == \"gcc\":\n- assert \"{} __GNUC__\".format(name) in output\n-\n+ elif compiler in [\"gcc\", \"clang\", \"apple-clang\"]:\n+ if compiler == \"gcc\":\n+ assert \"{} __GNUC__\".format(name) in output\n+ if version: # FIXME: At the moment, the GCC version is not controlled, will change\n+ major, minor = version.split(\".\")[0:2]\n+ assert \"{} __GNUC__{}\".format(name, major) in output\n+ assert \"{} __GNUC_MINOR__{}\".format(name, minor) in output\n+ elif compiler == \"clang\":\n+ assert \"{} __clang__\".format(name) in output\n+ if version:\n+ major, minor = version.split(\".\")[0:2]\n+ assert \"{} __clang_major__{}\".format(name, major) in output\n+ assert \"{} __clang_minor__{}\".format(name, minor) in output\n+ elif compiler == \"apple-clang\":\n+ assert \"{} __apple_build_version__\".format(name) in output\n+ if version:\n+ major, minor = version.split(\".\")[0:2]\n+ assert \"{} __apple_build_version__{}{}\".format(name, major, minor) in output\n if arch == \"x86\":\n assert \"{} __i386__ defined\".format(name) in output\n elif arch == \"x86_64\":\n assert \"{} __x86_64__ defined\".format(name) in output\n+ elif arch == \"armv8\":\n+ assert \"{} __aarch64__ defined\".format(name) in output\n else:\n assert arch is None, \"checked don't know how to validate this architecture\"\n \n- if version: # FIXME: At the moment, the GCC version is not controlled, will change\n- major, minor = version.split(\".\")[0:2]\n- assert \"{} __GNUC__{}\".format(name, major) in output\n- assert \"{} __GNUC_MINOR__{}\".format(name, minor) in output\n if cppstd:\n cppstd_value = {\"98\": \"199711\",\n \"11\": \"201103\",\n@@ -70,10 +83,6 @@ def 
check_exe_run(output, names, compiler, version, build_type, arch, cppstd, de\n if cxx11_abi is not None:\n assert \"{} _GLIBCXX_USE_CXX11_ABI {}\".format(name, cxx11_abi) in output\n \n- elif compiler == \"apple-clang\":\n- # TODO: apple-clang requires checks too\n- pass\n-\n if definitions:\n for k, v in definitions.items():\n assert \"{}: {}\".format(k, v) in output\n"
}
|
[
{
"diff_hunk": "@@ -1,7 +1,12 @@\n+import json\n+\n+from conan.tools import CONAN_TOOLCHAIN_ARGS_FILE\n from conan.tools._compilers import architecture_flag, build_type_flags\n from conan.tools.env import Environment\n # FIXME: need to refactor this import and bring to conan.tools\n from conans.client.build.cppstd_flags import cppstd_flag_new\n+from conans.client.tools.oss import cross_building, get_cross_building_settings, get_gnu_triplet",
"line": null,
"original_line": 8,
"original_start_line": null,
"path": "conan/tools/gnu/autotoolstoolchain.py",
"start_line": null,
"text": "@user1:\nIn general try to avoid things from old ``conans.client....`` scope. The ``cross_building, get_cross_building_settings, get_gnu_triplet`` should be copied here, because they are going to be modernized. This one already has a FIXME above, so it can be done in a later PR, but at least the idea for other PRs.\n\n@author:\nokay, let's copy them\n\n@author:\nmade a copies of `cross_building`, `get_cross_building_settings`, `get_gnu_triplet` into `conan.tools.oss`. it's like a spider web, one method pulling another one from the `conans.client.tools`, so I had to cut `OSInfo` to only used methods to avoid copying almost everything now (e.g. it requires `which`, `environment_append` and so on for full functionality, it actually does to much - classic god object).\n\n@user1:\nyes, this is one of the purposes of doing this copy. All this should be distilled and reduced much, specially if using both profiles, and not needing to detect anything at all. This will follow in future PRs."
},
{
"diff_hunk": "@@ -33,3 +33,5 @@ def generate(self):\n else:\n build_env.save_sh(\"conanbuildenv.sh\")\n run_env.save_sh(\"conanrunenv.sh\")\n+\n+ self.toolchain.generate()",
"line": null,
"original_line": 37,
"original_start_line": null,
"path": "conan/tools/gnu/autotoolsgen.py",
"start_line": null,
"text": "@user1:\nThis is confusing, as it will generate an extra, unused file. \r\nMove the AutotoolsToolchain CONAN_TOOLCHAIN_ARGS_FILE functionality to its own method, and call just that method.\n\n@author:\nI am not sure I got the idea behind the new method, but I'll do\n\n@user1:\nTo avoid the creation of an extra ``conanautotooltoolchain.sh`` script that will not be used.\n\n@author:\nokay, I've extracted the method `generate_args`.\r\nstill would be nice if you could explain your idea a bit more, since for CMake toolchain we don't do a file, it just writes `CONAN_TOOLCHAIN_ARGS_FILE` inside the `generate`.\r\nI am not sure what is an extra and unused file, as `CONAN_TOOLCHAIN_ARGS_FILE` is definitely used.\r\nand as I understand the point of `generate` is for developer to have exactly all the files needed, so I am not sure why some files belong to `generate`, but some don't."
}
] |
c226955552c9c95a3223af8c605467a618b9c714
|
diff --git a/conan/tools/gnu/autotools.py b/conan/tools/gnu/autotools.py
index 6c912487b43..fd0329e3d1e 100644
--- a/conan/tools/gnu/autotools.py
+++ b/conan/tools/gnu/autotools.py
@@ -1,6 +1,9 @@
-import platform
+import json
+import os
+from conan.tools import CONAN_TOOLCHAIN_ARGS_FILE
from conan.tools._compilers import use_win_mingw
+from conans.util.files import load
class Autotools(object):
@@ -20,9 +23,14 @@ def __init__(self, conanfile):
self._build_type = conanfile.settings.get_safe("build_type")
self._compiler = conanfile.settings.get_safe("compiler")
self._compiler_version = conanfile.settings.get_safe("compiler.version")
-
- # Precalculate build, host, target triplets
- # TODO self.build, self.host, self.target = self._get_host_build_target_flags()
+ self._build = None
+ self._host = None
+ self._target = None
+ if os.path.isfile(CONAN_TOOLCHAIN_ARGS_FILE):
+ args = json.loads(load(CONAN_TOOLCHAIN_ARGS_FILE))
+ self._build = args["build"] if "build" in args else None
+ self._host = args["host"] if "host" in args else None
+ self._target = args["target"] if "target" in args else None
def configure(self):
"""
@@ -33,11 +41,13 @@ def configure(self):
return
configure_dir = "."
- # TODO: Management of build, host, target triplet
# TODO: Management of PKG_CONFIG_PATH
# TODO: Implement management of --prefix, bindir, sbindir, libexecdir, libdir, includedir
cmd = "%s/configure" % configure_dir
+ cmd += ' --host=%s' % self._host if self._host else ''
+ cmd += ' --build=%s' % self._build if self._build else ''
+ cmd += ' --target=%s' % self._target if self._target else ''
self._conanfile.output.info("Calling:\n > %s" % cmd)
self._conanfile.run(cmd)
diff --git a/conan/tools/gnu/autotoolsgen.py b/conan/tools/gnu/autotoolsgen.py
index ac0b523fefc..ef3d459c443 100644
--- a/conan/tools/gnu/autotoolsgen.py
+++ b/conan/tools/gnu/autotoolsgen.py
@@ -33,3 +33,5 @@ def generate(self):
else:
build_env.save_sh("conanbuildenv.sh")
run_env.save_sh("conanrunenv.sh")
+
+ self.toolchain.generate_args()
diff --git a/conan/tools/gnu/autotoolstoolchain.py b/conan/tools/gnu/autotoolstoolchain.py
index c9358d85f70..5313c8864bd 100644
--- a/conan/tools/gnu/autotoolstoolchain.py
+++ b/conan/tools/gnu/autotoolstoolchain.py
@@ -1,7 +1,14 @@
+import json
+
+from conan.tools import CONAN_TOOLCHAIN_ARGS_FILE
from conan.tools._compilers import architecture_flag, build_type_flags
from conan.tools.env import Environment
+from conan.tools.gnu.cross_building import _cross_building
+from conan.tools.gnu.get_cross_building_settings import _get_cross_building_settings
+from conan.tools.gnu.get_gnu_triplet import _get_gnu_triplet
# FIXME: need to refactor this import and bring to conan.tools
from conans.client.build.cppstd_flags import cppstd_flag_new
+from conans.util.files import save
class AutotoolsToolchain:
@@ -30,6 +37,15 @@ def __init__(self, conanfile):
# TODO: This is also covering compilers like Visual Studio, necessary to test it (&remove?)
self.build_type_flags = build_type_flags(self._conanfile.settings)
+ self._host = None
+ self._build = None
+ self._target = None
+
+ if _cross_building(self._conanfile):
+ os_build, arch_build, os_host, arch_host = _get_cross_building_settings(self._conanfile)
+ self._host = _get_gnu_triplet(os_host, arch_host)
+ self._build = _get_gnu_triplet(os_build, arch_build)
+
def _rpaths_link(self):
# TODO: Not implemented yet
pass
@@ -109,3 +125,11 @@ def generate(self):
env = self.environment()
env.save_sh("conanautotoolstoolchain.sh")
env.save_bat("conanautotoolstoolchain.bat")
+ self.generate_args()
+
+ def generate_args(self):
+ args = {"build": self._build,
+ "host": self._host,
+ "target": self._target}
+ args = {k: v for k, v in args.items() if v is not None}
+ save(CONAN_TOOLCHAIN_ARGS_FILE, json.dumps(args))
diff --git a/conan/tools/gnu/cross_building.py b/conan/tools/gnu/cross_building.py
new file mode 100644
index 00000000000..64ea87316d2
--- /dev/null
+++ b/conan/tools/gnu/cross_building.py
@@ -0,0 +1,45 @@
+import warnings
+from collections import namedtuple
+
+from conan.tools.gnu.get_cross_building_settings import _get_cross_building_settings
+from conans.errors import ConanException
+
+
+def _cross_building(conanfile=None, self_os=None, self_arch=None, skip_x64_x86=False, settings=None):
+ # Handle input arguments (backwards compatibility with 'settings' as first argument)
+ # TODO: This can be promoted to a decorator pattern for tools if we adopt 'conanfile' as the
+ # first argument for all of them.
+ if conanfile and settings:
+ raise ConanException("Do not set both arguments, 'conanfile' and 'settings',"
+ " to call cross_building function")
+
+ from conans.model.conan_file import ConanFile
+ if conanfile and not isinstance(conanfile, ConanFile):
+ return _cross_building(settings=conanfile, self_os=self_os, self_arch=self_arch,
+ skip_x64_x86=skip_x64_x86)
+
+ if settings:
+ warnings.warn("Argument 'settings' has been deprecated, use 'conanfile' instead")
+
+ if conanfile:
+ ret = _get_cross_building_settings(conanfile, self_os, self_arch)
+ else:
+ # TODO: If Conan is using 'profile_build' here we don't have any information about it,
+ # we are falling back to the old behavior (which is probably wrong here)
+ conanfile = namedtuple('_ConanFile', ['settings'])(settings)
+ ret = _get_cross_building_settings(conanfile, self_os, self_arch)
+
+ build_os, build_arch, host_os, host_arch = ret
+
+ if skip_x64_x86 and host_os is not None and (build_os == host_os) and \
+ host_arch is not None and ((build_arch == "x86_64") and (host_arch == "x86") or
+ (build_arch == "sparcv9") and (host_arch == "sparc") or
+ (build_arch == "ppc64") and (host_arch == "ppc32")):
+ return False
+
+ if host_os is not None and (build_os != host_os):
+ return True
+ if host_arch is not None and (build_arch != host_arch):
+ return True
+
+ return False
diff --git a/conan/tools/gnu/get_cross_building_settings.py b/conan/tools/gnu/get_cross_building_settings.py
new file mode 100644
index 00000000000..3b4e89a6da4
--- /dev/null
+++ b/conan/tools/gnu/get_cross_building_settings.py
@@ -0,0 +1,165 @@
+import os
+import platform
+
+from conans.model.version import Version
+
+
+class _OSInfo(object):
+ """ Usage:
+ (os_info.is_linux) # True/False
+ (os_info.is_windows) # True/False
+ (os_info.is_macos) # True/False
+ (os_info.is_freebsd) # True/False
+ (os_info.is_solaris) # True/False
+
+ (os_info.linux_distro) # debian, ubuntu, fedora, centos...
+
+ (os_info.os_version) # 5.1
+ (os_info.os_version_name) # Windows 7, El Capitan
+
+ if os_info.os_version > "10.1":
+ pass
+ if os_info.os_version == "10.1.0":
+ pass
+ """
+
+ def __init__(self):
+ system = platform.system()
+ self.os_version = None
+ self.os_version_name = None
+ self.is_linux = system == "Linux"
+ self.linux_distro = None
+ self.is_msys = system.startswith("MING") or system.startswith("MSYS_NT")
+ self.is_cygwin = system.startswith("CYGWIN_NT")
+ self.is_windows = system == "Windows" or self.is_msys or self.is_cygwin
+ self.is_macos = system == "Darwin"
+ self.is_freebsd = system == "FreeBSD"
+ self.is_solaris = system == "SunOS"
+ self.is_aix = system == "AIX"
+ self.is_posix = os.pathsep == ':'
+
+ def _get_linux_distro_info(self):
+ import distro
+ self.linux_distro = distro.id()
+ self.os_version = Version(distro.version())
+ version_name = distro.codename()
+ self.os_version_name = version_name if version_name != "n/a" else ""
+ if not self.os_version_name and self.linux_distro == "debian":
+ self.os_version_name = self.get_debian_version_name(self.os_version)
+
+ @staticmethod
+ def get_aix_architecture():
+ processor = platform.processor()
+ if "powerpc" in processor:
+ kernel_bitness = _OSInfo().get_aix_conf("KERNEL_BITMODE")
+ if kernel_bitness:
+ return "ppc64" if kernel_bitness == "64" else "ppc32"
+ elif "rs6000" in processor:
+ return "ppc32"
+
+ @staticmethod
+ def get_solaris_architecture():
+ # under intel solaris, platform.machine()=='i86pc' so we need to handle
+ # it early to support 64-bit
+ processor = platform.processor()
+ kernel_bitness, elf = platform.architecture()
+ if "sparc" in processor:
+ return "sparcv9" if kernel_bitness == "64bit" else "sparc"
+ elif "i386" in processor:
+ return "x86_64" if kernel_bitness == "64bit" else "x86"
+
+ @staticmethod
+ def get_e2k_architecture():
+ return {
+ "E1C+": "e2k-v4", # Elbrus 1C+ and Elbrus 1CK
+ "E2C+": "e2k-v2", # Elbrus 2CM
+ "E2C+DSP": "e2k-v2", # Elbrus 2C+
+ "E2C3": "e2k-v6", # Elbrus 2C3
+ "E2S": "e2k-v3", # Elbrus 2S (aka Elbrus 4C)
+ "E8C": "e2k-v4", # Elbrus 8C and Elbrus 8C1
+ "E8C2": "e2k-v5", # Elbrus 8C2 (aka Elbrus 8CB)
+ "E12C": "e2k-v6", # Elbrus 12C
+ "E16C": "e2k-v6", # Elbrus 16C
+ "E32C": "e2k-v7", # Elbrus 32C
+ }.get(platform.processor())
+
+
+def _detected_os():
+ if _OSInfo().is_macos:
+ return "Macos"
+ if _OSInfo().is_windows:
+ return "Windows"
+ return platform.system()
+
+
+def _detected_architecture():
+ # FIXME: Very weak check but not very common to run conan in other architectures
+ machine = platform.machine()
+ os_info = _OSInfo()
+ arch = None
+
+ if os_info.is_solaris:
+ arch = _OSInfo.get_solaris_architecture()
+ elif os_info.is_aix:
+ arch = _OSInfo.get_aix_architecture()
+
+ if arch:
+ return arch
+
+ if "ppc64le" in machine:
+ return "ppc64le"
+ elif "ppc64" in machine:
+ return "ppc64"
+ elif "ppc" in machine:
+ return "ppc32"
+ elif "mips64" in machine:
+ return "mips64"
+ elif "mips" in machine:
+ return "mips"
+ elif "sparc64" in machine:
+ return "sparcv9"
+ elif "sparc" in machine:
+ return "sparc"
+ elif "aarch64" in machine:
+ return "armv8"
+ elif "arm64" in machine:
+ return "armv8"
+ elif "64" in machine:
+ return "x86_64"
+ elif "86" in machine:
+ return "x86"
+ elif "armv8" in machine:
+ return "armv8"
+ elif "armv7" in machine:
+ return "armv7"
+ elif "arm" in machine:
+ return "armv6"
+ elif "s390x" in machine:
+ return "s390x"
+ elif "s390" in machine:
+ return "s390"
+ elif "sun4v" in machine:
+ return "sparc"
+ elif "e2k" in machine:
+ return _OSInfo.get_e2k_architecture()
+
+
+def _get_build_os_arch(conanfile):
+ """ Returns the value for the 'os' and 'arch' settings for the build context """
+ if hasattr(conanfile, 'settings_build'):
+ return conanfile.settings_build.get_safe('os'), conanfile.settings_build.get_safe('arch')
+ else:
+ return conanfile.settings.get_safe('os_build'), conanfile.settings.get_safe('arch_build')
+
+
+def _get_cross_building_settings(conanfile, self_os=None, self_arch=None):
+ os_build, arch_build = _get_build_os_arch(conanfile)
+ if not hasattr(conanfile, 'settings_build'):
+ # Let it override from outside only if no 'profile_build' is used
+ os_build = self_os or os_build or _detected_os()
+ arch_build = self_arch or arch_build or _detected_architecture()
+
+ os_host = conanfile.settings.get_safe("os")
+ arch_host = conanfile.settings.get_safe("arch")
+
+ return os_build, arch_build, os_host, arch_host
diff --git a/conan/tools/gnu/get_gnu_triplet.py b/conan/tools/gnu/get_gnu_triplet.py
new file mode 100644
index 00000000000..c082f3007aa
--- /dev/null
+++ b/conan/tools/gnu/get_gnu_triplet.py
@@ -0,0 +1,99 @@
+from conans.errors import ConanException
+
+
+def _get_gnu_triplet(os_, arch, compiler=None):
+ """
+ Returns string with <machine>-<vendor>-<op_system> triplet (<vendor> can be omitted in practice)
+
+ :param os_: os to be used to create the triplet
+ :param arch: arch to be used to create the triplet
+ :param compiler: compiler used to create the triplet (only needed for windows)
+ """
+
+ if os_ == "Windows" and compiler is None:
+ raise ConanException("'compiler' parameter for 'get_gnu_triplet()' is not specified and "
+ "needed for os=Windows")
+
+ # Calculate the arch
+ machine = {"x86": "i686" if os_ != "Linux" else "x86",
+ "x86_64": "x86_64",
+ "armv8": "aarch64",
+ "armv8_32": "aarch64", # https://wiki.linaro.org/Platform/arm64-ilp32
+ "armv8.3": "aarch64",
+ "asm.js": "asmjs",
+ "wasm": "wasm32",
+ }.get(arch, None)
+
+ if not machine:
+ # https://wiki.debian.org/Multiarch/Tuples
+ if os_ == "AIX":
+ if "ppc32" in arch:
+ machine = "rs6000"
+ elif "ppc64" in arch:
+ machine = "powerpc"
+ elif "arm" in arch:
+ machine = "arm"
+ elif "ppc32be" in arch:
+ machine = "powerpcbe"
+ elif "ppc64le" in arch:
+ machine = "powerpc64le"
+ elif "ppc64" in arch:
+ machine = "powerpc64"
+ elif "ppc32" in arch:
+ machine = "powerpc"
+ elif "mips64" in arch:
+ machine = "mips64"
+ elif "mips" in arch:
+ machine = "mips"
+ elif "sparcv9" in arch:
+ machine = "sparc64"
+ elif "sparc" in arch:
+ machine = "sparc"
+ elif "s390x" in arch:
+ machine = "s390x-ibm"
+ elif "s390" in arch:
+ machine = "s390-ibm"
+ elif "sh4" in arch:
+ machine = "sh4"
+ elif "e2k" in arch:
+ # https://lists.gnu.org/archive/html/config-patches/2015-03/msg00000.html
+ machine = "e2k-unknown"
+
+ if machine is None:
+ raise ConanException("Unknown '%s' machine, Conan doesn't know how to "
+ "translate it to the GNU triplet, please report at "
+ " https://github.com/conan-io/conan/issues" % arch)
+
+ # Calculate the OS
+ if compiler == "gcc":
+ windows_op = "w64-mingw32"
+ elif compiler == "Visual Studio":
+ windows_op = "windows-msvc"
+ else:
+ windows_op = "windows"
+
+ op_system = {"Windows": windows_op,
+ "Linux": "linux-gnu",
+ "Darwin": "apple-darwin",
+ "Android": "linux-android",
+ "Macos": "apple-darwin",
+ "iOS": "apple-ios",
+ "watchOS": "apple-watchos",
+ "tvOS": "apple-tvos",
+ # NOTE: it technically must be "asmjs-unknown-emscripten" or
+ # "wasm32-unknown-emscripten", but it's not recognized by old config.sub versions
+ "Emscripten": "local-emscripten",
+ "AIX": "ibm-aix",
+ "Neutrino": "nto-qnx"}.get(os_, os_.lower())
+
+ if os_ in ("Linux", "Android"):
+ if "arm" in arch and "armv8" not in arch:
+ op_system += "eabi"
+
+ if (arch == "armv5hf" or arch == "armv7hf") and os_ == "Linux":
+ op_system += "hf"
+
+ if arch == "armv8_32" and os_ == "Linux":
+ op_system += "_ilp32" # https://wiki.linaro.org/Platform/arm64-ilp32
+
+ return "%s-%s" % (machine, op_system)
diff --git a/conan/tools/oss/__init__.py b/conan/tools/oss/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/conans/test/assets/autotools.py b/conans/test/assets/autotools.py
index d074cf11cb9..473b291d252 100644
--- a/conans/test/assets/autotools.py
+++ b/conans/test/assets/autotools.py
@@ -14,7 +14,9 @@
{% if main and lib %}
{{main}}_LDADD = {{ lib }}
{% endif %}
+
"""
+# newline at the end is important: m4: INTERNAL ERROR: recursive push_string!
def gen_makefile_am(**context):
@@ -30,7 +32,9 @@ def gen_makefile_am(**context):
AM_PROG_AR
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
+
"""
+# newline at the end is important: m4: INTERNAL ERROR: recursive push_string!
def gen_configure_ac(**context):
diff --git a/conans/test/functional/toolchains/gnu/test_autotools.py b/conans/test/functional/toolchains/gnu/test_autotools.py
index 59f7afeb06d..8bf518023d3 100644
--- a/conans/test/functional/toolchains/gnu/test_autotools.py
+++ b/conans/test/functional/toolchains/gnu/test_autotools.py
@@ -13,7 +13,7 @@
from conans.util.files import touch
[email protected](platform.system() != "Linux", reason="Requires Autotools")
[email protected](platform.system() not in ["Linux", "Darwin"], reason="Requires Autotools")
@pytest.mark.tool_autotools()
def test_autotools():
client = TestClient(path_with_spaces=False)
@@ -50,7 +50,8 @@ def build(self):
client.run("install .")
client.run("build .")
client.run_command("./main")
- check_exe_run(client.out, "main", "gcc", None, "Release", "x86_64", None, cxx11_abi=0)
+ cxx11_abi = 0 if platform.system() == "Linux" else None
+ check_exe_run(client.out, "main", "gcc", None, "Release", "x86_64", None, cxx11_abi=cxx11_abi)
assert "hello/0.1: Hello World Release!" in client.out
diff --git a/conans/test/functional/toolchains/gnu/test_ios.py b/conans/test/functional/toolchains/gnu/test_ios.py
new file mode 100644
index 00000000000..bb5191ff3ff
--- /dev/null
+++ b/conans/test/functional/toolchains/gnu/test_ios.py
@@ -0,0 +1,77 @@
+import platform
+import textwrap
+
+import pytest
+
+from conans.client.tools.apple import XCRun, to_apple_arch
+from conans.test.assets.autotools import gen_makefile_am, gen_configure_ac
+from conans.test.assets.sources import gen_function_cpp
+from conans.test.utils.tools import TestClient
+
+
[email protected](platform.system() != "Darwin", reason="Requires Xcode")
+def test_ios():
+ xcrun = XCRun(None, sdk='iphoneos')
+ cflags = ""
+ cflags += " -isysroot " + xcrun.sdk_path
+ cflags += " -arch " + to_apple_arch('armv8')
+ cxxflags = cflags
+ ldflags = cflags
+
+ profile = textwrap.dedent("""
+ include(default)
+ [settings]
+ os=iOS
+ os.version=12.0
+ arch=armv8
+ [env]
+ CC={cc}
+ CXX={cxx}
+ CFLAGS={cflags}
+ CXXFLAGS={cxxflags}
+ LDFLAGS={ldflags}
+ """).format(cc=xcrun.cc, cxx=xcrun.cxx, cflags=cflags, cxxflags=cxxflags, ldflags=ldflags)
+
+ client = TestClient(path_with_spaces=False)
+ client.save({"m1": profile}, clean_first=True)
+ client.run("new hello/0.1 --template=v2_cmake")
+ client.run("create . --profile:build=default --profile:host=m1 -tf None")
+
+ main = gen_function_cpp(name="main", includes=["hello"], calls=["hello"])
+ makefile_am = gen_makefile_am(main="main", main_srcs="main.cpp")
+ configure_ac = gen_configure_ac()
+
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+ from conan.tools.gnu import Autotools
+
+ class TestConan(ConanFile):
+ requires = "hello/0.1"
+ settings = "os", "compiler", "arch", "build_type"
+ exports_sources = "configure.ac", "Makefile.am", "main.cpp"
+ generators = "AutotoolsGen"
+
+ def build(self):
+ self.run("aclocal")
+ self.run("autoconf")
+ self.run("automake --add-missing --foreign")
+ autotools = Autotools(self)
+ autotools.configure()
+ autotools.make()
+ """)
+
+ client.save({"conanfile.py": conanfile,
+ "configure.ac": configure_ac,
+ "Makefile.am": makefile_am,
+ "main.cpp": main,
+ "m1": profile}, clean_first=True)
+ client.run("install . --profile:build=default --profile:host=m1")
+ client.run("build .")
+ client.run_command("./main", assert_error=True)
+ assert "Bad CPU type in executable" in client.out
+ client.run_command("lipo -info main")
+ assert "Non-fat file: main is architecture: arm64" in client.out
+
+ js = client.load("conanbuild.json")
+ assert '{"build": "x86_64-apple-darwin", "host": "aarch64-apple-ios"}' in js
+
diff --git a/conans/test/functional/utils.py b/conans/test/functional/utils.py
index ca38ff5e7d0..448055743a5 100644
--- a/conans/test/functional/utils.py
+++ b/conans/test/functional/utils.py
@@ -46,20 +46,33 @@ def check_exe_run(output, names, compiler, version, build_type, arch, cppstd, de
assert "{} _MSC_VER{}".format(name, version.replace(".", "")) in output
assert "{} _MSVC_LANG20{}".format(name, cppstd) in output
- elif compiler == "gcc":
- assert "{} __GNUC__".format(name) in output
-
+ elif compiler in ["gcc", "clang", "apple-clang"]:
+ if compiler == "gcc":
+ assert "{} __GNUC__".format(name) in output
+ if version: # FIXME: At the moment, the GCC version is not controlled, will change
+ major, minor = version.split(".")[0:2]
+ assert "{} __GNUC__{}".format(name, major) in output
+ assert "{} __GNUC_MINOR__{}".format(name, minor) in output
+ elif compiler == "clang":
+ assert "{} __clang__".format(name) in output
+ if version:
+ major, minor = version.split(".")[0:2]
+ assert "{} __clang_major__{}".format(name, major) in output
+ assert "{} __clang_minor__{}".format(name, minor) in output
+ elif compiler == "apple-clang":
+ assert "{} __apple_build_version__".format(name) in output
+ if version:
+ major, minor = version.split(".")[0:2]
+ assert "{} __apple_build_version__{}{}".format(name, major, minor) in output
if arch == "x86":
assert "{} __i386__ defined".format(name) in output
elif arch == "x86_64":
assert "{} __x86_64__ defined".format(name) in output
+ elif arch == "armv8":
+ assert "{} __aarch64__ defined".format(name) in output
else:
assert arch is None, "checked don't know how to validate this architecture"
- if version: # FIXME: At the moment, the GCC version is not controlled, will change
- major, minor = version.split(".")[0:2]
- assert "{} __GNUC__{}".format(name, major) in output
- assert "{} __GNUC_MINOR__{}".format(name, minor) in output
if cppstd:
cppstd_value = {"98": "199711",
"11": "201103",
@@ -70,10 +83,6 @@ def check_exe_run(output, names, compiler, version, build_type, arch, cppstd, de
if cxx11_abi is not None:
assert "{} _GLIBCXX_USE_CXX11_ABI {}".format(name, cxx11_abi) in output
- elif compiler == "apple-clang":
- # TODO: apple-clang requires checks too
- pass
-
if definitions:
for k, v in definitions.items():
assert "{}: {}".format(k, v) in output
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
}
|
conan-io__conan-8727@d83e979
|
conan-io/conan
|
Python
| 8,727
|
Alternative way of setting properties that depend on generator specific features.
|
Changelog: Feature: Add `set_property` and `get_property` to set properties and access them in generators. Can be set only for a specific generator or as a default value for all of them.
Changelog: Feature: Use `set_property` and `get_property` to support custom defined content in `pkg_config` generator.
Changelog: Feature: Add new property names: `cmake_target_name`, `cmake_file_name`, `pkg_config_name` and `cmake_build_modules` that can be used for multiple generators of the same type allowing also an easier migration of `names`, `filenames` and `build_modules` properties to this model.
Docs: https://github.com/conan-io/docs/pull/2082
Closes: https://github.com/conan-io/conan/issues/7661 , https://github.com/conan-io/conan/issues/8600
Set properties of the _cpp_info_ that are related to generators (things like names, filenames, build_modules...) with `set_property` and `get_property`. Tests are added for the `filenames`, `names` and `build_modules` properties.
Using the new property names `cmake_target_name`, `cmake_file_name`, `pkg_config_name` and `cmake_build_modules` will make the migration a little less risky. The new property names will be given preference over the old ones.
This:
``` python
self.cpp_info.components["mycomponent"].names["cmake_find_package"] = "mycomponent-name"
self.cpp_info.components["mycomponent"].names["cmake_find_package_multi"] = "mycomponent-name"
```
would be expressed as this:
``` python
self.cpp_info.components["mycomponent"].set_property("cmake_target_name", "mycomponent-name")
```
It's also possible to make it specific to a generator, but in most cases it won't be necessary:
``` python
self.cpp_info.components["mycomponent"].set_property("cmake_target_name", "mypkg-name", "cmake_find_package")
```
Also, not specifying the generator will set this value as a default for every generator. So if you set:
``` python
self.cpp_info.components["mycomponent"].set_property("my-property", "mycomponent-name")
```
A new custom generator can access that information with:
``` python
for pkg_name, cpp_info in self.deps_build_info.dependencies:
    names = cpp_info.get_property("my-property", generator=self.name)
```
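The lookup order is important here: a value set for a specific generator takes precedence, and a value set without a generator acts as the fallback for every generator. A minimal sketch of that behaviour (it mirrors the unit test added in the merged patch below and uses the internal `_CppInfo` class directly, so it is illustrative rather than a public API):
``` python
from conans.model.build_info import _CppInfo

cpp_info = _CppInfo()
# A property set without a generator becomes the default for every generator
cpp_info.set_property("my_property", "default_value")
assert cpp_info.get_property("my_property", generator="cmake_find_package") == "default_value"

# A generator-specific value overrides the default only for that generator
cpp_info.set_property("my_property", "pkg_config_value", generator="pkg_config")
assert cpp_info.get_property("my_property", generator="pkg_config") == "pkg_config_value"
assert cpp_info.get_property("my_property", generator="cmake_find_package") == "default_value"
```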
Related to: https://github.com/conan-io/conan/issues/8600, https://github.com/conan-io/conan/pull/8568 and https://github.com/conan-io/conan/issues/7661
#tags: slow
|
2021-03-29T15:37:00Z
|
[feature] Allow arbitrary sections in pkg_config .pc files
pkg_config files can contain extra information, next to include paths/library paths/libs/cflags/requires/...
This is used by some programs to pass around configuration paths.
Because the .pc files generated by Conan are missing this information, configuration fails.
For example, glib uses this. When using the files generated by Conan, the following error appears:
```
Run-time dependency gio-2.0 found: YES 2.65.1
Dependency gio-2.0 found: YES 2.65.1 (cached)
Using 'PKG_CONFIG_PATH' from environment with value: '/home/maarten/.conan/data/glib-networking/2.65.1/_/_/build/eac566419e1470979e589f72ef984a2778d4785a'
Run-time dependency gmodule-2.0 found: YES 2.65.1
WARNING: pkgconfig variable 'giomoduledir' not defined for dependency gio-2.0.
source_subfolder/meson.build:68:0: ERROR: Assert failed: GIO_MODULE_DIR is missing from gio-2.0.pc
```
The `gio-2.0.pc` file, installed in my system contains:
```
prefix=/usr
libdir=${prefix}/lib64
includedir=${prefix}/include
datadir=${prefix}/share
schemasdir=${datadir}/glib-2.0/schemas
bindir=${prefix}/bin
giomoduledir=${libdir}/gio/modules
glib_compile_schemas=${bindir}/glib-compile-schemas
glib_compile_resources=${bindir}/glib-compile-resources
gdbus_codegen=${bindir}/gdbus-codegen
Name: GIO
Description: glib I/O library
Version: 2.60.7
Requires: glib-2.0, gobject-2.0
Requires.private: gmodule-no-export-2.0, zlib, mount >= 2.23, libselinux
Libs: -L${libdir} -lgio-2.0
Libs.private: -ldl -pthread -lresolv
Cflags: -I${includedir}
```
The generated `gio-2.0.pc` contains:
```
prefix=/home/maarten/.conan/data/glib/2.65.1/_/_/package/f9b484c68439b8cf20fdfbb3f876d6cf74a4f970
libdir=${prefix}/lib
includedir=${prefix}/include
Name: glib-gio-2.0
Description: Conan package: glib-gio-2.0
Version: 2.65.1
Libs: -L${libdir} -lgio-2.0 -lresolv -ldl -Wl,-rpath="${libdir}"
Cflags: -I${includedir}
Requires: glib-2.0 gobject-2.0 gmodule-2.0 zlib mount libselinux
```
The generated .pc file is missing a lot of keys, some of which are custom.
So it would be useful to be able to add custom key/value pairs to the generated pkg_config files.
This feature is used by glib/glib-networking.
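A minimal sketch of how a recipe could inject such variables once this feature lands, using the `pkg_config_custom_content` property that the merged patch below ends up introducing (the property name and directory variables are taken from that patch's tests; the recipe name is illustrative):
```python
import textwrap

from conans import ConanFile


class GlibLikeConan(ConanFile):
    name = "pkg"
    version = "0.1"

    def package_info(self):
        # Extra key/value lines appended verbatim to the generated pkg.pc file
        custom_content = textwrap.dedent("""\
            datadir=${prefix}/share
            schemasdir=${datadir}/mylib/schemas
            bindir=${prefix}/bin
            """)
        self.cpp_info.set_property("pkg_config_custom_content", custom_content)
```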
|
This blocks https://github.com/conan-io/conan-center-index/pull/3663
This prevents generation of proper .pc files, like gst-plugins-base's gstreamer-video-1.0.pc.
Another example where it is needed: https://github.com/GNOME/gobject-introspection/blob/master/meson.build#L243
Thanks for the feedback. I am trying to understand the scope of this information, and why it would be exclusive to the .pc files. How does CMake deal with this library, and is it even possible to consume it at all without using those .pc files? How does CMake map those ``giomoduledir=${libdir}/gio/modules, glib_compile_schemas=${bindir}/glib-compile-schemas, glib_compile_resources=${bindir}/glib-compile-resources`` variables?
The recipe in ConanCenter could perhaps add one of the new ``build_modules`` specific to the ``pkgconfig`` generator, though we would need a way to inject it; I think at the moment this is only done by the CMake generators.
Regarding gobject-introspection, the information is probably never consumed from CMake: https://ubuntu.pkgs.org/20.10/ubuntu-main-amd64/libgirepository1.0-dev_1.66.1-1_amd64.deb.html
Wayland-protocols too: https://ubuntu.pkgs.org/20.10/ubuntu-main-arm64/wayland-protocols_1.20-1_all.deb.html
gst-plugins-base too: https://ubuntu.pkgs.org/20.04/ubuntu-main-amd64/libgstreamer-plugins-base1.0-dev_1.16.2-4_amd64.deb.html
glib too: https://ubuntu.pkgs.org/20.04/ubuntu-main-amd64/libglib2.0-dev_2.64.2-1~fakesync1_amd64.deb.html
On the other hand, the pkg-config variables can be consumed from cmake (via pkg-config) using https://cmake.org/cmake/help/latest/module/FindPkgConfig.html#command:pkg_get_variable
EDIT: I don't think `build_modules` is a proper solution to this problem, because we need to add simple key-value pairs to .pc files already generated by Conan, not add new .pc files to existing recipes. Something like `self.cpp_info.variables['pkg_config'].variable_name = "variable value"`. The values are always simple strings, because pkg-config has its own expansion system.
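The interface that was eventually merged keeps the key/value pairs as plain strings, as suggested here, but exposes them through `set_property` instead of a `variables` attribute. A minimal component-level sketch (property and component names taken from the merged patch's tests, otherwise illustrative), since each component gets its own .pc file:
```python
from conans import ConanFile


class PkgConfigConan(ConanFile):
    name = "pkg"
    version = "0.1"

    def package_info(self):
        # Custom content can also be attached per component, ending up in mycomponent.pc
        self.cpp_info.components["mycomponent"].set_property("pkg_config_custom_content",
                                                             "componentdir=${prefix}/mydir")
```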
|
[
{
"body": "pkg_config files can contain extra information, next to include paths/library paths/libs/cflags/requires/...\r\nThis is used by some programs to pass around configuration paths.\r\n\r\nBecause the generated .pc files by conan, are missing this information, configuration fails.\r\n\r\ne.g. glib uses this. When using the files generated by conan, the following error appears:\r\n```\r\nRun-time dependency gio-2.0 found: YES 2.65.1\r\nDependency gio-2.0 found: YES 2.65.1 (cached)\r\nUsing 'PKG_CONFIG_PATH' from environment with value: '/home/maarten/.conan/data/glib-networking/2.65.1/_/_/build/eac566419e1470979e589f72ef984a2778d4785a'\r\nRun-time dependency gmodule-2.0 found: YES 2.65.1\r\nWARNING: pkgconfig variable 'giomoduledir' not defined for dependency gio-2.0.\r\n\r\nsource_subfolder/meson.build:68:0: ERROR: Assert failed: GIO_MODULE_DIR is missing from gio-2.0.pc\r\n```\r\n\r\nThe `gio-2.0.pc` file, installed in my system contains:\r\n```\r\nprefix=/usr\r\nlibdir=${prefix}/lib64\r\nincludedir=${prefix}/include\r\n\r\ndatadir=${prefix}/share\r\nschemasdir=${datadir}/glib-2.0/schemas\r\nbindir=${prefix}/bin\r\ngiomoduledir=${libdir}/gio/modules\r\nglib_compile_schemas=${bindir}/glib-compile-schemas\r\nglib_compile_resources=${bindir}/glib-compile-resources\r\ngdbus_codegen=${bindir}/gdbus-codegen\r\n\r\nName: GIO\r\nDescription: glib I/O library\r\nVersion: 2.60.7\r\nRequires: glib-2.0, gobject-2.0\r\nRequires.private: gmodule-no-export-2.0, zlib, mount >= 2.23, libselinux\r\nLibs: -L${libdir} -lgio-2.0\r\nLibs.private: -ldl -pthread -lresolv\r\nCflags: -I${includedir}\r\n[maarten@localhost eac5664\r\n```\r\n\r\nThe generated `gio-2.0.pc` contains:\r\n```\r\nprefix=/home/maarten/.conan/data/glib/2.65.1/_/_/package/f9b484c68439b8cf20fdfbb3f876d6cf74a4f970\r\nlibdir=${prefix}/lib\r\nincludedir=${prefix}/include\r\n\r\nName: glib-gio-2.0\r\nDescription: Conan package: glib-gio-2.0\r\nVersion: 2.65.1\r\nLibs: -L${libdir} -lgio-2.0 -lresolv -ldl -Wl,-rpath=\"${libdir}\"\r\nCflags: -I${includedir}\r\nRequires: glib-2.0 gobject-2.0 gmodule-2.0 zlib mount libselinux\r\n```\r\n\r\nThe generated pc file is missing a lot of keys, of which some are custom.\r\nSo it would be useful to add custom key/values to pkg_config files.\r\n\r\nThis feature is used by glib/glib-networking.",
"number": 7661,
"title": "[feature] Allow arbitrary sections in pkg_config .pc files"
}
] |
1bdb0de03d21a625c434721f9eca415105e2ddad
|
{
"head_commit": "d83e9795a566b377a2cf15bbb5ff87592a3847d9",
"head_commit_message": "remove argument",
"patch_to_review": "diff --git a/conans/client/generators/pkg_config.py b/conans/client/generators/pkg_config.py\nindex 079d354d09e..1a6e4ef7d48 100644\n--- a/conans/client/generators/pkg_config.py\n+++ b/conans/client/generators/pkg_config.py\n@@ -81,6 +81,10 @@ def _pc_file_content(self, name, cpp_info, requires_gennames):\n includedir_vars = varnames\n lines.extend(dir_lines)\n \n+ custom_content = cpp_info.get_property(\"custom_content\", self.name)\n+ if custom_content:\n+ lines.append(custom_content)\n+\n lines.append(\"\")\n lines.append(\"Name: %s\" % name)\n description = cpp_info.description or \"Conan package: %s\" % name\ndiff --git a/conans/model/build_info.py b/conans/model/build_info.py\nindex 88fc0cd3e90..20a25bd218e 100644\n--- a/conans/model/build_info.py\n+++ b/conans/model/build_info.py\n@@ -104,6 +104,7 @@ class _CppInfo(object):\n \n def __init__(self):\n self._name = None\n+ self._generator_properties = {}\n self.names = {}\n self.system_libs = [] # Ordered list of system libraries\n self.includedirs = [] # Ordered list of include paths\n@@ -150,11 +151,11 @@ def _filter_paths(self, paths):\n @property\n def build_modules_paths(self):\n if self._build_modules_paths is None:\n- if isinstance(self.build_modules, list): # FIXME: This should be just a plain dict\n+ if isinstance(self.get_build_modules(), list): # FIXME: This should be just a plain dict\n conan_v2_error(\"Use 'self.cpp_info.build_modules[\\\"<generator>\\\"] = \"\n- \"{the_list}' instead\".format(the_list=self.build_modules))\n- self.build_modules = BuildModulesDict.from_list(self.build_modules)\n- tmp = dict_to_abs_paths(BuildModulesDict(self.build_modules), self.rootpath)\n+ \"{the_list}' instead\".format(the_list=self.get_build_modules()))\n+ self.build_modules = BuildModulesDict.from_list(self.get_build_modules())\n+ tmp = dict_to_abs_paths(BuildModulesDict(self.get_build_modules()), self.rootpath)\n self._build_modules_paths = tmp\n return self._build_modules_paths\n \n@@ -209,15 +210,49 @@ def name(self):\n def name(self, value):\n self._name = value\n \n+ # TODO: Deprecate for 2.0. Only cmake and pkg_config generators should access this.\n+ # Use get_property for 2.0\n def get_name(self, generator):\n- return self.names.get(generator, self._name)\n+ property_name = \"cmake_target_name\" if \"cmake\" in generator else \"pkg_config_name\"\n+ return self.get_property(property_name, generator) or self.names.get(generator, self._name)\n \n+ # TODO: Deprecate for 2.0. Only cmake generators should access this. Use get_property for 2.0\n def get_filename(self, generator):\n- result = self.filenames.get(generator)\n+ result = self.get_property(\"cmake_file_name\", generator) or self.filenames.get(generator)\n if result:\n return result\n return self.get_name(generator)\n \n+ # TODO: Deprecate for 2.0. 
Use get_property for 2.0\n+ def get_build_modules(self):\n+ default_values_dict = self._generator_properties.get(\"conan_default_generators_value\")\n+ default_build_modules_value = default_values_dict.get(\"cmake_build_modules\") if default_values_dict else None\n+ ret_dict = {\"cmake_find_package\": default_build_modules_value,\n+ \"cmake_find_package_multi\": default_build_modules_value,\n+ \"cmake\": default_build_modules_value,\n+ \"cmake_multi\": default_build_modules_value} if default_build_modules_value else {}\n+ for generator, values in self._generator_properties.items():\n+ if values.get(\"cmake_build_modules\") and generator != \"conan_default_generators_value\":\n+ ret_dict[generator] = values.get(\"cmake_build_modules\")\n+ return ret_dict if ret_dict else self.build_modules\n+\n+ def set_property(self, property_name, value, generator=None):\n+ generator = generator or \"conan_default_generators_value\"\n+ gen_dict = self._generator_properties.get(generator)\n+ if gen_dict:\n+ gen_dict.update({property_name: value})\n+ else:\n+ self._generator_properties.update({generator: {property_name: value}})\n+\n+ def get_property(self, property_name, generator=None):\n+ generator = generator or \"conan_default_generators_value\"\n+ gen_dict = self._generator_properties.get(generator)\n+ if gen_dict and gen_dict.get(property_name):\n+ return gen_dict.get(property_name)\n+ else:\n+ gen_dict = self._generator_properties.get(\"conan_default_generators_value\")\n+ return gen_dict.get(property_name) if gen_dict else None\n+\n # Compatibility for 'cppflags' (old style property to allow decoration)\n def get_cppflags(self):\n conan_v2_error(\"'cpp_info.cppflags' is deprecated, use 'cxxflags' instead\")\n@@ -354,7 +389,7 @@ def _raise_incorrect_components_definition(self, package_name, package_requires)\n self.cxxflags or\n self.sharedlinkflags or\n self.exelinkflags or\n- self.build_modules or\n+ self.get_build_modules() or\n self.requires):\n raise ConanException(\"self.cpp_info.components cannot be used with self.cpp_info \"\n \"global values at the same time\")\n@@ -427,7 +462,7 @@ def merge_lists(seq1, seq2):\n self.frameworkdirs = merge_lists(self.frameworkdirs, dep_cpp_info.framework_paths)\n self.libs = merge_lists(self.libs, dep_cpp_info.libs)\n self.frameworks = merge_lists(self.frameworks, dep_cpp_info.frameworks)\n- self.build_modules = merge_dicts(self.build_modules, dep_cpp_info.build_modules_paths)\n+ self.build_modules = merge_dicts(self.get_build_modules(), dep_cpp_info.build_modules_paths)\n self.requires = merge_lists(self.requires, dep_cpp_info.requires)\n self.rootpaths.append(dep_cpp_info.rootpath)\n \n@@ -437,13 +472,12 @@ def merge_lists(seq1, seq2):\n self.cflags = merge_lists(dep_cpp_info.cflags, self.cflags)\n self.sharedlinkflags = merge_lists(dep_cpp_info.sharedlinkflags, self.sharedlinkflags)\n self.exelinkflags = merge_lists(dep_cpp_info.exelinkflags, self.exelinkflags)\n-\n if not self.sysroot:\n self.sysroot = dep_cpp_info.sysroot\n \n @property\n def build_modules_paths(self):\n- return self.build_modules\n+ return self.get_build_modules()\n \n @property\n def include_paths(self):\ndiff --git a/conans/test/functional/generators/pkg_config_test.py b/conans/test/functional/generators/pkg_config_test.py\nindex f24fe8167d5..d045403ae7b 100644\n--- a/conans/test/functional/generators/pkg_config_test.py\n+++ b/conans/test/functional/generators/pkg_config_test.py\n@@ -202,3 +202,59 @@ def test_empty_include(self):\n pc = client.load(\"pkg.pc\")\n 
self.assertNotIn(\"libdir=${prefix}/lib\", pc)\n self.assertNotIn(\"includedir=${prefix}/include\", pc)\n+\n+ def test_custom_content(self):\n+ # https://github.com/conan-io/conan/issues/7661\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ from conans.tools import save\n+ import os\n+ import textwrap\n+\n+ class PkgConfigConan(ConanFile):\n+ def package(self):\n+ save(os.path.join(self.package_folder, \"include\" ,\"file\"), \"\")\n+ save(os.path.join(self.package_folder, \"lib\" ,\"file\"), \"\")\n+\n+ def package_info(self):\n+ custom_content = textwrap.dedent(\\\"\"\"\n+ datadir=${prefix}/share\n+ schemasdir=${datadir}/mylib/schemas\n+ bindir=${prefix}/bin\n+ \\\"\"\")\n+ self.cpp_info.set_property(\"custom_content\", custom_content, \"pkg_config\")\n+ self.cpp_info.includedirs = [\"include\"]\n+ self.cpp_info.libdirs = [\"lib\"]\n+ \"\"\")\n+ client = TestClient()\n+ client.save({\"conanfile.py\": conanfile})\n+ client.run(\"create . pkg/0.1@\")\n+ client.run(\"install pkg/0.1@ -g pkg_config\")\n+\n+ pc_content = client.load(\"pkg.pc\")\n+ self.assertIn(\"libdir=${prefix}/lib\", pc_content)\n+ self.assertIn(\"datadir=${prefix}/share\", pc_content)\n+ self.assertIn(\"schemasdir=${datadir}/mylib/schemas\", pc_content)\n+ self.assertIn(\"bindir=${prefix}/bin\", pc_content)\n+ self.assertIn(\"Name: pkg\", pc_content)\n+\n+ def test_custom_content_components(self):\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ from conans.tools import save\n+ import os\n+ import textwrap\n+\n+ class PkgConfigConan(ConanFile):\n+ def package_info(self):\n+ self.cpp_info.components[\"mycomponent\"].set_property(\"custom_content\",\n+ \"componentdir=${prefix}/mydir\",\n+ \"pkg_config\")\n+ \"\"\")\n+ client = TestClient()\n+ client.save({\"conanfile.py\": conanfile})\n+ client.run(\"create . 
pkg/0.1@\")\n+ client.run(\"install pkg/0.1@ -g pkg_config\")\n+\n+ pc_content = client.load(\"mycomponent.pc\")\n+ self.assertIn(\"componentdir=${prefix}/mydir\", pc_content)\ndiff --git a/conans/test/integration/generators/cpp_info_set_generator_properties_test.py b/conans/test/integration/generators/cpp_info_set_generator_properties_test.py\nnew file mode 100644\nindex 00000000000..fb3b3c27fcd\n--- /dev/null\n+++ b/conans/test/integration/generators/cpp_info_set_generator_properties_test.py\n@@ -0,0 +1,266 @@\n+import os\n+import textwrap\n+\n+import pytest\n+\n+from conans.test.assets.genconanfile import GenConanfile\n+from conans.test.utils.tools import TestClient\n+\n+\[email protected](scope=\"module\")\n+def setup_client():\n+ client = TestClient()\n+ custom_generator = textwrap.dedent(\"\"\"\n+ from conans.model import Generator\n+ from conans import ConanFile\n+ from conans.model.conan_generator import GeneratorComponentsMixin\n+ import textwrap\n+ import os\n+\n+ class custom_generator(GeneratorComponentsMixin, Generator):\n+ name = \"custom_generator\"\n+ @property\n+ def filename(self):\n+ return \"my-generator.txt\"\n+\n+ def _get_components_custom_names(self, pkg_name, cpp_info):\n+ ret=[]\n+ for comp_name, comp in self.sorted_components(cpp_info).items():\n+ comp_genname = comp.get_property(\"custom_name\", generator=self.name)\n+ ret.append(\"{}:{}\".format(comp.name, comp_genname))\n+ return ret\n+\n+ @property\n+ def content(self):\n+ info = []\n+ for pkg_name, cpp_info in self.deps_build_info.dependencies:\n+ info.append(\"{}:{}\".format(pkg_name, cpp_info.get_property(\"custom_name\", self.name)))\n+ info.extend(self._get_components_custom_names(pkg_name, cpp_info))\n+ return os.linesep.join(info)\n+ \"\"\")\n+ client.save({\"custom_generator.py\": custom_generator})\n+ client.run(\"config install custom_generator.py -tf generators\")\n+\n+ build_module = textwrap.dedent(\"\"\"\n+ message(\"I am a build module\")\n+ \"\"\")\n+\n+ another_build_module = textwrap.dedent(\"\"\"\n+ message(\"I am another build module\")\n+ \"\"\")\n+\n+ client.save({\"consumer.py\": GenConanfile(\"consumer\", \"1.0\").with_requires(\"mypkg/1.0\").\n+ with_generator(\"custom_generator\").with_generator(\"cmake_find_package\").\n+ with_generator(\"cmake_find_package_multi\").with_generator(\"pkg_config\").\n+ with_setting(\"build_type\"),\n+ \"mypkg_bm.cmake\": build_module, \"mypkg_anootherbm.cmake\": another_build_module})\n+ return client\n+\n+\n+def get_files_contents(folder, filenames):\n+ ret = []\n+ for filename in filenames:\n+ with open(os.path.join(folder, filename)) as properties_package_file:\n+ ret.append(properties_package_file.read())\n+ return ret\n+\n+\n+def test_same_results_components(setup_client):\n+ client = setup_client\n+ mypkg = textwrap.dedent(\"\"\"\n+ import os\n+ from conans import ConanFile, CMake, tools\n+ class MyPkg(ConanFile):\n+ settings = \"build_type\"\n+ name = \"mypkg\"\n+ version = \"1.0\"\n+ exports_sources = [\"mypkg_bm.cmake\"]\n+ def package(self):\n+ self.copy(\"mypkg_bm.cmake\", dst=\"lib\")\n+ def package_info(self):\n+ self.cpp_info.set_property(\"cmake_file_name\", \"MyFileName\")\n+ self.cpp_info.components[\"mycomponent\"].libs = [\"mycomponent-lib\"]\n+ self.cpp_info.components[\"mycomponent\"].set_property(\"cmake_target_name\", \"mycomponent-name\")\n+ self.cpp_info.components[\"mycomponent\"].set_property(\"cmake_build_modules\", [os.path.join(\"lib\", \"mypkg_bm.cmake\")])\n+ 
self.cpp_info.components[\"mycomponent\"].set_property(\"custom_name\", \"mycomponent-name\", \"custom_generator\")\n+ \"\"\")\n+\n+ client.save({\"mypkg.py\": mypkg})\n+ client.run(\"export mypkg.py\")\n+ client.run(\"install consumer.py --build missing -s build_type=Release\")\n+\n+ with open(os.path.join(client.current_folder, \"my-generator.txt\")) as custom_gen_file:\n+ assert \"mycomponent:mycomponent-name\" in custom_gen_file.read()\n+\n+ files_to_compare = [\"FindMyFileName.cmake\", \"MyFileNameConfig.cmake\", \"MyFileNameTargets.cmake\",\n+ \"MyFileNameTarget-release.cmake\", \"MyFileNameConfigVersion.cmake\", \"mypkg.pc\",\n+ \"mycomponent.pc\"]\n+ new_approach_contents = get_files_contents(client.current_folder, files_to_compare)\n+\n+ mypkg = textwrap.dedent(\"\"\"\n+ import os\n+ from conans import ConanFile, CMake, tools\n+ class MyPkg(ConanFile):\n+ settings = \"build_type\"\n+ name = \"mypkg\"\n+ version = \"1.0\"\n+ exports_sources = [\"mypkg_bm.cmake\"]\n+ def package(self):\n+ self.copy(\"mypkg_bm.cmake\", dst=\"lib\")\n+ def package_info(self):\n+ self.cpp_info.components[\"mycomponent\"].libs = [\"mycomponent-lib\"]\n+ self.cpp_info.filenames[\"cmake_find_package\"] = \"MyFileName\"\n+ self.cpp_info.filenames[\"cmake_find_package_multi\"] = \"MyFileName\"\n+ self.cpp_info.components[\"mycomponent\"].names[\"cmake_find_package\"] = \"mycomponent-name\"\n+ self.cpp_info.components[\"mycomponent\"].names[\"cmake_find_package_multi\"] = \"mycomponent-name\"\n+ self.cpp_info.components[\"mycomponent\"].build_modules.append(os.path.join(\"lib\", \"mypkg_bm.cmake\"))\n+ \"\"\")\n+ client.save({\"mypkg.py\": mypkg})\n+ client.run(\"export mypkg.py\")\n+ client.run(\"install consumer.py -s build_type=Release\")\n+\n+ old_approach_contents = get_files_contents(client.current_folder, files_to_compare)\n+\n+ assert new_approach_contents == old_approach_contents\n+\n+\n+def test_same_results_without_components(setup_client):\n+ client = setup_client\n+ mypkg = textwrap.dedent(\"\"\"\n+ import os\n+ from conans import ConanFile, CMake, tools\n+ class MyPkg(ConanFile):\n+ settings = \"build_type\"\n+ name = \"mypkg\"\n+ version = \"1.0\"\n+ exports_sources = [\"mypkg_bm.cmake\"]\n+ def package(self):\n+ self.copy(\"mypkg_bm.cmake\", dst=\"lib\")\n+ def package_info(self):\n+ self.cpp_info.set_property(\"cmake_file_name\", \"MyFileName\")\n+ self.cpp_info.set_property(\"cmake_target_name\", \"mypkg-name\")\n+ self.cpp_info.set_property(\"cmake_build_modules\",[os.path.join(\"lib\",\n+ \"mypkg_bm.cmake\")])\n+ self.cpp_info.set_property(\"custom_name\", \"mypkg-name\", \"custom_generator\")\n+ \"\"\")\n+\n+ client.save({\"mypkg.py\": mypkg})\n+ client.run(\"export mypkg.py\")\n+\n+ client.run(\"install consumer.py --build missing -s build_type=Release\")\n+\n+ with open(os.path.join(client.current_folder, \"my-generator.txt\")) as custom_gen_file:\n+ assert \"mypkg:mypkg-name\" in custom_gen_file.read()\n+\n+ files_to_compare = [\"FindMyFileName.cmake\", \"MyFileNameConfig.cmake\", \"MyFileNameTargets.cmake\",\n+ \"MyFileNameTarget-release.cmake\", \"MyFileNameConfigVersion.cmake\", \"mypkg.pc\"]\n+ new_approach_contents = get_files_contents(client.current_folder, files_to_compare)\n+\n+ mypkg = textwrap.dedent(\"\"\"\n+ import os\n+ from conans import ConanFile, CMake, tools\n+ class MyPkg(ConanFile):\n+ settings = \"build_type\"\n+ name = \"mypkg\"\n+ version = \"1.0\"\n+ exports_sources = [\"mypkg_bm.cmake\"]\n+ def package(self):\n+ self.copy(\"mypkg_bm.cmake\", dst=\"lib\")\n+ def 
package_info(self):\n+ self.cpp_info.filenames[\"cmake_find_package\"] = \"MyFileName\"\n+ self.cpp_info.filenames[\"cmake_find_package_multi\"] = \"MyFileName\"\n+ self.cpp_info.names[\"cmake_find_package\"] = \"mypkg-name\"\n+ self.cpp_info.names[\"cmake_find_package_multi\"] = \"mypkg-name\"\n+ self.cpp_info.names[\"custom_generator\"] = \"mypkg-name\"\n+ self.cpp_info.build_modules.append(os.path.join(\"lib\", \"mypkg_bm.cmake\"))\n+ \"\"\")\n+ client.save({\"mypkg.py\": mypkg})\n+ client.run(\"create mypkg.py\")\n+ client.run(\"install consumer.py -s build_type=Release\")\n+\n+ old_approach_contents = get_files_contents(client.current_folder, files_to_compare)\n+\n+ assert new_approach_contents == old_approach_contents\n+\n+\n+def test_same_results_specific_generators(setup_client):\n+ client = setup_client\n+ mypkg = textwrap.dedent(\"\"\"\n+ import os\n+ from conans import ConanFile, CMake, tools\n+ class MyPkg(ConanFile):\n+ settings = \"build_type\"\n+ name = \"mypkg\"\n+ version = \"1.0\"\n+ exports_sources = [\"mypkg_bm.cmake\", \"mypkg_anootherbm.cmake\"]\n+ def package(self):\n+ self.copy(\"mypkg_bm.cmake\", dst=\"lib\")\n+ self.copy(\"mypkg_anootherbm.cmake\", dst=\"lib\")\n+ def package_info(self):\n+ self.cpp_info.set_property(\"cmake_file_name\", \"MyFileName\", \"cmake_find_package\")\n+ self.cpp_info.set_property(\"cmake_file_name\", \"MyFileNameMulti\", \"cmake_find_package_multi\")\n+ self.cpp_info.set_property(\"cmake_target_name\", \"mypkg-name\", \"cmake_find_package\")\n+ self.cpp_info.set_property(\"cmake_target_name\", \"mypkg-name-multi\", \"cmake_find_package_multi\")\n+ self.cpp_info.set_property(\"cmake_build_modules\",[os.path.join(\"lib\",\n+ \"mypkg_bm.cmake\")], \"cmake_find_package\")\n+ self.cpp_info.set_property(\"cmake_build_modules\",[os.path.join(\"lib\",\n+ \"mypkg_anootherbm.cmake\")], \"cmake_find_package_multi\")\n+ \"\"\")\n+\n+ client.save({\"mypkg.py\": mypkg})\n+ client.run(\"export mypkg.py\")\n+\n+ client.run(\"install consumer.py --build missing -s build_type=Release\")\n+\n+ files_to_compare = [\"FindMyFileName.cmake\", \"MyFileNameMultiConfig.cmake\", \"MyFileNameMultiTargets.cmake\",\n+ \"MyFileNameMultiTarget-release.cmake\", \"MyFileNameMultiConfigVersion.cmake\"]\n+ new_approach_contents = get_files_contents(client.current_folder, files_to_compare)\n+\n+ mypkg = textwrap.dedent(\"\"\"\n+ import os\n+ from conans import ConanFile, CMake, tools\n+ class MyPkg(ConanFile):\n+ settings = \"build_type\"\n+ name = \"mypkg\"\n+ version = \"1.0\"\n+ exports_sources = [\"mypkg_bm.cmake\", \"mypkg_anootherbm.cmake\"]\n+ def package(self):\n+ self.copy(\"mypkg_bm.cmake\", dst=\"lib\")\n+ self.copy(\"mypkg_anootherbm.cmake\", dst=\"lib\")\n+ def package_info(self):\n+ self.cpp_info.filenames[\"cmake_find_package\"] = \"MyFileName\"\n+ self.cpp_info.filenames[\"cmake_find_package_multi\"] = \"MyFileNameMulti\"\n+ self.cpp_info.names[\"cmake_find_package\"] = \"mypkg-name\"\n+ self.cpp_info.names[\"cmake_find_package_multi\"] = \"mypkg-name-multi\"\n+ self.cpp_info.build_modules[\"cmake_find_package\"].append(os.path.join(\"lib\", \"mypkg_bm.cmake\"))\n+ self.cpp_info.build_modules[\"cmake_find_package_multi\"].append(os.path.join(\"lib\", \"mypkg_anootherbm.cmake\"))\n+ \"\"\")\n+ client.save({\"mypkg.py\": mypkg})\n+ client.run(\"create mypkg.py\")\n+ client.run(\"install consumer.py -s build_type=Release\")\n+\n+ old_approach_contents = get_files_contents(client.current_folder, files_to_compare)\n+\n+ assert new_approach_contents == 
old_approach_contents\n+\n+\n+def test_pkg_config_names(setup_client):\n+ client = setup_client\n+ mypkg = textwrap.dedent(\"\"\"\n+ import os\n+ from conans import ConanFile, CMake, tools\n+ class MyPkg(ConanFile):\n+ settings = \"build_type\"\n+ name = \"mypkg\"\n+ version = \"1.0\"\n+ def package_info(self):\n+ self.cpp_info.components[\"mycomponent\"].libs = [\"mycomponent-lib\"]\n+ self.cpp_info.components[\"mycomponent\"].set_property(\"pkg_config_name\", \"mypkg-config-name\")\n+ \"\"\")\n+\n+ client.save({\"mypkg.py\": mypkg})\n+ client.run(\"export mypkg.py\")\n+ client.run(\"install consumer.py --build missing\")\n+\n+ with open(os.path.join(client.current_folder, \"mypkg-config-name.pc\")) as gen_file:\n+ assert \"mypkg-config-name\" in gen_file.read()\ndiff --git a/conans/test/unittests/model/build_info/generic_properties_test.py b/conans/test/unittests/model/build_info/generic_properties_test.py\nnew file mode 100644\nindex 00000000000..ad30708564f\n--- /dev/null\n+++ b/conans/test/unittests/model/build_info/generic_properties_test.py\n@@ -0,0 +1,16 @@\n+from conans.model.build_info import _CppInfo\n+\n+\n+def test_set_get_properties():\n+ cpp_info = _CppInfo()\n+ cpp_info.set_property(\"my_property\", \"default_value\")\n+ assert cpp_info.get_property(\"my_property\") == \"default_value\"\n+ # can you do a get_property for just a family without generator?\n+ assert cpp_info.get_property(\"my_property\", generator=\"cmake_multi\") == \"default_value\"\n+ assert cpp_info.get_property(\"my_property\", generator=\"pkg_config\") == \"default_value\"\n+\n+ cpp_info.set_property(\"my_property\", \"pkg_config_value\", generator=\"pkg_config\")\n+ assert cpp_info.get_property(\"my_property\", generator=\"pkg_config\") == \"pkg_config_value\"\n+ cpp_info.set_property(\"other_property\", \"other_pkg_config_value\", generator=\"pkg_config\")\n+ assert not cpp_info.get_property(\"other_property\")\n+ assert cpp_info.get_property(\"other_property\", generator=\"pkg_config\") == \"other_pkg_config_value\"\n"
}
|
[
{
"diff_hunk": "@@ -209,15 +210,49 @@ def name(self):\n def name(self, value):\n self._name = value\n \n+ # TODO: Deprecate for 2.0. Only cmake and pkg_config generators should access this.\n+ # Use get_property for 2.0\n def get_name(self, generator):\n- return self.names.get(generator, self._name)\n+ property_name = \"cmake_target_name\" if \"cmake\" in generator else \"pkg_config_name\"\n+ return self.get_property(property_name, generator) or self.names.get(generator, self._name)\n \n+ # TODO: Deprecate for 2.0. Only cmake generators should access this. Use get_property for 2.0\n def get_filename(self, generator):\n- result = self.filenames.get(generator)\n+ result = self.get_property(\"cmake_file_name\", generator) or self.filenames.get(generator)\n if result:\n return result\n return self.get_name(generator)\n \n+ # TODO: Deprecate for 2.0. Use get_property for 2.0\n+ def get_build_modules(self):\n+ default_values_dict = self._generator_properties.get(\"conan_default_generators_value\")\n+ default_build_modules_value = default_values_dict.get(\"cmake_build_modules\") if default_values_dict else None\n+ ret_dict = {\"cmake_find_package\": default_build_modules_value,\n+ \"cmake_find_package_multi\": default_build_modules_value,\n+ \"cmake\": default_build_modules_value,\n+ \"cmake_multi\": default_build_modules_value} if default_build_modules_value else {}\n+ for generator, values in self._generator_properties.items():\n+ if values.get(\"cmake_build_modules\") and generator != \"conan_default_generators_value\":\n+ ret_dict[generator] = values.get(\"cmake_build_modules\")\n+ return ret_dict if ret_dict else self.build_modules\n+\n+ def set_property(self, property_name, value, generator=None):\n+ generator = generator or \"conan_default_generators_value\"\n+ gen_dict = self._generator_properties.get(generator)\n+ if gen_dict:\n+ gen_dict.update({property_name: value})\n+ else:\n+ self._generator_properties.update({generator: {property_name: value}})\n+\n+ def get_property(self, property_name, generator=None):\n+ generator = generator or \"conan_default_generators_value\"\n+ gen_dict = self._generator_properties.get(generator)\n+ if gen_dict and gen_dict.get(property_name):\n+ return gen_dict.get(property_name)\n+ else:\n+ gen_dict = self._generator_properties.get(\"conan_default_generators_value\")\n+ return gen_dict.get(property_name) if gen_dict else None\n+",
"line": null,
"original_line": 255,
"original_start_line": 248,
"path": "conans/model/build_info.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n if generator:\r\n try:\r\n return self._generator_properties[generator][property_name]\r\n except KeyError:\r\n pass\r\n try:\r\n return self._generator_properties[None][property_name]\r\n except KeyError:\r\n pass\r\n```\r\n\r\n- avoid getting twice ``gen_dict.get(property_name)``\r\n- There is a redundant double processing when ``generator=None``, same thing is being evaluated twice."
},
{
"diff_hunk": "@@ -209,15 +210,49 @@ def name(self):\n def name(self, value):\n self._name = value\n \n+ # TODO: Deprecate for 2.0. Only cmake and pkg_config generators should access this.\n+ # Use get_property for 2.0\n def get_name(self, generator):\n- return self.names.get(generator, self._name)\n+ property_name = \"cmake_target_name\" if \"cmake\" in generator else \"pkg_config_name\"",
"line": null,
"original_line": 216,
"original_start_line": null,
"path": "conans/model/build_info.py",
"start_line": null,
"text": "@user1:\nIt is a bit confusing this \"pkg_config_name\" for every other thing not \"cmake\""
},
{
"diff_hunk": "@@ -209,15 +210,49 @@ def name(self):\n def name(self, value):\n self._name = value\n \n+ # TODO: Deprecate for 2.0. Only cmake and pkg_config generators should access this.\n+ # Use get_property for 2.0\n def get_name(self, generator):\n- return self.names.get(generator, self._name)\n+ property_name = \"cmake_target_name\" if \"cmake\" in generator else \"pkg_config_name\"\n+ return self.get_property(property_name, generator) or self.names.get(generator, self._name)\n \n+ # TODO: Deprecate for 2.0. Only cmake generators should access this. Use get_property for 2.0\n def get_filename(self, generator):\n- result = self.filenames.get(generator)\n+ result = self.get_property(\"cmake_file_name\", generator) or self.filenames.get(generator)\n if result:\n return result\n return self.get_name(generator)\n \n+ # TODO: Deprecate for 2.0. Use get_property for 2.0\n+ def get_build_modules(self):\n+ default_values_dict = self._generator_properties.get(\"conan_default_generators_value\")\n+ default_build_modules_value = default_values_dict.get(\"cmake_build_modules\") if default_values_dict else None\n+ ret_dict = {\"cmake_find_package\": default_build_modules_value,\n+ \"cmake_find_package_multi\": default_build_modules_value,\n+ \"cmake\": default_build_modules_value,\n+ \"cmake_multi\": default_build_modules_value} if default_build_modules_value else {}\n+ for generator, values in self._generator_properties.items():\n+ if values.get(\"cmake_build_modules\") and generator != \"conan_default_generators_value\":\n+ ret_dict[generator] = values.get(\"cmake_build_modules\")\n+ return ret_dict if ret_dict else self.build_modules\n+\n+ def set_property(self, property_name, value, generator=None):\n+ generator = generator or \"conan_default_generators_value\"\n+ gen_dict = self._generator_properties.get(generator)\n+ if gen_dict:\n+ gen_dict.update({property_name: value})\n+ else:\n+ self._generator_properties.update({generator: {property_name: value}})",
"line": null,
"original_line": 245,
"original_start_line": 240,
"path": "conans/model/build_info.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n self._generator_properties.setdefault(generator, {})[property_name] = value\r\n```\r\n\r\n- Use ``None`` as the value that means \"no generator\". No need to use a custom string\r\n- Avoid ``mydict.update({k: v})`` why not ``mydict[k] = v``?\r\n- ``setdefault`` is more idiomatic"
}
] |
a4b7e41aaa6c596dbb0d2005ba2468fd28d0ae35
|
diff --git a/conans/client/generators/pkg_config.py b/conans/client/generators/pkg_config.py
index 079d354d09e..fbd1726a32d 100644
--- a/conans/client/generators/pkg_config.py
+++ b/conans/client/generators/pkg_config.py
@@ -81,6 +81,10 @@ def _pc_file_content(self, name, cpp_info, requires_gennames):
includedir_vars = varnames
lines.extend(dir_lines)
+ pkg_config_custom_content = cpp_info.get_property("pkg_config_custom_content", self.name)
+ if pkg_config_custom_content:
+ lines.append(pkg_config_custom_content)
+
lines.append("")
lines.append("Name: %s" % name)
description = cpp_info.description or "Conan package: %s" % name
diff --git a/conans/model/build_info.py b/conans/model/build_info.py
index 88fc0cd3e90..8573223d09e 100644
--- a/conans/model/build_info.py
+++ b/conans/model/build_info.py
@@ -104,6 +104,7 @@ class _CppInfo(object):
def __init__(self):
self._name = None
+ self._generator_properties = {}
self.names = {}
self.system_libs = [] # Ordered list of system libraries
self.includedirs = [] # Ordered list of include paths
@@ -127,6 +128,7 @@ def __init__(self):
self.sysroot = ""
self.requires = []
self._build_modules_paths = None
+ self._build_modules = None
self._include_paths = None
self._lib_paths = None
self._bin_paths = None
@@ -154,7 +156,9 @@ def build_modules_paths(self):
conan_v2_error("Use 'self.cpp_info.build_modules[\"<generator>\"] = "
"{the_list}' instead".format(the_list=self.build_modules))
self.build_modules = BuildModulesDict.from_list(self.build_modules)
- tmp = dict_to_abs_paths(BuildModulesDict(self.build_modules), self.rootpath)
+ # Invalidate necessary, get_build_modules used raise_incorrect_components_definition
+ self._build_modules = None
+ tmp = dict_to_abs_paths(BuildModulesDict(self.get_build_modules()), self.rootpath)
self._build_modules_paths = tmp
return self._build_modules_paths
@@ -209,15 +213,58 @@ def name(self):
def name(self, value):
self._name = value
+ # TODO: Deprecate for 2.0. Only cmake and pkg_config generators should access this.
+ # Use get_property for 2.0
def get_name(self, generator):
- return self.names.get(generator, self._name)
-
+ property_name = None
+ if "cmake" in generator:
+ property_name = "cmake_target_name"
+ elif "pkg_config" in generator:
+ property_name = "pkg_config_name"
+ return self.get_property(property_name, generator) or self.names.get(generator, self._name)
+
+ # TODO: Deprecate for 2.0. Only cmake generators should access this. Use get_property for 2.0
def get_filename(self, generator):
- result = self.filenames.get(generator)
+ result = self.get_property("cmake_file_name", generator) or self.filenames.get(generator)
if result:
return result
return self.get_name(generator)
+ # TODO: Deprecate for 2.0. Use get_property for 2.0
+ def get_build_modules(self):
+ if self._build_modules is None: # Not cached yet
+ try:
+ default_build_modules_value = self._generator_properties[None]["cmake_build_modules"]
+ except KeyError:
+ ret_dict = {}
+ else:
+ ret_dict = {"cmake_find_package": default_build_modules_value,
+ "cmake_find_package_multi": default_build_modules_value,
+ "cmake": default_build_modules_value,
+ "cmake_multi": default_build_modules_value}
+
+ for generator, values in self._generator_properties.items():
+ if generator:
+ v = values.get("cmake_build_modules")
+ if v:
+ ret_dict[generator] = v
+ self._build_modules = ret_dict if ret_dict else self.build_modules
+ return self._build_modules
+
+ def set_property(self, property_name, value, generator=None):
+ self._generator_properties.setdefault(generator, {})[property_name] = value
+
+ def get_property(self, property_name, generator=None):
+ if generator:
+ try:
+ return self._generator_properties[generator][property_name]
+ except KeyError:
+ pass
+ try:
+ return self._generator_properties[None][property_name]
+ except KeyError:
+ pass
+
# Compatibility for 'cppflags' (old style property to allow decoration)
def get_cppflags(self):
conan_v2_error("'cpp_info.cppflags' is deprecated, use 'cxxflags' instead")
@@ -354,7 +401,7 @@ def _raise_incorrect_components_definition(self, package_name, package_requires)
self.cxxflags or
self.sharedlinkflags or
self.exelinkflags or
- self.build_modules or
+ self.get_build_modules() or
self.requires):
raise ConanException("self.cpp_info.components cannot be used with self.cpp_info "
"global values at the same time")
@@ -437,7 +484,6 @@ def merge_lists(seq1, seq2):
self.cflags = merge_lists(dep_cpp_info.cflags, self.cflags)
self.sharedlinkflags = merge_lists(dep_cpp_info.sharedlinkflags, self.sharedlinkflags)
self.exelinkflags = merge_lists(dep_cpp_info.exelinkflags, self.exelinkflags)
-
if not self.sysroot:
self.sysroot = dep_cpp_info.sysroot
diff --git a/conans/test/functional/generators/pkg_config_test.py b/conans/test/functional/generators/pkg_config_test.py
index f24fe8167d5..c863f52cb2a 100644
--- a/conans/test/functional/generators/pkg_config_test.py
+++ b/conans/test/functional/generators/pkg_config_test.py
@@ -202,3 +202,58 @@ def test_empty_include(self):
pc = client.load("pkg.pc")
self.assertNotIn("libdir=${prefix}/lib", pc)
self.assertNotIn("includedir=${prefix}/include", pc)
+
+ def test_custom_content(self):
+ # https://github.com/conan-io/conan/issues/7661
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+ from conans.tools import save
+ import os
+ import textwrap
+
+ class PkgConfigConan(ConanFile):
+ def package(self):
+ save(os.path.join(self.package_folder, "include" ,"file"), "")
+ save(os.path.join(self.package_folder, "lib" ,"file"), "")
+
+ def package_info(self):
+ custom_content = textwrap.dedent(\"""
+ datadir=${prefix}/share
+ schemasdir=${datadir}/mylib/schemas
+ bindir=${prefix}/bin
+ \""")
+ self.cpp_info.set_property("pkg_config_custom_content", custom_content)
+ self.cpp_info.includedirs = ["include"]
+ self.cpp_info.libdirs = ["lib"]
+ """)
+ client = TestClient()
+ client.save({"conanfile.py": conanfile})
+ client.run("create . pkg/0.1@")
+ client.run("install pkg/0.1@ -g pkg_config")
+
+ pc_content = client.load("pkg.pc")
+ self.assertIn("libdir=${prefix}/lib", pc_content)
+ self.assertIn("datadir=${prefix}/share", pc_content)
+ self.assertIn("schemasdir=${datadir}/mylib/schemas", pc_content)
+ self.assertIn("bindir=${prefix}/bin", pc_content)
+ self.assertIn("Name: pkg", pc_content)
+
+ def test_custom_content_components(self):
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+ from conans.tools import save
+ import os
+ import textwrap
+
+ class PkgConfigConan(ConanFile):
+ def package_info(self):
+ self.cpp_info.components["mycomponent"].set_property("pkg_config_custom_content",
+ "componentdir=${prefix}/mydir")
+ """)
+ client = TestClient()
+ client.save({"conanfile.py": conanfile})
+ client.run("create . pkg/0.1@")
+ client.run("install pkg/0.1@ -g pkg_config")
+
+ pc_content = client.load("mycomponent.pc")
+ self.assertIn("componentdir=${prefix}/mydir", pc_content)
diff --git a/conans/test/integration/generators/cpp_info_set_generator_properties_test.py b/conans/test/integration/generators/cpp_info_set_generator_properties_test.py
new file mode 100644
index 00000000000..55577696fd8
--- /dev/null
+++ b/conans/test/integration/generators/cpp_info_set_generator_properties_test.py
@@ -0,0 +1,262 @@
+import os
+import textwrap
+
+import pytest
+
+from conans.test.assets.genconanfile import GenConanfile
+from conans.test.utils.tools import TestClient
+
+
[email protected](scope="module")
+def setup_client():
+ client = TestClient()
+ custom_generator = textwrap.dedent("""
+ from conans.model import Generator
+ from conans import ConanFile
+ from conans.model.conan_generator import GeneratorComponentsMixin
+ import os
+
+
+ class custom_generator(GeneratorComponentsMixin, Generator):
+ name = "custom_generator"
+ @property
+ def filename(self):
+ return "my-generator.txt"
+
+ def _get_components_custom_names(self, cpp_info):
+ ret = []
+ for comp_name, comp in self.sorted_components(cpp_info).items():
+ comp_genname = comp.get_property("custom_name", generator=self.name)
+ ret.append("{}:{}".format(comp.name, comp_genname))
+ return ret
+
+ @property
+ def content(self):
+ info = []
+ for pkg_name, cpp_info in self.deps_build_info.dependencies:
+ info.append("{}:{}".format(pkg_name, cpp_info.get_property("custom_name", self.name)))
+ info.extend(self._get_components_custom_names(cpp_info))
+ return os.linesep.join(info)
+ """)
+ client.save({"custom_generator.py": custom_generator})
+ client.run("config install custom_generator.py -tf generators")
+
+ build_module = textwrap.dedent("""
+ message("I am a build module")
+ """)
+
+ another_build_module = textwrap.dedent("""
+ message("I am another build module")
+ """)
+
+ client.save({"consumer.py": GenConanfile("consumer", "1.0").with_requires("mypkg/1.0").
+ with_generator("custom_generator").with_generator("cmake_find_package").
+ with_generator("cmake_find_package_multi").with_generator("pkg_config").
+ with_setting("build_type"),
+ "mypkg_bm.cmake": build_module, "mypkg_anootherbm.cmake": another_build_module})
+ return client
+
+
+def get_files_contents(client, filenames):
+ return [client.load(f) for f in filenames]
+
+
+def test_same_results_components(setup_client):
+ client = setup_client
+ mypkg = textwrap.dedent("""
+ import os
+ from conans import ConanFile, CMake, tools
+ class MyPkg(ConanFile):
+ settings = "build_type"
+ name = "mypkg"
+ version = "1.0"
+ exports_sources = ["mypkg_bm.cmake"]
+ def package(self):
+ self.copy("mypkg_bm.cmake", dst="lib")
+ def package_info(self):
+ self.cpp_info.set_property("cmake_file_name", "MyFileName")
+ self.cpp_info.components["mycomponent"].libs = ["mycomponent-lib"]
+ self.cpp_info.components["mycomponent"].set_property("cmake_target_name", "mycomponent-name")
+ self.cpp_info.components["mycomponent"].set_property("cmake_build_modules", [os.path.join("lib", "mypkg_bm.cmake")])
+ self.cpp_info.components["mycomponent"].set_property("custom_name", "mycomponent-name", "custom_generator")
+ """)
+
+ client.save({"mypkg.py": mypkg})
+ client.run("export mypkg.py")
+ client.run("install consumer.py --build missing -s build_type=Release")
+
+ my_generator = client.load("my-generator.txt")
+ assert "mycomponent:mycomponent-name" in my_generator
+
+ files_to_compare = ["FindMyFileName.cmake", "MyFileNameConfig.cmake", "MyFileNameTargets.cmake",
+ "MyFileNameTarget-release.cmake", "MyFileNameConfigVersion.cmake", "mypkg.pc",
+ "mycomponent.pc"]
+ new_approach_contents = get_files_contents(client, files_to_compare)
+
+ mypkg = textwrap.dedent("""
+ import os
+ from conans import ConanFile
+ class MyPkg(ConanFile):
+ settings = "build_type"
+ name = "mypkg"
+ version = "1.0"
+ exports_sources = ["mypkg_bm.cmake"]
+ def package(self):
+ self.copy("mypkg_bm.cmake", dst="lib")
+ def package_info(self):
+ self.cpp_info.components["mycomponent"].libs = ["mycomponent-lib"]
+ self.cpp_info.filenames["cmake_find_package"] = "MyFileName"
+ self.cpp_info.filenames["cmake_find_package_multi"] = "MyFileName"
+ self.cpp_info.components["mycomponent"].names["cmake_find_package"] = "mycomponent-name"
+ self.cpp_info.components["mycomponent"].names["cmake_find_package_multi"] = "mycomponent-name"
+ self.cpp_info.components["mycomponent"].build_modules.append(os.path.join("lib", "mypkg_bm.cmake"))
+ """)
+ client.save({"mypkg.py": mypkg})
+ client.run("export mypkg.py")
+ client.run("install consumer.py -s build_type=Release")
+
+ old_approach_contents = get_files_contents(client, files_to_compare)
+
+ assert new_approach_contents == old_approach_contents
+
+
+def test_same_results_without_components(setup_client):
+ client = setup_client
+ mypkg = textwrap.dedent("""
+ import os
+ from conans import ConanFile
+ class MyPkg(ConanFile):
+ settings = "build_type"
+ name = "mypkg"
+ version = "1.0"
+ exports_sources = ["mypkg_bm.cmake"]
+ def package(self):
+ self.copy("mypkg_bm.cmake", dst="lib")
+ def package_info(self):
+ self.cpp_info.set_property("cmake_file_name", "MyFileName")
+ self.cpp_info.set_property("cmake_target_name", "mypkg-name")
+ self.cpp_info.set_property("cmake_build_modules",[os.path.join("lib",
+ "mypkg_bm.cmake")])
+ self.cpp_info.set_property("custom_name", "mypkg-name", "custom_generator")
+ """)
+
+ client.save({"mypkg.py": mypkg})
+ client.run("export mypkg.py")
+
+ client.run("install consumer.py --build missing -s build_type=Release")
+
+ with open(os.path.join(client.current_folder, "my-generator.txt")) as custom_gen_file:
+ assert "mypkg:mypkg-name" in custom_gen_file.read()
+
+ files_to_compare = ["FindMyFileName.cmake", "MyFileNameConfig.cmake", "MyFileNameTargets.cmake",
+ "MyFileNameTarget-release.cmake", "MyFileNameConfigVersion.cmake", "mypkg.pc"]
+ new_approach_contents = get_files_contents(client, files_to_compare)
+
+ mypkg = textwrap.dedent("""
+ import os
+ from conans import ConanFile
+ class MyPkg(ConanFile):
+ settings = "build_type"
+ name = "mypkg"
+ version = "1.0"
+ exports_sources = ["mypkg_bm.cmake"]
+ def package(self):
+ self.copy("mypkg_bm.cmake", dst="lib")
+ def package_info(self):
+ self.cpp_info.filenames["cmake_find_package"] = "MyFileName"
+ self.cpp_info.filenames["cmake_find_package_multi"] = "MyFileName"
+ self.cpp_info.names["cmake_find_package"] = "mypkg-name"
+ self.cpp_info.names["cmake_find_package_multi"] = "mypkg-name"
+ self.cpp_info.names["custom_generator"] = "mypkg-name"
+ self.cpp_info.build_modules.append(os.path.join("lib", "mypkg_bm.cmake"))
+ """)
+ client.save({"mypkg.py": mypkg})
+ client.run("create mypkg.py")
+ client.run("install consumer.py -s build_type=Release")
+
+ old_approach_contents = get_files_contents(client, files_to_compare)
+
+ assert new_approach_contents == old_approach_contents
+
+
+def test_same_results_specific_generators(setup_client):
+ client = setup_client
+ mypkg = textwrap.dedent("""
+ import os
+ from conans import ConanFile
+ class MyPkg(ConanFile):
+ settings = "build_type"
+ name = "mypkg"
+ version = "1.0"
+ exports_sources = ["mypkg_bm.cmake", "mypkg_anootherbm.cmake"]
+ def package(self):
+ self.copy("mypkg_bm.cmake", dst="lib")
+ self.copy("mypkg_anootherbm.cmake", dst="lib")
+ def package_info(self):
+ self.cpp_info.set_property("cmake_file_name", "MyFileName", "cmake_find_package")
+ self.cpp_info.set_property("cmake_file_name", "MyFileNameMulti", "cmake_find_package_multi")
+ self.cpp_info.set_property("cmake_target_name", "mypkg-name", "cmake_find_package")
+ self.cpp_info.set_property("cmake_target_name", "mypkg-name-multi", "cmake_find_package_multi")
+ self.cpp_info.set_property("cmake_build_modules",[os.path.join("lib",
+ "mypkg_bm.cmake")], "cmake_find_package")
+ self.cpp_info.set_property("cmake_build_modules",[os.path.join("lib",
+ "mypkg_anootherbm.cmake")], "cmake_find_package_multi")
+ """)
+
+ client.save({"mypkg.py": mypkg})
+ client.run("export mypkg.py")
+
+ client.run("install consumer.py --build missing -s build_type=Release")
+
+ files_to_compare = ["FindMyFileName.cmake", "MyFileNameMultiConfig.cmake", "MyFileNameMultiTargets.cmake",
+ "MyFileNameMultiTarget-release.cmake", "MyFileNameMultiConfigVersion.cmake"]
+ new_approach_contents = get_files_contents(client, files_to_compare)
+
+ mypkg = textwrap.dedent("""
+ import os
+ from conans import ConanFile
+ class MyPkg(ConanFile):
+ settings = "build_type"
+ name = "mypkg"
+ version = "1.0"
+ exports_sources = ["mypkg_bm.cmake", "mypkg_anootherbm.cmake"]
+ def package(self):
+ self.copy("mypkg_bm.cmake", dst="lib")
+ self.copy("mypkg_anootherbm.cmake", dst="lib")
+ def package_info(self):
+ self.cpp_info.filenames["cmake_find_package"] = "MyFileName"
+ self.cpp_info.filenames["cmake_find_package_multi"] = "MyFileNameMulti"
+ self.cpp_info.names["cmake_find_package"] = "mypkg-name"
+ self.cpp_info.names["cmake_find_package_multi"] = "mypkg-name-multi"
+ self.cpp_info.build_modules["cmake_find_package"].append(os.path.join("lib", "mypkg_bm.cmake"))
+ self.cpp_info.build_modules["cmake_find_package_multi"].append(os.path.join("lib", "mypkg_anootherbm.cmake"))
+ """)
+ client.save({"mypkg.py": mypkg})
+ client.run("create mypkg.py")
+ client.run("install consumer.py -s build_type=Release")
+
+ old_approach_contents = get_files_contents(client, files_to_compare)
+
+ assert new_approach_contents == old_approach_contents
+
+
+def test_pkg_config_names(setup_client):
+ client = setup_client
+ mypkg = textwrap.dedent("""
+ import os
+ from conans import ConanFile
+ class MyPkg(ConanFile):
+ settings = "build_type"
+ name = "mypkg"
+ version = "1.0"
+ def package_info(self):
+ self.cpp_info.components["mycomponent"].libs = ["mycomponent-lib"]
+ self.cpp_info.components["mycomponent"].set_property("pkg_config_name", "mypkg-config-name")
+ """)
+
+ client.save({"mypkg.py": mypkg})
+ client.run("export mypkg.py")
+ client.run("install consumer.py --build missing")
+
+ with open(os.path.join(client.current_folder, "mypkg-config-name.pc")) as gen_file:
+ assert "mypkg-config-name" in gen_file.read()
diff --git a/conans/test/unittests/model/build_info/generic_properties_test.py b/conans/test/unittests/model/build_info/generic_properties_test.py
new file mode 100644
index 00000000000..30ac0f3d5b9
--- /dev/null
+++ b/conans/test/unittests/model/build_info/generic_properties_test.py
@@ -0,0 +1,20 @@
+from conans.model.build_info import _CppInfo
+
+
+def test_set_get_properties():
+ cpp_info = _CppInfo()
+
+ assert not cpp_info.get_property("my_property")
+ assert not cpp_info.get_property("my_property", "some_generator")
+
+ cpp_info.set_property("my_property", "default_value")
+ assert cpp_info.get_property("my_property") == "default_value"
+ # can you do a get_property for just a family without generator?
+ assert cpp_info.get_property("my_property", generator="cmake_multi") == "default_value"
+ assert cpp_info.get_property("my_property", generator="pkg_config") == "default_value"
+
+ cpp_info.set_property("my_property", "pkg_config_value", generator="pkg_config")
+ assert cpp_info.get_property("my_property", generator="pkg_config") == "pkg_config_value"
+ cpp_info.set_property("other_property", "other_pkg_config_value", generator="pkg_config")
+ assert not cpp_info.get_property("other_property")
+ assert cpp_info.get_property("other_property", generator="pkg_config") == "other_pkg_config_value"
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
}
|
conan-io__conan-8665@0092d41
|
conan-io/conan
|
Python
| 8,665
|
More fine-grained control (using [conf]) for build parallelization
|
Changelog: Feature: More fine-grained control (using [conf]) for build parallelization.
Docs: https://github.com/conan-io/docs/pull/2061
This PR introduces the following entries in `conf`:
* `tools.build:processes`: number of processes to use for every build-helper
* `tools.gnu.make:jobs`: argument for the `--jobs` parameter when running `make` (overrides the general `tools.build:processes`).
* `tools.microsoft.msbuild:max_cpu_count`: argument for the `/m` (`/maxCpuCount`) when running `MSBuild` standalone or via CMake (overrides the general `tools.build:processes`).
* `tools.ninja:jobs`: argument for the `--jobs` parameter when running Ninja generator via CMake or Meson. (overrides the general `tools.build:processes`).
If none of these configuration options is found, Conan won't add anything to the command line when running these tools; whatever the tool's own default is will be used.
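For illustration, a minimal sketch of how the precedence plays out, mirroring the unit tests added in this PR (the `ConfDefinition` loading comes from those tests; exact value types are not guaranteed here):

```python
import textwrap
from conans.model.conf import ConfDefinition

c = ConfDefinition()
c.loads(textwrap.dedent("""\
    tools.build:processes=10
    tools.ninja:jobs=30
    """))
conf = c.get_conanfile_conf(None)

# The tool-specific entry wins: Ninja (via CMake or Meson) gets 30 jobs,
# while Make and MSBuild, having no specific entry here, fall back to 10.
print(conf["tools.ninja"].jobs or conf["tools.build"].processes)     # 30
print(conf["tools.gnu.make"].jobs or conf["tools.build"].processes)  # 10
```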
close https://github.com/conan-io/conan/issues/8598
|
2021-03-17T18:37:32Z
|
Propose ``cpu_count`` alternative in [conf] for build systems parallelism
For build helpers in ``conan.tools.xxxx``, the parallelism should be configurable via [conf] (and not ``cpu_count``).
Please propose a solution, some guidelines:
- No need that recipes specify it, not explicitly, not as arguments, and they can default back to whatever the build system likes.
- All build systems can share a "parallel" value, no need to do one per build-system/helper
- Might be different to concurrent upload/download
|
What about the following entries in `conf`:
* `core.build:processes`: number of processes to use for every build-helper
* `tools.gnu.make:jobs`: argument for the `--jobs` parameter when running `make` (overrides the general `core.build:processes`).
* `tools.microsoft.msbuild:maxCpuCount`: argument for the `/m` (`/maxCpuCount`) when running `MSBuild` standalone or via CMake (overrides the general `core.build:processes`).
* `tools.ninja:jobs`: argument for the `--jobs` parameter when running Ninja generator via CMake or Meson. (overrides the general `core.build:processes`). If no-parallel is requested, this will force `-j1`.
If none of these configuration options is found, it falls back to the old `cpu_count` when parallelization is activated. Alternatively, we could leave the number blank (`-j`, `/m`, ...) and the corresponding build system will run in parallel with its internal defaults (probably unbounded).
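A quick sketch of that override rule, using a plain dict instead of Conan's conf object purely for illustration, and the key names as finally implemented (`tools.build:processes` rather than the initially proposed `core.build:processes`); the helper name below is made up:

```python
def msbuild_max_cpu_count_arg(conf):
    # Tool-specific entry overrides the general one; if neither is set,
    # return None so nothing extra is added to the command line.
    n = conf.get("tools.microsoft.msbuild:max_cpu_count") or conf.get("tools.build:processes")
    return "/m:{}".format(n) if n else None

assert msbuild_max_cpu_count_arg({"tools.build:processes": 10}) == "/m:10"
assert msbuild_max_cpu_count_arg({"tools.microsoft.msbuild:max_cpu_count": 23,
                                  "tools.build:processes": 10}) == "/m:23"
assert msbuild_max_cpu_count_arg({}) is None
```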
|
[
{
"body": "For build helpers in ``conan.tools.xxxx``, the parallelism should be configurable via [conf] (and not ``cpu_count``).\r\n\r\nPlease propose a solution, some guidelines:\r\n\r\n- No need that recipes specify it, not explicitly, not as arguments, and they can default back to whatever the build system likes.\r\n- All build systems can share a \"parallel\" value, no need to do one per build-system/helper\r\n- Might be different to concurrent upload/download",
"number": 8598,
"title": "Propose ``cpu_count`` alternative in [conf] for build systems paralelism"
}
] |
acf6de21d5df09c5f9b4743bb9c2c3e1c33d30c8
|
{
"head_commit": "0092d4103db8702dd4c034004359fd8b46ebcd92",
"head_commit_message": "add unittesting",
"patch_to_review": "diff --git a/conan/tools/cmake/cmake.py b/conan/tools/cmake/cmake.py\nindex 049926a0e2c..43c216c91a3 100644\n--- a/conan/tools/cmake/cmake.py\n+++ b/conan/tools/cmake/cmake.py\n@@ -3,13 +3,15 @@\n \n from conan.tools.cmake.base import CMakeToolchainBase\n from conan.tools.cmake.utils import get_generator, is_multi_configuration\n-from conan.tools.microsoft.msbuild import msbuild_verbosity_cmd_line_arg\n+from conan.tools.gnu.make import make_jobs_cmd_line_arg\n+from conan.tools.meson.meson import ninja_jobs_cmd_line_arg\n+from conan.tools.microsoft.msbuild import msbuild_verbosity_cmd_line_arg, \\\n+ msbuild_max_cpu_count_cmd_line_arg\n from conans.client import tools\n from conans.client.build import join_arguments\n from conans.client.tools.files import chdir\n from conans.client.tools.oss import cpu_count, args_to_string\n from conans.errors import ConanException\n-from conans.model.version import Version\n from conans.util.conan_v2_mode import conan_v2_error\n from conans.util.files import mkdir\n \n@@ -23,15 +25,27 @@ def _validate_recipe(conanfile):\n \n def _cmake_cmd_line_args(conanfile, generator, parallel):\n args = []\n- compiler_version = conanfile.settings.get_safe(\"compiler.version\")\n- if generator and parallel:\n- if (\"Makefiles\" in generator or \"Ninja\" in generator) and \"NMake\" not in generator:\n- args.append(\"-j%i\" % cpu_count(conanfile.output))\n- elif \"Visual Studio\" in generator and compiler_version and Version(compiler_version) >= \"10\":\n- # Parallel for building projects in the solution\n- args.append(\"/m:%i\" % cpu_count(output=conanfile.output))\n-\n- if generator and \"Visual Studio\" in generator:\n+ if not generator:\n+ return args\n+\n+ # Arguments related to parallel\n+ if \"Makefiles\" in generator and \"NMake\" not in generator and parallel:\n+ njobs = make_jobs_cmd_line_arg(conanfile)\n+ if njobs:\n+ args.append(njobs)\n+\n+ if \"Ninja\" in generator and \"NMake\" not in generator and parallel:\n+ njobs = ninja_jobs_cmd_line_arg(conanfile)\n+ if njobs:\n+ args.append(njobs)\n+\n+ if \"Visual Studio\" in generator and parallel:\n+ max_cpu_count = msbuild_max_cpu_count_cmd_line_arg(conanfile)\n+ if max_cpu_count:\n+ args.append(max_cpu_count)\n+\n+ # Arguments for verbosity\n+ if \"Visual Studio\" in generator:\n verbosity = msbuild_verbosity_cmd_line_arg(conanfile)\n if verbosity:\n args.append(verbosity)\ndiff --git a/conan/tools/gnu/make.py b/conan/tools/gnu/make.py\nindex 7091ad945c5..44fa240d716 100644\n--- a/conan/tools/gnu/make.py\n+++ b/conan/tools/gnu/make.py\n@@ -3,11 +3,19 @@\n from collections import OrderedDict\n \n from jinja2 import Template\n+\n from conans.client.build.compiler_flags import build_type_define, libcxx_define\n from conans.client.tools.oss import detected_architecture, detected_os, get_build_os_arch\n from conans.util.files import save\n \n \n+def make_jobs_cmd_line_arg(conanfile):\n+ njobs = conanfile.conf[\"tools.gnu.make\"].jobs or \\\n+ conanfile.conf[\"tools.build\"].processes\n+ if njobs:\n+ return \"-j{}\".format(njobs)\n+\n+\n class MakeToolchain(object):\n filename = \"conan_toolchain.mak\"\n \ndiff --git a/conan/tools/meson/meson.py b/conan/tools/meson/meson.py\nindex 8a85db4a569..9c8d98933c4 100644\n--- a/conan/tools/meson/meson.py\n+++ b/conan/tools/meson/meson.py\n@@ -2,7 +2,14 @@\n \n from conan.tools.meson import MesonToolchain\n from conan.tools.microsoft.visual import vcvars_command, vcvars_arch\n-from conans.client.tools.oss import cross_building, cpu_count\n+from 
conans.client.tools.oss import cross_building\n+\n+\n+def ninja_jobs_cmd_line_arg(conanfile):\n+ njobs = conanfile.conf[\"tools.ninja\"].jobs or \\\n+ conanfile.conf[\"tools.build\"].processes\n+ if njobs:\n+ return \"-j{}\".format(njobs)\n \n \n class Meson(object):\n@@ -33,14 +40,17 @@ def configure(self, source_folder=None):\n if cross_building(self._conanfile):\n cmd += ' --cross-file \"{}\"'.format(MesonToolchain.cross_filename)\n else:\n- cmd += ' --native-file \"{}\"'. format(MesonToolchain.native_filename)\n+ cmd += ' --native-file \"{}\"'.format(MesonToolchain.native_filename)\n cmd += ' \"{}\" \"{}\"'.format(self._build_dir, source)\n if self._conanfile.package_folder:\n cmd += ' -Dprefix=\"{}\"'.format(self._conanfile.package_folder)\n self._run(cmd)\n \n def build(self, target=None):\n- cmd = 'meson compile -C \"{}\" -j {}'.format(self._build_dir, cpu_count())\n+ cmd = 'meson compile -C \"{}\"'.format(self._build_dir)\n+ njobs = ninja_jobs_cmd_line_arg(self._conanfile)\n+ if njobs:\n+ cmd += \" {}\".format(njobs)\n if target:\n cmd += \" {}\".format(target)\n self._run(cmd)\ndiff --git a/conan/tools/microsoft/msbuild.py b/conan/tools/microsoft/msbuild.py\nindex d1c22d064b2..3ac28f0c5a8 100644\n--- a/conan/tools/microsoft/msbuild.py\n+++ b/conan/tools/microsoft/msbuild.py\n@@ -9,6 +9,13 @@ def msbuild_verbosity_cmd_line_arg(conanfile):\n return '/verbosity:{}'.format(verbosity)\n \n \n+def msbuild_max_cpu_count_cmd_line_arg(conanfile):\n+ max_cpu_count = conanfile.conf[\"tools.microsoft.msbuild\"].max_cpu_count or \\\n+ conanfile.conf[\"tools.build\"].processes\n+ if max_cpu_count:\n+ return \"/m:{}\".format(max_cpu_count)\n+\n+\n class MSBuild(object):\n def __init__(self, conanfile):\n self._conanfile = conanfile\n@@ -34,6 +41,10 @@ def command(self, sln):\n if verbosity:\n cmd += \" {}\".format(verbosity)\n \n+ max_cpu_count = msbuild_max_cpu_count_cmd_line_arg(self._conanfile)\n+ if max_cpu_count:\n+ cmd += \" {}\".format(max_cpu_count)\n+\n return cmd\n \n def build(self, sln):\ndiff --git a/conans/test/unittests/tools/cmake/test_cmake_cmd_line_args.py b/conans/test/unittests/tools/cmake/test_cmake_cmd_line_args.py\nnew file mode 100644\nindex 00000000000..ba40251dc7e\n--- /dev/null\n+++ b/conans/test/unittests/tools/cmake/test_cmake_cmd_line_args.py\n@@ -0,0 +1,54 @@\n+import textwrap\n+\n+import pytest\n+\n+from conan.tools.cmake.cmake import _cmake_cmd_line_args\n+from conans.model.conf import ConfDefinition\n+from conans.test.utils.mocks import ConanFileMock\n+\n+\[email protected]\n+def conanfile():\n+ c = ConfDefinition()\n+ c.loads(textwrap.dedent(\"\"\"\\\n+ tools.gnu.make:jobs=40\n+ tools.ninja:jobs=30\n+ tools.microsoft.msbuild:max_cpu_count=20\n+ tools.build:processes=10\n+ \"\"\"))\n+\n+ conanfile = ConanFileMock()\n+ conanfile.conf = c.get_conanfile_conf(None)\n+ return conanfile\n+\n+\n+def test_no_generator(conanfile):\n+ args = _cmake_cmd_line_args(conanfile, None, parallel=True)\n+ assert not len(args)\n+\n+\n+def test_makefiles(conanfile):\n+ args = _cmake_cmd_line_args(conanfile, 'Unix Makefiles', parallel=True)\n+ assert args == ['-j40']\n+\n+ args = _cmake_cmd_line_args(conanfile, 'Unix Makefiles', parallel=False)\n+ assert not len(args)\n+\n+ args = _cmake_cmd_line_args(conanfile, 'NMake Makefiles', parallel=True)\n+ assert not len(args)\n+\n+\n+def test_ninja(conanfile):\n+ args = _cmake_cmd_line_args(conanfile, 'Ninja', parallel=True)\n+ assert ['-j30'] == args\n+\n+ args = _cmake_cmd_line_args(conanfile, 'Ninja', parallel=False)\n+ assert not 
len(args)\n+\n+\n+def test_visual_studio(conanfile):\n+ args = _cmake_cmd_line_args(conanfile, 'Visual Studio 16 2019', parallel=True)\n+ assert ['/m:20'] == args\n+\n+ args = _cmake_cmd_line_args(conanfile, 'Ninja', parallel=False)\n+ assert not len(args)\ndiff --git a/conans/test/unittests/tools/gnu/__init__.py b/conans/test/unittests/tools/gnu/__init__.py\nnew file mode 100644\nindex 00000000000..e69de29bb2d\ndiff --git a/conans/test/unittests/tools/gnu/test_make_jobs_cmd_line_arg.py b/conans/test/unittests/tools/gnu/test_make_jobs_cmd_line_arg.py\nnew file mode 100644\nindex 00000000000..b2a463641c3\n--- /dev/null\n+++ b/conans/test/unittests/tools/gnu/test_make_jobs_cmd_line_arg.py\n@@ -0,0 +1,53 @@\n+import textwrap\n+\n+from conan.tools.gnu.make import make_jobs_cmd_line_arg\n+from conans.model.conf import ConfDefinition\n+from conans.test.utils.mocks import ConanFileMock\n+\n+\n+def test_tools_build():\n+ c = ConfDefinition()\n+ c.loads(textwrap.dedent(\"\"\"\\\n+ tools.build:processes=10\n+ \"\"\"))\n+\n+ conanfile = ConanFileMock()\n+ conanfile.conf = c.get_conanfile_conf(None)\n+ njobs = make_jobs_cmd_line_arg(conanfile)\n+ assert njobs == \"-j10\"\n+\n+\n+def test_tools_gnu_make():\n+ c = ConfDefinition()\n+ c.loads(textwrap.dedent(\"\"\"\\\n+ tools.gnu.make:jobs=23\n+ \"\"\"))\n+\n+ conanfile = ConanFileMock()\n+ conanfile.conf = c.get_conanfile_conf(None)\n+ njobs = make_jobs_cmd_line_arg(conanfile)\n+ assert njobs == \"-j23\"\n+\n+\n+def test_both_values():\n+ c = ConfDefinition()\n+ c.loads(textwrap.dedent(\"\"\"\\\n+ tools.gnu.make:jobs=23\n+ tools.build:processes=10\n+ \"\"\"))\n+\n+ conanfile = ConanFileMock()\n+ conanfile.conf = c.get_conanfile_conf(None)\n+ njobs = make_jobs_cmd_line_arg(conanfile)\n+ assert njobs == \"-j23\"\n+\n+\n+def test_none():\n+ c = ConfDefinition()\n+ c.loads(textwrap.dedent(\"\"\"\\\n+ \"\"\"))\n+\n+ conanfile = ConanFileMock()\n+ conanfile.conf = c.get_conanfile_conf(None)\n+ njobs = make_jobs_cmd_line_arg(conanfile)\n+ assert njobs is None\ndiff --git a/conans/test/unittests/tools/meson/__init__.py b/conans/test/unittests/tools/meson/__init__.py\nnew file mode 100644\nindex 00000000000..e69de29bb2d\ndiff --git a/conans/test/unittests/tools/meson/test_meson.py b/conans/test/unittests/tools/meson/test_meson.py\nnew file mode 100644\nindex 00000000000..364157f479d\n--- /dev/null\n+++ b/conans/test/unittests/tools/meson/test_meson.py\n@@ -0,0 +1,28 @@\n+import textwrap\n+\n+from conan.tools.meson import Meson\n+from conans.model.conf import ConfDefinition\n+from conans.test.utils.mocks import ConanFileMock, MockSettings\n+\n+\n+def test_meson_build():\n+ c = ConfDefinition()\n+ c.loads(textwrap.dedent(\"\"\"\\\n+ tools.ninja:jobs=23\n+ tools.build:processes=10\n+ \"\"\"))\n+\n+ settings = MockSettings({\"build_type\": \"Release\",\n+ \"compiler\": \"gcc\",\n+ \"compiler.version\": \"7\",\n+ \"os\": \"Linux\",\n+ \"arch\": \"x86_64\"})\n+ conanfile = ConanFileMock()\n+ conanfile.settings = settings\n+ conanfile.display_name = 'test'\n+ conanfile.conf = c.get_conanfile_conf(None)\n+\n+ meson = Meson(conanfile)\n+ meson.build()\n+ \n+ assert '-j23' in str(conanfile.command)\ndiff --git a/conans/test/unittests/tools/meson/test_ninja_jobs_cmd_line_arg.py b/conans/test/unittests/tools/meson/test_ninja_jobs_cmd_line_arg.py\nnew file mode 100644\nindex 00000000000..278f50d8737\n--- /dev/null\n+++ b/conans/test/unittests/tools/meson/test_ninja_jobs_cmd_line_arg.py\n@@ -0,0 +1,53 @@\n+import textwrap\n+\n+from conan.tools.meson.meson import 
ninja_jobs_cmd_line_arg\n+from conans.model.conf import ConfDefinition\n+from conans.test.utils.mocks import ConanFileMock\n+\n+\n+def test_tools_build():\n+ c = ConfDefinition()\n+ c.loads(textwrap.dedent(\"\"\"\\\n+ tools.build:processes=10\n+ \"\"\"))\n+\n+ conanfile = ConanFileMock()\n+ conanfile.conf = c.get_conanfile_conf(None)\n+ njobs = ninja_jobs_cmd_line_arg(conanfile)\n+ assert njobs == \"-j10\"\n+\n+\n+def test_tools_ning():\n+ c = ConfDefinition()\n+ c.loads(textwrap.dedent(\"\"\"\\\n+ tools.ninja:jobs=23\n+ \"\"\"))\n+\n+ conanfile = ConanFileMock()\n+ conanfile.conf = c.get_conanfile_conf(None)\n+ njobs = ninja_jobs_cmd_line_arg(conanfile)\n+ assert njobs == \"-j23\"\n+\n+\n+def test_both_values():\n+ c = ConfDefinition()\n+ c.loads(textwrap.dedent(\"\"\"\\\n+ tools.ninja:jobs=23\n+ tools.build:processes=10\n+ \"\"\"))\n+\n+ conanfile = ConanFileMock()\n+ conanfile.conf = c.get_conanfile_conf(None)\n+ njobs = ninja_jobs_cmd_line_arg(conanfile)\n+ assert njobs == \"-j23\"\n+\n+\n+def test_none():\n+ c = ConfDefinition()\n+ c.loads(textwrap.dedent(\"\"\"\\\n+ \"\"\"))\n+\n+ conanfile = ConanFileMock()\n+ conanfile.conf = c.get_conanfile_conf(None)\n+ njobs = ninja_jobs_cmd_line_arg(conanfile)\n+ assert njobs is None\ndiff --git a/conans/test/unittests/tools/microsoft/__init__.py b/conans/test/unittests/tools/microsoft/__init__.py\nnew file mode 100644\nindex 00000000000..e69de29bb2d\ndiff --git a/conans/test/unittests/tools/microsoft/test_msbuild.py b/conans/test/unittests/tools/microsoft/test_msbuild.py\nnew file mode 100644\nindex 00000000000..28bbdcf94a2\n--- /dev/null\n+++ b/conans/test/unittests/tools/microsoft/test_msbuild.py\n@@ -0,0 +1,27 @@\n+import textwrap\n+\n+from conan.tools.microsoft import MSBuild\n+from conans.model.conf import ConfDefinition\n+from conans.test.utils.mocks import ConanFileMock, MockSettings\n+\n+\n+def test_meson_build():\n+ c = ConfDefinition()\n+ c.loads(textwrap.dedent(\"\"\"\\\n+ tools.microsoft.msbuild:max_cpu_count=23\n+ tools.build:processes=10\n+ \"\"\"))\n+\n+ settings = MockSettings({\"build_type\": \"Release\",\n+ \"compiler\": \"gcc\",\n+ \"compiler.version\": \"7\",\n+ \"os\": \"Linux\",\n+ \"arch\": \"x86_64\"})\n+ conanfile = ConanFileMock()\n+ conanfile.settings = settings\n+ conanfile.conf = c.get_conanfile_conf(None)\n+\n+ msbuild = MSBuild(conanfile)\n+ cmd = msbuild.command('project.sln')\n+\n+ assert '/m:23' in cmd\ndiff --git a/conans/test/unittests/tools/microsoft/test_msbuild_max_cpu_count_cmd_line_arg.py b/conans/test/unittests/tools/microsoft/test_msbuild_max_cpu_count_cmd_line_arg.py\nnew file mode 100644\nindex 00000000000..95fd0452a5b\n--- /dev/null\n+++ b/conans/test/unittests/tools/microsoft/test_msbuild_max_cpu_count_cmd_line_arg.py\n@@ -0,0 +1,53 @@\n+import textwrap\n+\n+from conan.tools.microsoft.msbuild import msbuild_max_cpu_count_cmd_line_arg\n+from conans.model.conf import ConfDefinition\n+from conans.test.utils.mocks import ConanFileMock\n+\n+\n+def test_tools_build():\n+ c = ConfDefinition()\n+ c.loads(textwrap.dedent(\"\"\"\\\n+ tools.build:processes=10\n+ \"\"\"))\n+\n+ conanfile = ConanFileMock()\n+ conanfile.conf = c.get_conanfile_conf(None)\n+ max_cpu_count = msbuild_max_cpu_count_cmd_line_arg(conanfile)\n+ assert max_cpu_count == \"/m:10\"\n+\n+\n+def test_tools_ning():\n+ c = ConfDefinition()\n+ c.loads(textwrap.dedent(\"\"\"\\\n+ tools.microsoft.msbuild:max_cpu_count=23\n+ \"\"\"))\n+\n+ conanfile = ConanFileMock()\n+ conanfile.conf = c.get_conanfile_conf(None)\n+ max_cpu_count = 
msbuild_max_cpu_count_cmd_line_arg(conanfile)\n+ assert max_cpu_count == \"/m:23\"\n+\n+\n+def test_both_values():\n+ c = ConfDefinition()\n+ c.loads(textwrap.dedent(\"\"\"\\\n+ tools.microsoft.msbuild:max_cpu_count=23\n+ tools.build:processes=10\n+ \"\"\"))\n+\n+ conanfile = ConanFileMock()\n+ conanfile.conf = c.get_conanfile_conf(None)\n+ max_cpu_count = msbuild_max_cpu_count_cmd_line_arg(conanfile)\n+ assert max_cpu_count == \"/m:23\"\n+\n+\n+def test_none():\n+ c = ConfDefinition()\n+ c.loads(textwrap.dedent(\"\"\"\\\n+ \"\"\"))\n+\n+ conanfile = ConanFileMock()\n+ conanfile.conf = c.get_conanfile_conf(None)\n+ max_cpu_count = msbuild_max_cpu_count_cmd_line_arg(conanfile)\n+ assert max_cpu_count is None\n"
}
|
[
{
"diff_hunk": "@@ -23,15 +25,27 @@ def _validate_recipe(conanfile):\n \n def _cmake_cmd_line_args(conanfile, generator, parallel):\n args = []\n- compiler_version = conanfile.settings.get_safe(\"compiler.version\")\n- if generator and parallel:\n- if (\"Makefiles\" in generator or \"Ninja\" in generator) and \"NMake\" not in generator:\n- args.append(\"-j%i\" % cpu_count(conanfile.output))\n- elif \"Visual Studio\" in generator and compiler_version and Version(compiler_version) >= \"10\":\n- # Parallel for building projects in the solution\n- args.append(\"/m:%i\" % cpu_count(output=conanfile.output))\n-\n- if generator and \"Visual Studio\" in generator:\n+ if not generator:\n+ return args\n+\n+ # Arguments related to parallel\n+ if \"Makefiles\" in generator and \"NMake\" not in generator and parallel:",
"line": null,
"original_line": 32,
"original_start_line": null,
"path": "conan/tools/cmake/cmake.py",
"start_line": null,
"text": "@user1:\nmaybe check for parallel just once and use nested cond? we're checking 3 times right now"
}
] |
6222e240e16f13e2f4025a3820e62870ac615fcb
|
diff --git a/conan/tools/cmake/cmake.py b/conan/tools/cmake/cmake.py
index 049926a0e2c..dc6fc8b72c4 100644
--- a/conan/tools/cmake/cmake.py
+++ b/conan/tools/cmake/cmake.py
@@ -3,13 +3,15 @@
from conan.tools.cmake.base import CMakeToolchainBase
from conan.tools.cmake.utils import get_generator, is_multi_configuration
-from conan.tools.microsoft.msbuild import msbuild_verbosity_cmd_line_arg
+from conan.tools.gnu.make import make_jobs_cmd_line_arg
+from conan.tools.meson.meson import ninja_jobs_cmd_line_arg
+from conan.tools.microsoft.msbuild import msbuild_verbosity_cmd_line_arg, \
+ msbuild_max_cpu_count_cmd_line_arg
from conans.client import tools
from conans.client.build import join_arguments
from conans.client.tools.files import chdir
from conans.client.tools.oss import cpu_count, args_to_string
from conans.errors import ConanException
-from conans.model.version import Version
from conans.util.conan_v2_mode import conan_v2_error
from conans.util.files import mkdir
@@ -23,15 +25,28 @@ def _validate_recipe(conanfile):
def _cmake_cmd_line_args(conanfile, generator, parallel):
args = []
- compiler_version = conanfile.settings.get_safe("compiler.version")
- if generator and parallel:
- if ("Makefiles" in generator or "Ninja" in generator) and "NMake" not in generator:
- args.append("-j%i" % cpu_count(conanfile.output))
- elif "Visual Studio" in generator and compiler_version and Version(compiler_version) >= "10":
- # Parallel for building projects in the solution
- args.append("/m:%i" % cpu_count(output=conanfile.output))
-
- if generator and "Visual Studio" in generator:
+ if not generator:
+ return args
+
+ # Arguments related to parallel
+ if parallel:
+ if "Makefiles" in generator and "NMake" not in generator:
+ njobs = make_jobs_cmd_line_arg(conanfile)
+ if njobs:
+ args.append(njobs)
+
+ if "Ninja" in generator and "NMake" not in generator:
+ njobs = ninja_jobs_cmd_line_arg(conanfile)
+ if njobs:
+ args.append(njobs)
+
+ if "Visual Studio" in generator:
+ max_cpu_count = msbuild_max_cpu_count_cmd_line_arg(conanfile)
+ if max_cpu_count:
+ args.append(max_cpu_count)
+
+ # Arguments for verbosity
+ if "Visual Studio" in generator:
verbosity = msbuild_verbosity_cmd_line_arg(conanfile)
if verbosity:
args.append(verbosity)
diff --git a/conan/tools/gnu/make.py b/conan/tools/gnu/make.py
index 7091ad945c5..44fa240d716 100644
--- a/conan/tools/gnu/make.py
+++ b/conan/tools/gnu/make.py
@@ -3,11 +3,19 @@
from collections import OrderedDict
from jinja2 import Template
+
from conans.client.build.compiler_flags import build_type_define, libcxx_define
from conans.client.tools.oss import detected_architecture, detected_os, get_build_os_arch
from conans.util.files import save
+def make_jobs_cmd_line_arg(conanfile):
+ njobs = conanfile.conf["tools.gnu.make"].jobs or \
+ conanfile.conf["tools.build"].processes
+ if njobs:
+ return "-j{}".format(njobs)
+
+
class MakeToolchain(object):
filename = "conan_toolchain.mak"
diff --git a/conan/tools/meson/meson.py b/conan/tools/meson/meson.py
index 8a85db4a569..9c8d98933c4 100644
--- a/conan/tools/meson/meson.py
+++ b/conan/tools/meson/meson.py
@@ -2,7 +2,14 @@
from conan.tools.meson import MesonToolchain
from conan.tools.microsoft.visual import vcvars_command, vcvars_arch
-from conans.client.tools.oss import cross_building, cpu_count
+from conans.client.tools.oss import cross_building
+
+
+def ninja_jobs_cmd_line_arg(conanfile):
+ njobs = conanfile.conf["tools.ninja"].jobs or \
+ conanfile.conf["tools.build"].processes
+ if njobs:
+ return "-j{}".format(njobs)
class Meson(object):
@@ -33,14 +40,17 @@ def configure(self, source_folder=None):
if cross_building(self._conanfile):
cmd += ' --cross-file "{}"'.format(MesonToolchain.cross_filename)
else:
- cmd += ' --native-file "{}"'. format(MesonToolchain.native_filename)
+ cmd += ' --native-file "{}"'.format(MesonToolchain.native_filename)
cmd += ' "{}" "{}"'.format(self._build_dir, source)
if self._conanfile.package_folder:
cmd += ' -Dprefix="{}"'.format(self._conanfile.package_folder)
self._run(cmd)
def build(self, target=None):
- cmd = 'meson compile -C "{}" -j {}'.format(self._build_dir, cpu_count())
+ cmd = 'meson compile -C "{}"'.format(self._build_dir)
+ njobs = ninja_jobs_cmd_line_arg(self._conanfile)
+ if njobs:
+ cmd += " {}".format(njobs)
if target:
cmd += " {}".format(target)
self._run(cmd)
diff --git a/conan/tools/microsoft/msbuild.py b/conan/tools/microsoft/msbuild.py
index d1c22d064b2..3ac28f0c5a8 100644
--- a/conan/tools/microsoft/msbuild.py
+++ b/conan/tools/microsoft/msbuild.py
@@ -9,6 +9,13 @@ def msbuild_verbosity_cmd_line_arg(conanfile):
return '/verbosity:{}'.format(verbosity)
+def msbuild_max_cpu_count_cmd_line_arg(conanfile):
+ max_cpu_count = conanfile.conf["tools.microsoft.msbuild"].max_cpu_count or \
+ conanfile.conf["tools.build"].processes
+ if max_cpu_count:
+ return "/m:{}".format(max_cpu_count)
+
+
class MSBuild(object):
def __init__(self, conanfile):
self._conanfile = conanfile
@@ -34,6 +41,10 @@ def command(self, sln):
if verbosity:
cmd += " {}".format(verbosity)
+ max_cpu_count = msbuild_max_cpu_count_cmd_line_arg(self._conanfile)
+ if max_cpu_count:
+ cmd += " {}".format(max_cpu_count)
+
return cmd
def build(self, sln):
diff --git a/conans/test/unittests/tools/cmake/test_cmake_cmd_line_args.py b/conans/test/unittests/tools/cmake/test_cmake_cmd_line_args.py
new file mode 100644
index 00000000000..ba40251dc7e
--- /dev/null
+++ b/conans/test/unittests/tools/cmake/test_cmake_cmd_line_args.py
@@ -0,0 +1,54 @@
+import textwrap
+
+import pytest
+
+from conan.tools.cmake.cmake import _cmake_cmd_line_args
+from conans.model.conf import ConfDefinition
+from conans.test.utils.mocks import ConanFileMock
+
+
[email protected]
+def conanfile():
+ c = ConfDefinition()
+ c.loads(textwrap.dedent("""\
+ tools.gnu.make:jobs=40
+ tools.ninja:jobs=30
+ tools.microsoft.msbuild:max_cpu_count=20
+ tools.build:processes=10
+ """))
+
+ conanfile = ConanFileMock()
+ conanfile.conf = c.get_conanfile_conf(None)
+ return conanfile
+
+
+def test_no_generator(conanfile):
+ args = _cmake_cmd_line_args(conanfile, None, parallel=True)
+ assert not len(args)
+
+
+def test_makefiles(conanfile):
+ args = _cmake_cmd_line_args(conanfile, 'Unix Makefiles', parallel=True)
+ assert args == ['-j40']
+
+ args = _cmake_cmd_line_args(conanfile, 'Unix Makefiles', parallel=False)
+ assert not len(args)
+
+ args = _cmake_cmd_line_args(conanfile, 'NMake Makefiles', parallel=True)
+ assert not len(args)
+
+
+def test_ninja(conanfile):
+ args = _cmake_cmd_line_args(conanfile, 'Ninja', parallel=True)
+ assert ['-j30'] == args
+
+ args = _cmake_cmd_line_args(conanfile, 'Ninja', parallel=False)
+ assert not len(args)
+
+
+def test_visual_studio(conanfile):
+ args = _cmake_cmd_line_args(conanfile, 'Visual Studio 16 2019', parallel=True)
+ assert ['/m:20'] == args
+
+ args = _cmake_cmd_line_args(conanfile, 'Ninja', parallel=False)
+ assert not len(args)
diff --git a/conans/test/unittests/tools/gnu/__init__.py b/conans/test/unittests/tools/gnu/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/conans/test/unittests/tools/gnu/test_make_jobs_cmd_line_arg.py b/conans/test/unittests/tools/gnu/test_make_jobs_cmd_line_arg.py
new file mode 100644
index 00000000000..b2a463641c3
--- /dev/null
+++ b/conans/test/unittests/tools/gnu/test_make_jobs_cmd_line_arg.py
@@ -0,0 +1,53 @@
+import textwrap
+
+from conan.tools.gnu.make import make_jobs_cmd_line_arg
+from conans.model.conf import ConfDefinition
+from conans.test.utils.mocks import ConanFileMock
+
+
+def test_tools_build():
+ c = ConfDefinition()
+ c.loads(textwrap.dedent("""\
+ tools.build:processes=10
+ """))
+
+ conanfile = ConanFileMock()
+ conanfile.conf = c.get_conanfile_conf(None)
+ njobs = make_jobs_cmd_line_arg(conanfile)
+ assert njobs == "-j10"
+
+
+def test_tools_gnu_make():
+ c = ConfDefinition()
+ c.loads(textwrap.dedent("""\
+ tools.gnu.make:jobs=23
+ """))
+
+ conanfile = ConanFileMock()
+ conanfile.conf = c.get_conanfile_conf(None)
+ njobs = make_jobs_cmd_line_arg(conanfile)
+ assert njobs == "-j23"
+
+
+def test_both_values():
+ c = ConfDefinition()
+ c.loads(textwrap.dedent("""\
+ tools.gnu.make:jobs=23
+ tools.build:processes=10
+ """))
+
+ conanfile = ConanFileMock()
+ conanfile.conf = c.get_conanfile_conf(None)
+ njobs = make_jobs_cmd_line_arg(conanfile)
+ assert njobs == "-j23"
+
+
+def test_none():
+ c = ConfDefinition()
+ c.loads(textwrap.dedent("""\
+ """))
+
+ conanfile = ConanFileMock()
+ conanfile.conf = c.get_conanfile_conf(None)
+ njobs = make_jobs_cmd_line_arg(conanfile)
+ assert njobs is None
diff --git a/conans/test/unittests/tools/meson/__init__.py b/conans/test/unittests/tools/meson/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/conans/test/unittests/tools/meson/test_meson.py b/conans/test/unittests/tools/meson/test_meson.py
new file mode 100644
index 00000000000..364157f479d
--- /dev/null
+++ b/conans/test/unittests/tools/meson/test_meson.py
@@ -0,0 +1,28 @@
+import textwrap
+
+from conan.tools.meson import Meson
+from conans.model.conf import ConfDefinition
+from conans.test.utils.mocks import ConanFileMock, MockSettings
+
+
+def test_meson_build():
+ c = ConfDefinition()
+ c.loads(textwrap.dedent("""\
+ tools.ninja:jobs=23
+ tools.build:processes=10
+ """))
+
+ settings = MockSettings({"build_type": "Release",
+ "compiler": "gcc",
+ "compiler.version": "7",
+ "os": "Linux",
+ "arch": "x86_64"})
+ conanfile = ConanFileMock()
+ conanfile.settings = settings
+ conanfile.display_name = 'test'
+ conanfile.conf = c.get_conanfile_conf(None)
+
+ meson = Meson(conanfile)
+ meson.build()
+
+ assert '-j23' in str(conanfile.command)
diff --git a/conans/test/unittests/tools/meson/test_ninja_jobs_cmd_line_arg.py b/conans/test/unittests/tools/meson/test_ninja_jobs_cmd_line_arg.py
new file mode 100644
index 00000000000..278f50d8737
--- /dev/null
+++ b/conans/test/unittests/tools/meson/test_ninja_jobs_cmd_line_arg.py
@@ -0,0 +1,53 @@
+import textwrap
+
+from conan.tools.meson.meson import ninja_jobs_cmd_line_arg
+from conans.model.conf import ConfDefinition
+from conans.test.utils.mocks import ConanFileMock
+
+
+def test_tools_build():
+ c = ConfDefinition()
+ c.loads(textwrap.dedent("""\
+ tools.build:processes=10
+ """))
+
+ conanfile = ConanFileMock()
+ conanfile.conf = c.get_conanfile_conf(None)
+ njobs = ninja_jobs_cmd_line_arg(conanfile)
+ assert njobs == "-j10"
+
+
+def test_tools_ning():
+ c = ConfDefinition()
+ c.loads(textwrap.dedent("""\
+ tools.ninja:jobs=23
+ """))
+
+ conanfile = ConanFileMock()
+ conanfile.conf = c.get_conanfile_conf(None)
+ njobs = ninja_jobs_cmd_line_arg(conanfile)
+ assert njobs == "-j23"
+
+
+def test_both_values():
+ c = ConfDefinition()
+ c.loads(textwrap.dedent("""\
+ tools.ninja:jobs=23
+ tools.build:processes=10
+ """))
+
+ conanfile = ConanFileMock()
+ conanfile.conf = c.get_conanfile_conf(None)
+ njobs = ninja_jobs_cmd_line_arg(conanfile)
+ assert njobs == "-j23"
+
+
+def test_none():
+ c = ConfDefinition()
+ c.loads(textwrap.dedent("""\
+ """))
+
+ conanfile = ConanFileMock()
+ conanfile.conf = c.get_conanfile_conf(None)
+ njobs = ninja_jobs_cmd_line_arg(conanfile)
+ assert njobs is None
diff --git a/conans/test/unittests/tools/microsoft/__init__.py b/conans/test/unittests/tools/microsoft/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/conans/test/unittests/tools/microsoft/test_msbuild.py b/conans/test/unittests/tools/microsoft/test_msbuild.py
new file mode 100644
index 00000000000..d61c1827e9b
--- /dev/null
+++ b/conans/test/unittests/tools/microsoft/test_msbuild.py
@@ -0,0 +1,27 @@
+import textwrap
+
+from conan.tools.microsoft import MSBuild
+from conans.model.conf import ConfDefinition
+from conans.test.utils.mocks import ConanFileMock, MockSettings
+
+
+def test_msbuild_cpu_count():
+ c = ConfDefinition()
+ c.loads(textwrap.dedent("""\
+ tools.microsoft.msbuild:max_cpu_count=23
+ tools.build:processes=10
+ """))
+
+ settings = MockSettings({"build_type": "Release",
+ "compiler": "gcc",
+ "compiler.version": "7",
+ "os": "Linux",
+ "arch": "x86_64"})
+ conanfile = ConanFileMock()
+ conanfile.settings = settings
+ conanfile.conf = c.get_conanfile_conf(None)
+
+ msbuild = MSBuild(conanfile)
+ cmd = msbuild.command('project.sln')
+
+ assert '/m:23' in cmd
diff --git a/conans/test/unittests/tools/microsoft/test_msbuild_max_cpu_count_cmd_line_arg.py b/conans/test/unittests/tools/microsoft/test_msbuild_max_cpu_count_cmd_line_arg.py
new file mode 100644
index 00000000000..95fd0452a5b
--- /dev/null
+++ b/conans/test/unittests/tools/microsoft/test_msbuild_max_cpu_count_cmd_line_arg.py
@@ -0,0 +1,53 @@
+import textwrap
+
+from conan.tools.microsoft.msbuild import msbuild_max_cpu_count_cmd_line_arg
+from conans.model.conf import ConfDefinition
+from conans.test.utils.mocks import ConanFileMock
+
+
+def test_tools_build():
+ c = ConfDefinition()
+ c.loads(textwrap.dedent("""\
+ tools.build:processes=10
+ """))
+
+ conanfile = ConanFileMock()
+ conanfile.conf = c.get_conanfile_conf(None)
+ max_cpu_count = msbuild_max_cpu_count_cmd_line_arg(conanfile)
+ assert max_cpu_count == "/m:10"
+
+
+def test_tools_ning():
+ c = ConfDefinition()
+ c.loads(textwrap.dedent("""\
+ tools.microsoft.msbuild:max_cpu_count=23
+ """))
+
+ conanfile = ConanFileMock()
+ conanfile.conf = c.get_conanfile_conf(None)
+ max_cpu_count = msbuild_max_cpu_count_cmd_line_arg(conanfile)
+ assert max_cpu_count == "/m:23"
+
+
+def test_both_values():
+ c = ConfDefinition()
+ c.loads(textwrap.dedent("""\
+ tools.microsoft.msbuild:max_cpu_count=23
+ tools.build:processes=10
+ """))
+
+ conanfile = ConanFileMock()
+ conanfile.conf = c.get_conanfile_conf(None)
+ max_cpu_count = msbuild_max_cpu_count_cmd_line_arg(conanfile)
+ assert max_cpu_count == "/m:23"
+
+
+def test_none():
+ c = ConfDefinition()
+ c.loads(textwrap.dedent("""\
+ """))
+
+ conanfile = ConanFileMock()
+ conanfile.conf = c.get_conanfile_conf(None)
+ max_cpu_count = msbuild_max_cpu_count_cmd_line_arg(conanfile)
+ assert max_cpu_count is None
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
}
|
conan-io__conan-8625@3c0b44e
|
conan-io/conan
|
Python
| 8,625
|
Do not remove sh from path in the new CMake helper
|
Changelog: Feature: Do not remove sh from the path in the new CMake helper.
Docs: https://github.com/conan-io/docs/pull/2055
#TAGS: slow
- [x] Refer to the issue that supports this Pull Request: closes #8597
- [x] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.
- [x] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've followed the PEP8 style guides for Python code.
- [x] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.
<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>
|
2021-03-10T09:20:11Z
|
Remove removal of "sh" from path for MinGW in CMake
New conan.tools.cmake CMake helper contains:
```python
if is_windows_mingw:
    with tools.remove_from_path("sh"):
        self._conanfile.run(command)
```
This is no longer necessary, can be replaced by ``set(DCMAKE_SH="CMAKE_SH-NOTFOUND")``. Please remove it and add a red/green test for it.
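A minimal sketch of that replacement, matching what the merged patch below ends up doing: the cache definition is appended to the `cmake` invocation instead of manipulating `PATH` (the helper name here is made up for the sketch):

```python
import platform

def mingw_extra_cmake_args(generator):
    # Tell CMake not to look for sh.exe instead of removing "sh" from PATH
    if platform.system() == "Windows" and generator == "MinGW Makefiles":
        return ' -DCMAKE_SH="CMAKE_SH-NOTFOUND"'
    return ""

# e.g.: cmake -G "MinGW Makefiles" <other args> -DCMAKE_SH="CMAKE_SH-NOTFOUND"
```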
|
[
{
"body": "New conan.tools.cmake CMake helper contains:\r\n\r\n```python\r\n if is_windows_mingw:\r\n with tools.remove_from_path(\"sh\"):\r\n self._conanfile.run(command)\r\n```\r\n\r\nThis is no longer necessary, can be replaced by ``set(DCMAKE_SH=\"CMAKE_SH-NOTFOUND\")``. Please remove it and add a red/green test for it.",
"number": 8597,
"title": "Remove removal of \"sh\" from path for MinGW in CMake"
}
] |
8946a079a4857e20ab8dc33adac8d3069e0fd3af
|
{
"head_commit": "3c0b44e6bd0be4a5c6d097698810443002fdc4e7",
"head_commit_message": "Use CMAKE_SH-NOTFOUND",
"patch_to_review": "diff --git a/conan/tools/cmake/base.py b/conan/tools/cmake/base.py\nindex 860892e704f..5a520f642f8 100644\n--- a/conan/tools/cmake/base.py\n+++ b/conan/tools/cmake/base.py\n@@ -95,6 +95,7 @@ class CMakeToolchainBase(object):\n # We are going to adjust automagically many things as requested by Conan\n # these are the things done by 'conan_basic_setup()'\n set(CMAKE_EXPORT_NO_PACKAGE_REGISTRY ON)\n+ set(CMAKE_SH \"CMAKE_SH-NOTFOUND\")\n # To support the cmake_find_package generators\n {% if cmake_module_path -%}\n set(CMAKE_MODULE_PATH {{ cmake_module_path }} ${CMAKE_MODULE_PATH})\ndiff --git a/conan/tools/cmake/cmake.py b/conan/tools/cmake/cmake.py\nindex 4139447819d..db43d324996 100644\n--- a/conan/tools/cmake/cmake.py\n+++ b/conan/tools/cmake/cmake.py\n@@ -85,11 +85,7 @@ def configure(self, source_folder=None):\n is_windows_mingw = platform.system() == \"Windows\" and self._generator == \"MinGW Makefiles\"\n self._conanfile.output.info(\"CMake command: %s\" % command)\n with chdir(build_folder):\n- if is_windows_mingw:\n- with tools.remove_from_path(\"sh\"):\n- self._conanfile.run(command)\n- else:\n- self._conanfile.run(command)\n+ self._conanfile.run(command)\n \n def _build(self, build_type=None, target=None):\n bf = self._conanfile.build_folder\ndiff --git a/conans/test/functional/toolchains/cmake/test_cmake.py b/conans/test/functional/toolchains/cmake/test_cmake.py\nindex c59da9317a1..9452b5c1b6d 100644\n--- a/conans/test/functional/toolchains/cmake/test_cmake.py\n+++ b/conans/test/functional/toolchains/cmake/test_cmake.py\n@@ -528,3 +528,53 @@ def build(self):\n client.run(\"install .\")\n client.run(\"build .\")\n self.assertIn(\"VALUE OF CONFIG STRING: my new value\", client.out)\n+\n+\[email protected](platform.system() != \"Windows\", reason=\"Tests Windows MinGW\")\n+class TestMinGW:\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ from conan.tools.cmake import CMake, CMakeToolchain\n+ class App(ConanFile):\n+ settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n+ generators = \"cmake_find_package_multi\"\n+ exports_sources = \"CMakeLists.txt\", \"main.cpp\"\n+\n+ def generate(self):\n+ tc = CMakeToolchain(self)\n+ tc.generate()\n+\n+ def build(self):\n+ cmake = CMake(self)\n+ cmake.configure()\n+ cmake.build()\n+ \"\"\")\n+ cmakelists = textwrap.dedent(\"\"\"\n+ cmake_minimum_required(VERSION 2.8)\n+ project(app)\n+ add_executable(app main.cpp)\n+ \"\"\")\n+ main_cpp = gen_function_cpp(name=\"main\")\n+\n+ @pytest.mark.tool_mingw64\n+ @pytest.mark.tool_cmake(version=\"3.15\")\n+ def test_mingw64(self):\n+ profile = textwrap.dedent(\"\"\"\n+ [settings]\n+ os=Windows\n+ arch=x86_64\n+ build_type=Release\n+ compiler=gcc\n+ compiler.version=4.9\n+ compiler.libcxx=libstdc++\n+ compiler.cppstd=98\n+ \"\"\")\n+ client = TestClient()\n+ client.save({\"conanfile.py\": self.conanfile, \"CMakeLists.txt\": self.cmakelists,\n+ \"main.cpp\": self.main_cpp, \"profile\": profile})\n+ client.run_command(\"where cmake\")\n+ print(\"CMAKE PATH: %s\" % client.out)\n+ client.run_command(\"cmake --version\")\n+ assert \"cmake version 3.15\" in client.out\n+ client.run(\"create . test/1.0@ --profile profile\", assert_error=True)\n+ print(client.out)\n"
}
|
[
{
"diff_hunk": "@@ -528,3 +528,53 @@ def build(self):\n client.run(\"install .\")\n client.run(\"build .\")\n self.assertIn(\"VALUE OF CONFIG STRING: my new value\", client.out)\n+\n+\[email protected](platform.system() != \"Windows\", reason=\"Tests Windows MinGW\")",
"line": null,
"original_line": 533,
"original_start_line": null,
"path": "conans/test/functional/toolchains/cmake/test_cmake.py",
"start_line": null,
"text": "@user1:\n``TestSubsystemsCMakeBuild`` is already building very similar tests, isn't there any overlap?\n\n@author:\nStill a draft. I made another test for better isolation of the test but I will unify them to the existing MinGW ones once I figure out the cmake behavior"
}
] |
eaf838224e6ac74fdeef406449dccc7fdbd9eaf5
|
diff --git a/conan/tools/cmake/cmake.py b/conan/tools/cmake/cmake.py
index 4139447819d..049926a0e2c 100644
--- a/conan/tools/cmake/cmake.py
+++ b/conan/tools/cmake/cmake.py
@@ -79,17 +79,15 @@ def configure(self, source_folder=None):
self._conanfile.package_folder.replace("\\", "/"),
source)
+ if platform.system() == "Windows" and self._generator == "MinGW Makefiles":
+ arg_list += ' -DCMAKE_SH="CMAKE_SH-NOTFOUND"'
+
generator = '-G "{}" '.format(self._generator) if self._generator else ""
command = "%s %s%s" % (self._cmake_program, generator, arg_list)
- is_windows_mingw = platform.system() == "Windows" and self._generator == "MinGW Makefiles"
self._conanfile.output.info("CMake command: %s" % command)
with chdir(build_folder):
- if is_windows_mingw:
- with tools.remove_from_path("sh"):
- self._conanfile.run(command)
- else:
- self._conanfile.run(command)
+ self._conanfile.run(command)
def _build(self, build_type=None, target=None):
bf = self._conanfile.build_folder
diff --git a/conans/test/functional/toolchains/cmake/test_cmake.py b/conans/test/functional/toolchains/cmake/test_cmake.py
index c59da9317a1..ff24a50c880 100644
--- a/conans/test/functional/toolchains/cmake/test_cmake.py
+++ b/conans/test/functional/toolchains/cmake/test_cmake.py
@@ -272,6 +272,7 @@ def _verify_out(marker=">>"):
@parameterized.expand([("Debug", "libstdc++", "4.9", "98", "x86_64", True),
("Release", "libstdc++", "4.9", "11", "x86_64", False)])
@pytest.mark.tool_mingw64
+ @pytest.mark.tool_cmake(version="3.15")
def test_toolchain_mingw_win(self, build_type, libcxx, version, cppstd, arch, shared):
# FIXME: The version and cppstd are wrong, toolchain doesn't enforce it
settings = {"compiler": "gcc",
@@ -287,6 +288,7 @@ def test_toolchain_mingw_win(self, build_type, libcxx, version, cppstd, arch, sh
self.assertIn("The C compiler identification is GNU", self.client.out)
self.assertIn('CMake command: cmake -G "MinGW Makefiles" '
'-DCMAKE_TOOLCHAIN_FILE="conan_toolchain.cmake"', self.client.out)
+ assert '-DCMAKE_SH="CMAKE_SH-NOTFOUND"' in self.client.out
def _verify_out(marker=">>"):
cmake_vars = {"CMAKE_GENERATOR_PLATFORM": "",
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Code Refactoring / Architectural Improvement"
}
|
|
conan-io__conan-8685@608b5b8
|
conan-io/conan
|
Python
| 8,685
|
Relative profile path should be valid for creation commands
|
Conan accepts absolute and relative paths for profiles, that's well [documented](https://docs.conan.io/en/latest/reference/profiles.html).
We know the main recommendation is an absolute path, but a relative path does not work well when it points to the same tree level and the profile name matches one in the default profile folder. If a profile with that name doesn't exist in the default folder, it works well, but when the name is duplicated, Conan prefers the default profiles folder instead.
fixes #8678
Changelog: Fix: Accept relative profile path when folder is on same tree level.
Docs: https://github.com/conan-io/docs/pull/2049
- [x] Refer to the issue that supports this Pull Request.
- [x] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.
- [x] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've followed the PEP8 style guides for Python code.
- [x] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.
<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>
|
2021-03-22T21:25:33Z
|
[question] profile search order
From some testing, it seems conan tries to resolve relative paths to profiles in the following order:
1. resolve from `.conan/profiles`
2. resolve from current directory
I'm a bit confused why this order was chosen, as it doesn't seem to be intuitive: Why try the more general solution before the more specific one?
---
I stumbled upon this when developing some profiles to later be shared and used with `conan config install`. One referenced profile from the local directory structure (path scheme: `../profiles/xyz`) caused some confusion, as there was another profile installed at `.conan/profiles/xyz` and the latter was chosen instead.
|
Hi @PengolodhGoedath
It's not clear which steps did you use to validate it. Could you please share your steps?
Indeed Conan tries `.conan/profiles` first because it's the default path. As documented, [profiles](https://docs.conan.io/en/latest/reference/profiles.html) can be used as an absolute path, and that's the recommended way. You can still set the `PROFILE_DIR` env var if you don't want to use the default profile folder.
Unfortunately, even if we find it wrong, we cannot change that behavior due to Conan 1.0 backward compatibility, but we can think about it for Conan 2.0.
@uilianries For full context, I've had the following folder structure in a repo:
```
./
.git/
system/
llvm/
conanfile.py
profiles/
msvc2019
```
(I skipped irrelevant parts for demonstration purposes. Yes, I'm using MSVC to build LLVM.) Now, with a terminal in `./system`, I tried to build `conan create llvm -pr ../profiles/msvc2019 -s build_type=Release`. That build failed, because it didn't actually take the profile from `./profiles/msvc2019`, and instead took a profile from `.conan/profiles/msvc2019`. (I mostly noticed because some `build_requires` listed in `./profiles/msvc2019` were missing.)
I expected the relative path to search from the local directory first, before looking in the global one (if at all). To be fair, the documentation you linked only handles relative paths starting with `.`, not `..`, but it really surprised me. I expected all relative paths to be handled equally.
Even worse: After renaming `.conan/profiles/msvc2019`, the command above worked. So it's not like conan is completely unable to handle relative paths starting with `..`, it just prioritizes questionably and looks in `PROFILE_DIR` first.
After some testing, I also found that `include()` inside of a profile has the same behavior.
---
I feel this is a bad user experience. It makes for very brittle automation scripts (e.g. automatically building system library packages, which is what I was trying), where they might work in one environment but fail in another. It also goes against common intuition, expecting all relative paths to work alike. It also goes against the common practice of looking in more specific (= more local) places first before looking in more general ones.
---
I'm mostly asking whether this was intended, and hopefully an explanation why it was decided this way. If it isn't intended, I might make this into a bug report, but for now I'm just assuming I'm not getting the full picture.
> Now, with a terminal in ./system, I tried to build conan create llvm -pr ../profiles/msvc2019 -s build_type=Release. That build failed, because it didn't actually take the profile from ./profiles/msvc2019, and instead took a profile from .conan/profiles/msvc2019. (I mostly noticed because some build_requires listed in ./profiles/msvc2019 were missing.)
Indeed I can reproduce your behavior running Conan 1.34.1.
It works for absolute path, but as the documentation says:
> Profiles can be located in different folders. For example, the default <userhome>/.conan/profiles, and be referenced by absolute or relative path:
> $ conan install . --profile /abs/path/to/profile # abs path
> $ conan install . --profile ./relpath/to/profile # resolved to current dir
> $ conan install . --profile profile # resolved to user/.conan/profiles/profile
It doesn't mention ../ but, obviously, it should work too, because ./msvc2019 works. I would consider it a bug. I'll investigate further. Thanks for reporting your case.
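For reference, a simplified sketch of the resolution order as it ends up after this PR (adapted from `get_profile_path` in `conans/client/profile_loader.py`, shown in the merged diff below; error handling and existence checks omitted):

```python
import os

def resolve_profile_path(profile_name, default_folder, cwd):
    # Absolute paths are used as-is
    if os.path.isabs(profile_name):
        return profile_name
    # Explicit relative paths ("./...", ".\\...", "../...") resolve against the
    # current directory, even if a profile with the same name exists in the
    # default profiles folder
    if profile_name[:2] in ("./", ".\\") or profile_name.startswith(".."):
        return os.path.abspath(os.path.join(cwd, profile_name))
    # Plain names still resolve against the default folder (e.g. ~/.conan/profiles)
    return os.path.join(default_folder, profile_name)
```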
|
[
{
"body": "From some testing, it seems conan tries to resolve relative paths to profiles in the following order:\r\n\r\n1. resolve from `.conan/profiles`\r\n2. resolve from current directory\r\n\r\nI'm a bit confused why this order was chosen, as it doesn't seem to be intuitive: Why try the more general solution before the more specific one?\r\n\r\n---\r\n\r\nI stumbled upon this when developing some profiles to later be shared and used with `conan config install`. One referenced profile from the local directory structure (path scheme: `../profiles/xyz`) caused some confusion, as there was another profile installed at `.conan/profiles/xyz` and the latter was chosen instead.",
"number": 8678,
"title": "[question] profile search order"
}
] |
f230bf104842c160be6ea377497151fe1ea64c38
|
{
"head_commit": "608b5b8c90a35b2ecba6d2df9c178bc2800db69a",
"head_commit_message": "#8678 Support relative profile folder\n\nSigned-off-by: Uilian Ries <[email protected]>",
"patch_to_review": "diff --git a/conans/client/profile_loader.py b/conans/client/profile_loader.py\nindex 9ba3a1cd295..598f4e92323 100644\n--- a/conans/client/profile_loader.py\n+++ b/conans/client/profile_loader.py\n@@ -90,7 +90,9 @@ def valid_path(_profile_path):\n if os.path.isabs(profile_name):\n return valid_path(profile_name)\n \n- if profile_name[:2] in (\"./\", \".\\\\\"): # local\n+ # relative local profile path\n+ if profile_name[:2] in (\"./\", \".\\\\\") or \\\n+ profile_name[:3] in (\"../\", \"..\\\\\"):\n profile_path = os.path.abspath(os.path.join(cwd, profile_name))\n return valid_path(profile_path)\n \ndiff --git a/conans/test/functional/configuration/profile_test.py b/conans/test/functional/configuration/profile_test.py\nindex 3b48c821760..f0d58925aa0 100644\n--- a/conans/test/functional/configuration/profile_test.py\n+++ b/conans/test/functional/configuration/profile_test.py\n@@ -501,6 +501,77 @@ def config(self):\n self.assertIn('''Requires:\n WinRequire/0.1@lasote/stable''', self.client.out)\n \n+ def test_install_profile_absolute_path(self):\n+ \"\"\" It does not matter where you are, if profile is an absolute path,\n+ Conan MUST use it instead of default profile with same name.\n+\n+ conan install foo/0.1.0@user/testing -pr /tmp/profiles/default\n+ \"\"\"\n+ project_folder = os.path.join(self.client.current_folder, \"subfolder\", \"project\")\n+ extra_profiles_folder = os.path.join(self.client.current_folder, \"subfolder\", \"profiles\")\n+ tools.mkdir(extra_profiles_folder)\n+ tools.mkdir(project_folder)\n+\n+ files = cpp_hello_conan_files(\"foobar\", \"0.1.0\", build=False)\n+ self.client.save(files, project_folder)\n+\n+ profile_name = \"baz\"\n+ create_profile(self.client.cache.profiles_path, profile_name, settings={}, env=[(\"SWORD\", \"ATLANTEAN\")])\n+ create_profile(extra_profiles_folder, profile_name, settings={}, env=[(\"BARBARIAN\", \"CONAN\")])\n+ expected_profile_path = os.path.abspath(os.path.join(extra_profiles_folder, profile_name))\n+\n+ with self.client.chdir(\"subfolder\"):\n+ self.client.run(\"create project user/testing -pr '{}'\".format(expected_profile_path))\n+ assert \"BARBARIAN=CONAN\" in self.client.out\n+ assert \"WORD=ATLANTEAN\" not in self.client.out\n+\n+ def test_install_profile_relative_dot_current_folder(self):\n+ \"\"\" It does not matter where you are, if profile is an relative path one level below,\n+ Conan MUST use it instead of default profile with same name.\n+\n+ conan install foo/0.1.0@user/testing -pr ./profiles/default\n+ \"\"\"\n+ project_folder = os.path.join(self.client.current_folder, \"subfolder\", \"project\")\n+ extra_profiles_folder = os.path.join(self.client.current_folder, \"subfolder\", \"profiles\")\n+ tools.mkdir(extra_profiles_folder)\n+ tools.mkdir(project_folder)\n+\n+ files = cpp_hello_conan_files(\"foobar\", \"0.1.0\", build=False)\n+ self.client.save(files, project_folder)\n+\n+ profile_name = \"baz\"\n+ create_profile(self.client.cache.profiles_path, profile_name, settings={}, env=[(\"SWORD\", \"ATLANTEAN\")])\n+ create_profile(extra_profiles_folder, profile_name, settings={}, env=[(\"BARBARIAN\", \"CONAN\")])\n+ expected_profile_path = \".\" + os.path.join(os.sep, \"profiles\", profile_name)\n+\n+ with self.client.chdir(\"subfolder\"):\n+ self.client.run(\"create project user/testing -pr '{}'\".format(expected_profile_path))\n+ assert \"BARBARIAN=CONAN\" in self.client.out\n+ assert \"WORD=ATLANTEAN\" not in self.client.out\n+\n+ def test_install_profile_relative_dot_dot_current_folder(self):\n+ \"\"\" 
It does not matter where you are, if profile is an relative path at same level,\n+ Conan MUST use it instead of default profile with same name.\n+\n+ conan install foo/0.1.0@user/testing -pr ../profiles/default\n+ \"\"\"\n+ project_folder = os.path.join(self.client.current_folder, \"subfolder\", \"project\")\n+ extra_profiles_folder = os.path.join(self.client.current_folder, \"subfolder\", \"profiles\")\n+ tools.mkdir(extra_profiles_folder)\n+ tools.mkdir(project_folder)\n+\n+ files = cpp_hello_conan_files(\"foobar\", \"0.1.0\", build=False)\n+ self.client.save(files, project_folder)\n+\n+ profile_name = \"baz\"\n+ create_profile(self.client.cache.profiles_path, profile_name, settings={}, env=[(\"SWORD\", \"ATLANTEAN\")])\n+ create_profile(extra_profiles_folder, profile_name, settings={}, env=[(\"BARBARIAN\", \"CONAN\")])\n+ expected_profile_path = \"..\" + os.path.join(os.sep, \"profiles\", profile_name)\n+\n+ with self.client.chdir(project_folder):\n+ self.client.run(\"create . user/testing -pr '{}'\".format(expected_profile_path))\n+ assert \"BARBARIAN=CONAN\" in self.client.out\n+ assert \"WORD=ATLANTEAN\" not in self.client.out\n \n class ProfileAggregationTest(unittest.TestCase):\n \n"
}
|
[
{
"diff_hunk": "@@ -90,7 +90,9 @@ def valid_path(_profile_path):\n if os.path.isabs(profile_name):\n return valid_path(profile_name)\n \n- if profile_name[:2] in (\"./\", \".\\\\\"): # local\n+ # relative local profile path\n+ if profile_name[:2] in (\"./\", \".\\\\\") or \\",
"line": null,
"original_line": 94,
"original_start_line": null,
"path": "conans/client/profile_loader.py",
"start_line": null,
"text": "@user1:\nwhy not use `os.path.isabs` instead of our own bicycles?\r\nthat will end up in many misdetected cases, e.g. in posix `..\\\\myfile` is valid file name\n\n@author:\nYes, makes total sense, I gonna improve it. Thanks.\n\n@author:\nDone."
}
] |
8295aefbbb762f6d59b3fa229155f43f980732d9
|
diff --git a/conans/client/profile_loader.py b/conans/client/profile_loader.py
index 9ba3a1cd295..0ce54ba67ea 100644
--- a/conans/client/profile_loader.py
+++ b/conans/client/profile_loader.py
@@ -82,17 +82,17 @@ def _apply_in_profile_text(self):
def get_profile_path(profile_name, default_folder, cwd, exists=True):
- def valid_path(_profile_path):
+ def valid_path(_profile_path, _profile_name=None):
if exists and not os.path.isfile(_profile_path):
- raise ConanException("Profile not found: %s" % _profile_path)
+ raise ConanException("Profile not found: {}".format(_profile_name or _profile_path))
return _profile_path
if os.path.isabs(profile_name):
return valid_path(profile_name)
- if profile_name[:2] in ("./", ".\\"): # local
+ if profile_name[:2] in ("./", ".\\") or profile_name.startswith(".."): # local
profile_path = os.path.abspath(os.path.join(cwd, profile_name))
- return valid_path(profile_path)
+ return valid_path(profile_path, profile_name)
if not os.path.exists(default_folder):
mkdir(default_folder)
diff --git a/conans/test/functional/configuration/profile_test.py b/conans/test/functional/configuration/profile_test.py
index 3b48c821760..13c94caacec 100644
--- a/conans/test/functional/configuration/profile_test.py
+++ b/conans/test/functional/configuration/profile_test.py
@@ -635,3 +635,56 @@ def test_profile_crazy_inheritance(self):
compiler.runtime=MD
compiler.version=15
os=Windows"""), self.client.out)
+
+
+def test_profile_from_cache_path():
+ """ When passing relative folder/profile as profile file, it MUST be used
+ conan install . -pr=profiles/default
+ /tmp/profiles/default MUST be consumed as target profile
+ https://github.com/conan-io/conan/pull/8685
+ """
+ client = TestClient()
+ path = os.path.join(client.cache.profiles_path, "android", "profile1")
+ save(path, "[settings]\nos=Android")
+ client.save({"conanfile.txt": ""})
+ client.run("install . -pr=android/profile1")
+ assert "os=Android" in client.out
+
+
+def test_profile_from_relative_pardir():
+ """ When passing relative ../path as profile file, it MUST be used
+ conan install . -pr=../profiles/default
+ /tmp/profiles/default MUST be consumed as target profile
+ """
+ client = TestClient()
+ client.save({"profiles/default": "[settings]\nos=AIX",
+ "current/conanfile.txt": ""})
+ with client.chdir("current"):
+ client.run("install . -pr=../profiles/default")
+ assert "os=AIX" in client.out
+
+
+def test_profile_from_relative_dotdir():
+ """ When passing relative ./path as profile file, it MUST be used
+ conan install . -pr=./profiles/default
+ /tmp/profiles/default MUST be consumed as target profile
+ """
+ client = TestClient()
+ client.save({os.path.join("profiles", "default"): "[settings]\nos=AIX",
+ os.path.join("current", "conanfile.txt"): ""})
+ client.run("install ./current -pr=./profiles/default")
+ assert "os=AIX" in client.out
+
+
+def test_profile_from_temp_absolute_path():
+ """ When passing absolute path as profile file, it MUST be used
+ conan install . -pr=/tmp/profiles/default
+ /tmp/profiles/default MUST be consumed as target profile
+ """
+ client = TestClient()
+ client.save({"profiles/default": "[settings]\nos=AIX",
+ "current/conanfile.txt": ""})
+ profile_path = os.path.join(client.current_folder, "profiles", "default")
+ recipe_path = os.path.join(client.current_folder, "current", "conanfile.txt")
+ client.run('install "{}" -pr="{}"'.format(recipe_path, profile_path))
+ assert "os=AIX" in client.out
diff --git a/conans/test/unittests/client/profile_loader/profile_loader_test.py b/conans/test/unittests/client/profile_loader/profile_loader_test.py
index 8d84a09e3f2..3a03fca89b1 100644
--- a/conans/test/unittests/client/profile_loader/profile_loader_test.py
+++ b/conans/test/unittests/client/profile_loader/profile_loader_test.py
@@ -355,3 +355,92 @@ def test_include_order(self):
self.assertEqual(variables, {"MYVAR": "fromProfile2",
"PROFILE_DIR": tmp.replace('\\', '/')})
self.assertEqual(profile.settings["os"], "fromProfile2")
+
+
+def test_profile_load_absolute_path():
+ """ When passing absolute path as profile file, it MUST be used.
+ read_profile(/abs/path/profile, /abs, /.conan/profiles)
+ /abs/path/profile MUST be consumed as target profile
+ """
+ profile_name = "default"
+ default_profile_folder = os.path.join(temp_folder(), "profiles")
+ default_profile_path = os.path.join(default_profile_folder, profile_name)
+ current_profile_folder = temp_folder()
+ current_profile_path = os.path.join(current_profile_folder, profile_name)
+ default_profile_content = textwrap.dedent("""
+ [env]
+ BORSCHT=BEET SOUP
+ """)
+ current_profile_content = default_profile_content.replace("BEET", "RUSSIAN")
+
+ save(default_profile_path, default_profile_content)
+ save(current_profile_path, current_profile_content)
+
+ profile, variables = read_profile(current_profile_path, current_profile_folder,
+ default_profile_folder)
+ assert ({"BORSCHT": "RUSSIAN SOUP"}, {}) == profile.env_values.env_dicts("")
+ assert current_profile_folder.replace("\\", "/") == variables["PROFILE_DIR"]
+
+
+def test_profile_load_relative_path_dot():
+ """ When passing relative ./path as profile file, it MUST be used
+ read_profile(./profiles/profile, /tmp, /.conan/profiles)
+ /tmp/profiles/profile MUST be consumed as target profile
+ """
+ profile_name = "default"
+ default_profile_folder = os.path.join(temp_folder(), "profiles")
+ default_profile_path = os.path.join(default_profile_folder, profile_name)
+ current_profile_folder = temp_folder()
+ current_profile_path = os.path.join(current_profile_folder, profile_name)
+ default_profile_content = textwrap.dedent("""
+ [env]
+ BORSCHT=BEET SOUP
+ """)
+ current_profile_content = default_profile_content.replace("BEET", "RUSSIAN")
+ relative_current_profile_path = "." + os.path.join(os.sep,
+ os.path.basename(current_profile_folder),
+ profile_name)
+
+ save(default_profile_path, default_profile_content)
+ save(current_profile_path, current_profile_content)
+
+ profile, variables = read_profile(relative_current_profile_path,
+ os.path.dirname(current_profile_folder),
+ default_profile_folder)
+ assert ({"BORSCHT": "RUSSIAN SOUP"}, {}) == profile.env_values.env_dicts("")
+ assert current_profile_folder.replace("\\", "/") == variables["PROFILE_DIR"]
+
+
+def test_profile_load_relative_path_pardir():
+ """ When passing relative ../path as profile file, it MUST be used
+ read_profile(../profiles/profile, /tmp/current, /.conan/profiles)
+ /tmp/profiles/profile MUST be consumed as target profile
+ """
+ profile_name = "default"
+ default_profile_folder = os.path.join(temp_folder(), "profiles")
+ os.mkdir(default_profile_folder)
+ default_profile_path = os.path.join(default_profile_folder, profile_name)
+
+ current_temp_folder = temp_folder()
+ current_profile_folder = os.path.join(current_temp_folder, "profiles")
+ current_running_folder = os.path.join(current_temp_folder, "current")
+
+ os.mkdir(current_profile_folder)
+ os.mkdir(current_running_folder)
+
+ current_profile_path = os.path.join(current_profile_folder, profile_name)
+ default_profile_content = textwrap.dedent("""
+ [env]
+ BORSCHT=BEET SOUP
+ """)
+ current_profile_content = default_profile_content.replace("BEET", "RUSSIAN")
+ relative_current_profile_path = os.pardir + os.path.join(os.sep, "profiles", profile_name)
+
+ save(default_profile_path, default_profile_content)
+ save(current_profile_path, current_profile_content)
+
+ profile, variables = read_profile(relative_current_profile_path,
+ current_running_folder,
+ default_profile_folder)
+ assert ({"BORSCHT": "RUSSIAN SOUP"}, {}) == profile.env_values.env_dicts("")
+ assert current_profile_folder.replace("\\", "/") == variables["PROFILE_DIR"]
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-8408@7e9a477
|
conan-io/conan
|
Python
| 8,408
|
Fix exit code for conan_build_info
|
Changelog: Bugfix: Fix exit code for `conan_build_info`.
Docs: omit
Fixes: https://github.com/conan-io/conan/issues/8395
- [X] Refer to the issue that supports this Pull Request.
- [X] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.
- [X] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [X] I've followed the PEP8 style guides for Python code.
- [ ] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.
<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>
|
2021-01-28T18:14:10Z
|
[bug] conan_build_info does not raise in event of an error
Failure to publish buildinfo in one of our CI jobs went undetected because an error occurred but was not raised.
### Environment Details (include every applicable attribute)
* Operating System+version: Windows 10 64-bit
* Compiler+version: n/a
* Conan version: 1.33.0
* Python version: 3.7.3
### Steps to reproduce (Include if Applicable)
Try to publish erroneous buildinfo to Artifactory. In my case the 'properties' section contained an à character coming from the Gerrit-injected environment variable for the commit message (which I wasn't aware was unsupported).
### Logs (Executed commands with output) (Include/Attach if Applicable)
```
conan_build_info --v2 publish buildinfo.json --url=<artifactory url> --user=**** --password=****
ERROR: 'latin-1' codec can't encode character '\u201a' in position 3538: ordinal not in range(256)
```
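Such a failure is only detectable by a CI job through the process exit status. Below is a minimal sketch of a CI-side guard, assuming the tool returns a non-zero code on error (the file name, URL and credentials are placeholders, not taken from this report):

```python
import subprocess
import sys

# Hypothetical CI guard: relies on conan_build_info returning a non-zero
# exit code when publishing fails. Every argument below is a placeholder.
result = subprocess.run([
    "conan_build_info", "--v2", "publish", "buildinfo.json",
    "--url", "https://artifactory.example.com/artifactory",
    "--user", "ci-user", "--password", "ci-password",
])
if result.returncode != 0:
    sys.exit("Publishing build info failed, aborting the pipeline")
```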
|
Hi @GordonJess,
Thanks a lot for reporting; you are right, the exit code of `conan_build_info` was not correctly applied.
Will be fixed as soon as possible.
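For reference, the intended behavior is simply to report the error and propagate it as a non-zero process exit status. A minimal sketch of that pattern (simplified, not the actual `conans/build_info/command.py` code, and using `sys.exit()` rather than the `exit()` builtin) could look like:

```python
import sys

def publish_or_fail(publish):
    # Simplified error handling: report the problem and turn it into a
    # non-zero exit code so calling CI jobs can detect the failure.
    try:
        publish()
    except Exception as exc:
        print("ERROR: {}".format(exc), file=sys.stderr)
        sys.exit(1)
```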
|
[
{
"body": "Failure to publish buildinfo in one of our CI jobs went undetected because an error occurred but was not raised.\r\n\r\n### Environment Details (include every applicable attribute)\r\n * Operating System+version: Windows 10 64-bit\r\n * Compiler+version: n/a\r\n * Conan version: 1.33.0\r\n * Python version: 3.7.3\r\n\r\n### Steps to reproduce (Include if Applicable)\r\nTry to publish erroneous buildinfo to artifactory. In my case the 'properties' section contained an à character from the gerrit-injected env var for commit message (which I wasn't aware is not supported).\r\n\r\n### Logs (Executed commands with output) (Include/Attach if Applicable)\r\n```\r\nconan_build_info --v2 publish buildinfo.json --url=<artifactory url> --user=**** --password=**** \r\n\u001b[1m\u001b[31mERROR: 'latin-1' codec can't encode character '\\u201a' in position 3538: ordinal not in range(256)\u001b[0m\r\n```\r\n\r\n<!--\r\n Your log content should be related to the bug description, it can be:\r\n - Conan command output\r\n - Server output (Artifactory, conan_server)\r\n-->\r\n",
"number": 8395,
"title": "[bug] conan_build_info does not raise in event of an error"
}
] |
fe8ec5185b0fb19951a8aaa0c110a2bcff4120e4
|
{
"head_commit": "7e9a477495048d1d28853cf68520c06081e2566e",
"head_commit_message": "exit 1 when error",
"patch_to_review": "diff --git a/conans/build_info/command.py b/conans/build_info/command.py\nindex 20dcc74598d..5bc40ead8cd 100644\n--- a/conans/build_info/command.py\n+++ b/conans/build_info/command.py\n@@ -121,10 +121,9 @@ def check_credential_arguments():\n check_credential_arguments()\n publish_build_info(args.buildinfo, args.url, args.user, args.password,\n args.apikey)\n- except ConanException as exc:\n- output.error(exc)\n except Exception as exc:\n output.error(exc)\n+ exit(1)\n \n \n if __name__ == \"__main__\":\n"
}
|
[
{
"diff_hunk": "@@ -121,10 +121,9 @@ def check_credential_arguments():\n check_credential_arguments()\n publish_build_info(args.buildinfo, args.url, args.user, args.password,\n args.apikey)\n- except ConanException as exc:\n- output.error(exc)\n except Exception as exc:\n output.error(exc)\n+ exit(1)",
"line": null,
"original_line": 126,
"original_start_line": null,
"path": "conans/build_info/command.py",
"start_line": null,
"text": "@user1:\nUse sys.exit() instead"
}
] |
dda0eb8ee619daaa696873873aea74879421fcd9
|
diff --git a/conans/build_info/command.py b/conans/build_info/command.py
index 20dcc74598d..f49ded092bc 100644
--- a/conans/build_info/command.py
+++ b/conans/build_info/command.py
@@ -34,10 +34,10 @@ def runv1():
if not os.path.exists(args.trace_path):
output.error("Conan trace log not found! '%s'" % args.trace_path)
- exit(1)
+ sys.exit(1)
if args.output and not os.path.exists(os.path.dirname(args.output)):
output.error("Output file directory not found! '%s'" % args.trace_path)
- exit(1)
+ sys.exit(1)
info = get_build_info(args.trace_path)
the_json = json.dumps(info.serialize())
@@ -47,7 +47,7 @@ def runv1():
output.write(the_json)
except Exception as exc:
output.error(exc)
- exit(1)
+ sys.exit(1)
except SystemExit:
output.writeln("")
output.warn("Use 'conan_build_info --v2' to see the usage of the new recommended way to "
@@ -121,10 +121,9 @@ def check_credential_arguments():
check_credential_arguments()
publish_build_info(args.buildinfo, args.url, args.user, args.password,
args.apikey)
- except ConanException as exc:
- output.error(exc)
except Exception as exc:
output.error(exc)
+ sys.exit(1)
if __name__ == "__main__":
diff --git a/conans/test/functional/conan_build_info/test_build_info_creation.py b/conans/test/functional/conan_build_info/test_build_info_creation.py
index a724ab93cd9..e1631cf9632 100644
--- a/conans/test/functional/conan_build_info/test_build_info_creation.py
+++ b/conans/test/functional/conan_build_info/test_build_info_creation.py
@@ -52,7 +52,7 @@ def mock_response(url, data=None, **kwargs):
kwargs["headers"]["X-JFrog-Art-Api"] != "apikey"):
mock_resp.status_code = 401
buildinfo = json.load(data)
- if not buildinfo["name"] == "MyBuildInfo" or not buildinfo["number"] == "42":
+ if not buildinfo["name"] == "MyBuildName" or not buildinfo["number"] == "42":
mock_resp.status_code = 400
mock_resp.content = None
return mock_resp
@@ -271,6 +271,7 @@ def test_build_info_old_lockfile_version(self, mock_cache, user_home_mock):
result = StringIO()
sys.stderr = result
run()
+ except SystemExit:
result = result.getvalue()
self.assertIn("This lockfile was created with an incompatible version of Conan", result)
finally:
|
{
"difficulty": "low",
"estimated_review_effort": 2,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-8186@27077e9
|
conan-io/conan
|
Python
| 8,186
|
add build_requires to conandeps.props file in MSBuildDeps
|
Changelog: Fix: Include ``build_requires`` in the global ``conandeps.props`` file generated by MSBuildDeps.
Changelog: Fix: Change `MSBuildDeps` file ``conan_deps.props`` to ``conandeps.props`` to avoid collision with a package named "deps".
Docs: Omit
Close https://github.com/conan-io/conan/issues/8170
|
2020-12-10T11:54:48Z
|
[bug] MSBuildDeps generator: build_requires not added to conan_deps.props
Build requirements are not imported into the generated `conan_deps.props` file.
### Environment Details (include every applicable attribute)
* Operating System+version:
* Compiler+version:
* Conan version: 1.32.0
* Python version:
### Steps to reproduce (Include if Applicable)
1. add a library package (e.g. `foo`) to `build_requires`
2. execute `conan install` using the `MSBuildDeps`/`msbuild` generator
- `conan_foo.props` is generated
- generated `conan_deps.props` does not contain an import of `conan_foo.props`
Background:
We have a test helper library as a dependency that is only used in the tests. Previously we added it as a private requirement, which showed up in `conan_foo.props`. From our current understanding of Conan, however, it is better expressed as a build requirement.
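A minimal reproduction, assuming a `tool/1.0` package already exists in the local cache (the name is illustrative), is a consumer recipe like the following:

```python
# conanfile.py of the consumer; "tool/1.0" is an illustrative package name
# that must already exist in the cache for this to reproduce the issue.
from conans import ConanFile

class ConsumerConan(ConanFile):
    settings = "os", "compiler", "build_type", "arch"
    build_requires = "tool/1.0"
    generators = "MSBuildDeps"
```

Running `conan install .` with this recipe generates `conan_tool.props`, but before the fix the aggregated props file does not import it; with the fix, the aggregated file (renamed to `conandeps.props`) imports it as well.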
|
Hi @lieser
This could be a bug, but please take into account that the new ``MSBuildDeps`` generator is experimental and might change its behavior; one of the things that might change is the management of ``build_requires``. As documented and explained in the trainings, etc.:
> build_requires are designed for packaging tools, utilities that only run at build time but are not part of the final binary code. Anything that is linked into consumer packages, like all types of libraries (header only, static, shared), is most likely not a build_requires but a regular requires. The only exception would be testing libraries and frameworks, as long as the tests are not included in the final package.
So unless you are using it for things like the ``cmake`` tool or the ``gtest`` testing library, I would suggest reconsidering your approach. You can see how this could change by using the new ``profile:build`` and the build context: when using it, build-requires no longer propagate things like ``includedirs``, ``libdirs``, ``libs``, etc., because those should be regular ``requires`` instead.
We will try to have a look and maybe fix it for next 1.33, but please keep an eye on how it evolves regarding ``build_requires``. Thanks!
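To illustrate that guidance, here is a sketch of a recipe (package names and versions are only examples) that keeps tools and test-only frameworks as `build_requires`, while libraries linked into the final binaries stay as regular `requires`:

```python
# Illustrative recipe: names and versions are examples only.
from conans import ConanFile

class MyLibConan(ConanFile):
    name = "mylib"
    version = "1.0"
    settings = "os", "compiler", "build_type", "arch"
    requires = "zlib/1.2.11"  # linked into the final binaries -> regular require

    def build_requirements(self):
        self.build_requires("cmake/3.19.1")  # build-time tool only
        self.build_requires("gtest/1.10.0")  # test framework, not packaged
```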
We are aware that `MSBuildDeps` is experimental, but thanks for reminding us.
Note that this is not an urgent/important bugfix for us (easy workaround available by simply adding another import).
I described our usage of `build_requires` more closely in #8176. It would be nice if you could take a look at it, as it currently still causes problems, and it would be good to know whether we are perhaps just misusing it.
Hi @lieser I am proposing to add it in #8186
|
[
{
"body": "Build requirements are not added to the generated `conan_foo.props` file.\r\n\r\n### Environment Details (include every applicable attribute)\r\n * Operating System+version:\r\n * Compiler+version:\r\n * Conan version: 1.32.0\r\n * Python version:\r\n\r\n### Steps to reproduce (Include if Applicable)\r\n\r\n1. add a library package (e.g. `foo`) to `build_requires`\r\n2. execute `conan install` using the `MSBuildDeps`/`msbuild` generator\r\n\r\n- `conan_foo.props` is generated\r\n- generated `conan_deps.props` does not contain an import of `conan_foo.props`\r\n\r\nBackground:\r\nWe have a test helper library as a dependency that is only used in the test. Previously we added it as a private requirement, which showed up in the `conan_foo.props`. From our current understanding of Conan, it however is better written as a build requirements.\r\n",
"number": 8170,
"title": "[bug] MSBuildDeps generator: build_requires not added to conan_deps.props"
}
] |
da41e3398d46ffc3548166526973d6805ffe0028
|
{
"head_commit": "27077e9b71424a96dc067cc4728ffc75225356d2",
"head_commit_message": "using deps_cpp_info.direct_deps instead of conanfile attrs",
"patch_to_review": "diff --git a/conan/tools/microsoft/msbuilddeps.py b/conan/tools/microsoft/msbuilddeps.py\nindex 2853bd973c8..cea59cf308b 100644\n--- a/conan/tools/microsoft/msbuilddeps.py\n+++ b/conan/tools/microsoft/msbuilddeps.py\n@@ -217,11 +217,12 @@ def _content(self):\n if not self._conanfile.settings.get_safe(\"build_type\"):\n raise ConanException(\"The 'msbuild' generator requires a 'build_type' setting value\")\n result = {}\n- general_name = \"conan_deps.props\"\n+ general_name = \"conandeps.props\"\n conf_name = self._config_filename()\n condition = self._condition()\n- public_deps = self._conanfile.requires.keys()\n- result[general_name] = self._deps_props(general_name, public_deps)\n+ # Include all direct build_requires for both build & host context. This might change\n+ direct_deps = self._conanfile.deps_cpp_info.direct_deps\n+ result[general_name] = self._deps_props(general_name, direct_deps)\n for dep_name, cpp_info in self._conanfile.deps_cpp_info.dependencies:\n # One file per configuration, with just the variables\n vars_props_name = \"conan_%s%s.props\" % (dep_name, conf_name)\ndiff --git a/conans/client/installer.py b/conans/client/installer.py\nindex 300894a0c0d..9f662e4bfa3 100644\n--- a/conans/client/installer.py\n+++ b/conans/client/installer.py\n@@ -588,6 +588,7 @@ def _propagate_info(self, node, using_build_profile):\n env_info.PATH.extend(dep_cpp_info.bin_paths)\n conan_file.deps_env_info.update(env_info, n.ref.name)\n \n+ conan_file.deps_cpp_info.direct_deps = [n.name for n in node.neighbors()]\n # Update the info but filtering the package values that not apply to the subtree\n # of this current node and its dependencies.\n subtree_libnames = [node.ref.name for node in node_order]\ndiff --git a/conans/test/functional/generators/msbuild_test.py b/conans/test/functional/generators/msbuild_test.py\nindex ac03ea27b60..0408c26ba1e 100644\n--- a/conans/test/functional/generators/msbuild_test.py\n+++ b/conans/test/functional/generators/msbuild_test.py\n@@ -573,10 +573,10 @@ def build(self):\n myproject_cpp = gen_function_cpp(name=\"main\", msg=\"MyProject\")\n files = {\"MyProject.sln\": sln_file,\n \"MyProject/MyProject.vcxproj\": myproject_vcxproj.replace(\"conan_Hello3.props\",\n- \"conan_deps.props\"),\n+ \"conandeps.props\"),\n \"MyProject/MyProject.cpp\": myproject_cpp,\n \"MyApp/MyApp.vcxproj\": myapp_vcxproj.replace(\"conan_Hello1.props\",\n- \"conan_deps.props\"),\n+ \"conandeps.props\"),\n \"MyApp/MyApp.cpp\": myapp_cpp,\n \"conanfile.py\": conanfile}\n \n@@ -585,3 +585,27 @@ def build(self):\n self.assertIn(\"'msbuild' has been deprecated and moved.\", client.out)\n client.run(\"build .\")\n self.assertNotIn(\"warning MSB4011\", client.out)\n+\n+ def test_install_build_requires(self):\n+ # https://github.com/conan-io/conan/issues/8170\n+ client = TestClient()\n+ client.save({\"conanfile.py\": GenConanfile()})\n+ client.run(\"create . tool/1.0@\")\n+\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile, load\n+ class HelloConan(ConanFile):\n+ settings = \"os\", \"build_type\", \"compiler\", \"arch\"\n+ build_requires = \"tool/1.0\"\n+ generators = \"MSBuildDeps\"\n+ def build(self):\n+ deps = load(\"conandeps.props\")\n+ assert \"conan_tool.props\" in deps\n+ self.output.info(\"Conan_tools.props in deps\")\n+ \"\"\")\n+ client.save({\"conanfile.py\": conanfile})\n+ client.run(\"install .\")\n+ deps = client.load(\"conandeps.props\")\n+ self.assertIn(\"conan_tool.props\", deps)\n+ client.run(\"create . 
pkg/0.1@\")\n+ self.assertIn(\"Conan_tools.props in deps\", client.out)\n"
}
|
[
{
"diff_hunk": "@@ -217,11 +217,12 @@ def _content(self):\n if not self._conanfile.settings.get_safe(\"build_type\"):\n raise ConanException(\"The 'msbuild' generator requires a 'build_type' setting value\")\n result = {}\n- general_name = \"conan_deps.props\"\n+ general_name = \"conandeps.props\"\n conf_name = self._config_filename()\n condition = self._condition()\n- public_deps = self._conanfile.requires.keys()\n- result[general_name] = self._deps_props(general_name, public_deps)\n+ # Include all direct build_requires for both build & host context. This might change",
"line": null,
"original_line": 223,
"original_start_line": null,
"path": "conan/tools/microsoft/msbuilddeps.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n # Include all direct build_requires for host context\r\n```\r\n\r\n\n\n@author:\nDone, and fixed also, filtering CONTEXT_HOST in the code, not only the comment"
}
] |
8e7f8b5a4191a7af93a5238f4995bcf1ca7ed9ec
|
diff --git a/conan/tools/microsoft/msbuilddeps.py b/conan/tools/microsoft/msbuilddeps.py
index 2853bd973c8..98823066660 100644
--- a/conan/tools/microsoft/msbuilddeps.py
+++ b/conan/tools/microsoft/msbuilddeps.py
@@ -217,11 +217,12 @@ def _content(self):
if not self._conanfile.settings.get_safe("build_type"):
raise ConanException("The 'msbuild' generator requires a 'build_type' setting value")
result = {}
- general_name = "conan_deps.props"
+ general_name = "conandeps.props"
conf_name = self._config_filename()
condition = self._condition()
- public_deps = self._conanfile.requires.keys()
- result[general_name] = self._deps_props(general_name, public_deps)
+ # Include all direct build_requires for host context. This might change
+ direct_deps = self._conanfile.deps_cpp_info.direct_host_deps
+ result[general_name] = self._deps_props(general_name, direct_deps)
for dep_name, cpp_info in self._conanfile.deps_cpp_info.dependencies:
# One file per configuration, with just the variables
vars_props_name = "conan_%s%s.props" % (dep_name, conf_name)
diff --git a/conans/client/installer.py b/conans/client/installer.py
index 300894a0c0d..46c01d0965b 100644
--- a/conans/client/installer.py
+++ b/conans/client/installer.py
@@ -588,6 +588,8 @@ def _propagate_info(self, node, using_build_profile):
env_info.PATH.extend(dep_cpp_info.bin_paths)
conan_file.deps_env_info.update(env_info, n.ref.name)
+ conan_file.deps_cpp_info.direct_host_deps = [n.name for n in node.neighbors()
+ if n.context == CONTEXT_HOST]
# Update the info but filtering the package values that not apply to the subtree
# of this current node and its dependencies.
subtree_libnames = [node.ref.name for node in node_order]
diff --git a/conans/test/functional/generators/msbuild_test.py b/conans/test/functional/generators/msbuild_test.py
index ac03ea27b60..0408c26ba1e 100644
--- a/conans/test/functional/generators/msbuild_test.py
+++ b/conans/test/functional/generators/msbuild_test.py
@@ -573,10 +573,10 @@ def build(self):
myproject_cpp = gen_function_cpp(name="main", msg="MyProject")
files = {"MyProject.sln": sln_file,
"MyProject/MyProject.vcxproj": myproject_vcxproj.replace("conan_Hello3.props",
- "conan_deps.props"),
+ "conandeps.props"),
"MyProject/MyProject.cpp": myproject_cpp,
"MyApp/MyApp.vcxproj": myapp_vcxproj.replace("conan_Hello1.props",
- "conan_deps.props"),
+ "conandeps.props"),
"MyApp/MyApp.cpp": myapp_cpp,
"conanfile.py": conanfile}
@@ -585,3 +585,27 @@ def build(self):
self.assertIn("'msbuild' has been deprecated and moved.", client.out)
client.run("build .")
self.assertNotIn("warning MSB4011", client.out)
+
+ def test_install_build_requires(self):
+ # https://github.com/conan-io/conan/issues/8170
+ client = TestClient()
+ client.save({"conanfile.py": GenConanfile()})
+ client.run("create . tool/1.0@")
+
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile, load
+ class HelloConan(ConanFile):
+ settings = "os", "build_type", "compiler", "arch"
+ build_requires = "tool/1.0"
+ generators = "MSBuildDeps"
+ def build(self):
+ deps = load("conandeps.props")
+ assert "conan_tool.props" in deps
+ self.output.info("Conan_tools.props in deps")
+ """)
+ client.save({"conanfile.py": conanfile})
+ client.run("install .")
+ deps = client.load("conandeps.props")
+ self.assertIn("conan_tool.props", deps)
+ client.run("create . pkg/0.1@")
+ self.assertIn("Conan_tools.props in deps", client.out)
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-8053@c686eaf
|
conan-io/conan
|
Python
| 8,053
|
Feature/validate
|
Changelog: Feature: Introduce a new `BINARY_INVALID` mode for more flexible definition and management of invalid configurations.
Docs: https://github.com/conan-io/docs/pull/1947
Close https://github.com/conan-io/conan/issues/7591
This PR satisfies 2 current proposals/requests:
- The need to specify "impossible" binaries after the graph is fully computed, options are propagated down to consumers, and the real requirements are also updated (not the declared ones, but the actual computed ones; there can be overrides). Feature request in https://github.com/conan-io/conan/issues/7591
- The need to specify that something cannot be built, but it is possible to have a compatible package defined in ``package_id()``, ``compatible_packages``. PR https://github.com/conan-io/conan/pull/6952
The current proposal provides a relatively straightforward solution for both by adding a new explicit model, BINARY_INVALID, in the same way we already have BINARY_UNKNOWN. This model will also be good for the future installation of ``build_requires`` that do not exist in the host system, because they only exist in the build context, without needing to explicitly define the build profile.
It also supports asking "what would need to be built" with ``conan info``: it can explicitly return ID: INVALID without raising an exception, so it can easily be used in ConanCenter, for example.
There are still some corner cases to be managed (such as the processing after a BINARY_UNKNOWN, if that case can happen, or how BINARY_INVALID can produce a BINARY_UNKNOWN if required downstream), but I preferred to propose the core concept before completing it.
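As a sketch of the `conan info` use case (based on the tests added in this PR; the reference, settings and JSON key names are assumptions of this example), a CI script could detect invalid binaries from the JSON output without handling an exception:

```python
import json
import subprocess

# Hypothetical CI check; the PR tests verify the "binary"/"id" fields of the
# JSON output, the rest of the key names are assumed here.
subprocess.run(["conan", "info", "pkg/0.1@", "-s", "os=Windows",
                "--json=info.json"], check=True)
with open("info.json") as f:
    nodes = json.load(f)
invalid = [n.get("reference", "?") for n in nodes
           if n.get("binary") == "Invalid" or n.get("id") == "INVALID"]
if invalid:
    print("Invalid configurations (cannot exist for these settings):", invalid)
```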
|
2020-11-12T22:30:52Z
|
[feature] Method to be run after the graph is resolved
There are some checks that are needed and cannot be run in the `configure()` method:
* values from options of the requirements
* actual version of the requirements (if using version-ranges or overriding).
**Motivation**
Right now, these checks are moved to the `build()` method, which is the only (the first) method where all these values are available. The recipe can check option values, and it can use `deps_cpp_info["xxx"].version` to check the actual version of the requirements.
The big issue is that this method is executed only if the package is going to be built, but if the package is already available we won't notice the error until it is too late (runtime error). Depending on the `package_id_mode`, the consumer will get a different package-id **or not** for the given scenarios above. So in the general use-case, this new method is something needed.
**Proposal**
Let's add a new method to run these checks:
```python
class Recipe(ConanFile):
    def validate(self):
        if self.options["req1"].shared == True:
            raise ConanInvalidConfiguration("req1 needs to be statically linked")

        if self.<requirements>["req1"].version >= 7.0:
            raise ConanInvalidConfiguration(f"{self.name} requires 'req1<7.0'")
```
This method will run after the graph is built, for the same Conan commands that currently run `configure()`.
|
For hooks I think we can use `pre_build`, but for regular validation this new feature looks interesting.
It would be nice if Conan was able to report a summary of options/requirements violations, instead of just raising at the first error. Otherwise, consumers may fall into a "die and retry" or "read your 100 conanfiles" loop to figure out what to do to solve those issues.
> It would be nice if Conan was able to report a summary of options/requirements violations, instead of just raising at the first error. Otherwise, consumers may fall into a "die and retry" or "read your 100 conanfiles" loop to figure out what to do to solve those issues.
Yes, this is something to take into account for the graph model in Conan 2.0, but I want to limit the expectations a little bit: requirements are conditional to options and settings, so as soon as we find a conflict we cannot go ahead in that branch of the graph (it might not be the right branch once the conflict is solved), but I agree we can inspect other branches and report errors that are not affected by existing ones. 👍
|
[
{
"body": "There are some checks that are needed and cannot be run in the `configure()` method:\r\n * values from options of the requirements\r\n * actual version of the requirements (if using version-ranges or overriding).\r\n\r\n**Motivation**\r\n\r\nRight now, these checks are moved to the `build()` method, which is the only (the first) method were all these values are available. Recipe can check for options values and it can use `deps_cpp_info[\"xxx\"].version` to check the actual version of the requirements.\r\n\r\nThe big issue is that this method is executed only if the package is going to be built, but if the package is already available we won't notice the error until it is too late (runtime error). Depending on the `package_id_mode`, the consumer will get a different package-id **or not** for the given scenarios above. So in the general use-case, this new method is something needed.\r\n\r\n**Proposal**\r\n\r\nLet's add a new method to run these checks:\r\n\r\n```python\r\n\r\nclass Recipe(ConanFile):\r\n def validate(self):\r\n if self.options[\"req1\"].shared == True:\r\n raise ConanInvalidConfiguration(\"req1 needs to be statically linked\")\r\n\r\n if self.<requirements>[\"req1\"].version >= 7.0:\r\n raise ConanInvalidConfiguration(f\"{self.name} requires 'req1<7.0'\") \r\n```\r\n\r\nThis method will run after the graph is built for the same Conan commands `configure()` is running right now.",
"number": 7591,
"title": "[feature] Method to be run after the graph is resolved"
}
] |
e15dc7902fbbeaf469798a3b9948ead1ecfc8e3c
|
{
"head_commit": "c686eafa8f63f4310c8bea0a4443d3a91f787241",
"head_commit_message": "add conan info --json checks",
"patch_to_review": "diff --git a/conans/client/graph/graph.py b/conans/client/graph/graph.py\nindex 8c842c974c9..c49bb8e3851 100644\n--- a/conans/client/graph/graph.py\n+++ b/conans/client/graph/graph.py\n@@ -21,6 +21,7 @@\n BINARY_SKIP = \"Skip\"\n BINARY_EDITABLE = \"Editable\"\n BINARY_UNKNOWN = \"Unknown\"\n+BINARY_INVALID = \"Invalid\"\n \n CONTEXT_HOST = \"host\"\n CONTEXT_BUILD = \"build\"\ndiff --git a/conans/client/graph/graph_binaries.py b/conans/client/graph/graph_binaries.py\nindex 6b5e5c2fd8f..b9b5d7290f6 100644\n--- a/conans/client/graph/graph_binaries.py\n+++ b/conans/client/graph/graph_binaries.py\n@@ -1,10 +1,11 @@\n from conans.client.graph.build_mode import BuildMode\n from conans.client.graph.graph import (BINARY_BUILD, BINARY_CACHE, BINARY_DOWNLOAD, BINARY_MISSING,\n BINARY_UPDATE, RECIPE_EDITABLE, BINARY_EDITABLE,\n- RECIPE_CONSUMER, RECIPE_VIRTUAL, BINARY_SKIP, BINARY_UNKNOWN)\n+ RECIPE_CONSUMER, RECIPE_VIRTUAL, BINARY_SKIP, BINARY_UNKNOWN,\n+ BINARY_INVALID)\n from conans.errors import NoRemoteAvailable, NotFoundException, conanfile_exception_formatter, \\\n- ConanException\n-from conans.model.info import ConanInfo, PACKAGE_ID_UNKNOWN\n+ ConanException, ConanInvalidConfiguration\n+from conans.model.info import ConanInfo, PACKAGE_ID_UNKNOWN, PACKAGE_ID_INVALID\n from conans.model.manifest import FileTreeManifest\n from conans.model.ref import PackageReference\n from conans.util.conan_v2_mode import conan_v2_property\n@@ -173,7 +174,7 @@ def _evaluate_node(self, node, build_mode, update, remotes):\n assert node.binary is None, \"Node.binary should be None if not locked\"\n pref = PackageReference(node.ref, node.package_id)\n self._process_node(node, pref, build_mode, update, remotes)\n- if node.binary == BINARY_MISSING:\n+ if node.binary in (BINARY_MISSING, BINARY_INVALID):\n if node.conanfile.compatible_packages:\n compatible_build_mode = BuildMode(None, self._out)\n for compatible_package in node.conanfile.compatible_packages:\n@@ -187,16 +188,19 @@ def _evaluate_node(self, node, build_mode, update, remotes):\n # NO Build mode\n self._process_node(node, pref, compatible_build_mode, update, remotes)\n assert node.binary is not None\n- if node.binary != BINARY_MISSING:\n+ if node.binary not in BINARY_MISSING:\n node.conanfile.output.info(\"Main binary package '%s' missing. 
Using \"\n \"compatible package '%s'\"\n % (node.package_id, package_id))\n+\n # Modifying package id under the hood, FIXME\n node._package_id = package_id\n # So they are available in package_info() method\n node.conanfile.settings.values = compatible_package.settings\n node.conanfile.options.values = compatible_package.options\n break\n+ if node.binary == BINARY_MISSING and node.package_id == PACKAGE_ID_INVALID:\n+ node.binary = BINARY_INVALID\n if node.binary == BINARY_MISSING and build_mode.allowed(node.conanfile):\n node.binary = BINARY_BUILD\n \n@@ -214,6 +218,12 @@ def _process_node(self, node, pref, build_mode, update, remotes):\n node.binary = BINARY_EDITABLE # TODO: PREV?\n return\n \n+ if pref.id == PACKAGE_ID_INVALID:\n+ # annotate pattern, so unused patterns in --build are not displayed as errors\n+ build_mode.forced(node.conanfile, node.ref)\n+ node.binary = BINARY_INVALID\n+ return\n+\n if self._evaluate_build(node, build_mode):\n return\n \n@@ -336,9 +346,17 @@ def _compute_package_id(self, node, default_package_id_mode, default_python_requ\n \"'self.cpp_info' access in package_id() method is deprecated\"):\n conanfile.package_id()\n \n+ if hasattr(conanfile, \"validate\") and callable(conanfile.validate):\n+ with conanfile_exception_formatter(str(conanfile), \"validate\"):\n+ try:\n+ conanfile.validate()\n+ except ConanInvalidConfiguration as e:\n+ conanfile.info.invalid = str(e)\n+\n info = conanfile.info\n node.package_id = info.package_id()\n \n+\n def evaluate_graph(self, deps_graph, build_mode, update, remotes, nodes_subset=None, root=None):\n default_package_id_mode = self._cache.config.default_package_id_mode\n default_python_requires_id_mode = self._cache.config.default_python_requires_id_mode\n@@ -346,6 +364,7 @@ def evaluate_graph(self, deps_graph, build_mode, update, remotes, nodes_subset=N\n self._propagate_options(node)\n \n self._compute_package_id(node, default_package_id_mode, default_python_requires_id_mode)\n+\n if node.recipe in (RECIPE_CONSUMER, RECIPE_VIRTUAL):\n continue\n if node.package_id == PACKAGE_ID_UNKNOWN:\n@@ -368,6 +387,7 @@ def reevaluate_node(self, node, remotes, build_mode, update):\n default_python_requires_id_mode = self._cache.config.default_python_requires_id_mode\n output.info(\"Unknown binary for %s, computing updated ID\" % str(node.ref))\n self._compute_package_id(node, default_package_id_mode, default_python_requires_id_mode)\n+\n output.info(\"Updated ID: %s\" % node.package_id)\n if node.recipe in (RECIPE_CONSUMER, RECIPE_VIRTUAL):\n return\ndiff --git a/conans/client/installer.py b/conans/client/installer.py\nindex d1f20b55a9a..0d8118b6da2 100644\n--- a/conans/client/installer.py\n+++ b/conans/client/installer.py\n@@ -10,7 +10,7 @@\n from conans.client.file_copier import report_copied_files\n from conans.client.generators import TXTGenerator\n from conans.client.graph.graph import BINARY_BUILD, BINARY_CACHE, BINARY_DOWNLOAD, BINARY_EDITABLE, \\\n- BINARY_MISSING, BINARY_SKIP, BINARY_UPDATE, BINARY_UNKNOWN, CONTEXT_HOST\n+ BINARY_MISSING, BINARY_SKIP, BINARY_UPDATE, BINARY_UNKNOWN, CONTEXT_HOST, BINARY_INVALID\n from conans.client.importer import remove_imports, run_imports\n from conans.client.packager import update_package_metadata\n from conans.client.recorder.action_recorder import INSTALL_ERROR_BUILDING, INSTALL_ERROR_MISSING, \\\n@@ -20,7 +20,7 @@\n from conans.client.tools.env import no_op\n from conans.client.tools.env import pythonpath\n from conans.errors import (ConanException, ConanExceptionInUserConanfileMethod,\n- 
conanfile_exception_formatter)\n+ conanfile_exception_formatter, ConanInvalidConfiguration)\n from conans.model.build_info import CppInfo, DepCppInfo\n from conans.model.conan_file import ConanFile\n from conans.model.editable_layout import EditableLayout\n@@ -307,14 +307,16 @@ def install(self, deps_graph, remotes, build_mode, update, keep_build=False, gra\n \n @staticmethod\n def _classify(nodes_by_level):\n- missing, downloads = [], []\n+ missing, invalid, downloads = [], [], []\n for level in nodes_by_level:\n for node in level:\n if node.binary == BINARY_MISSING:\n missing.append(node)\n+ elif node.binary == BINARY_INVALID:\n+ invalid.append(node)\n elif node.binary in (BINARY_UPDATE, BINARY_DOWNLOAD):\n downloads.append(node)\n- return missing, downloads\n+ return missing, invalid, downloads\n \n def _raise_missing(self, missing):\n if not missing:\n@@ -403,7 +405,11 @@ def _download_pkg(self, layout, node):\n \n def _build(self, nodes_by_level, keep_build, root_node, graph_info, remotes, build_mode, update):\n using_build_profile = bool(graph_info.profile_build)\n- missing, downloads = self._classify(nodes_by_level)\n+ missing, invalid, downloads = self._classify(nodes_by_level)\n+ if invalid:\n+ node = invalid[0] # Raise the first one\n+ msg = \"{}: Invalid ID: {}\".format(node.conanfile, node.conanfile.info.invalid)\n+ raise ConanInvalidConfiguration(msg)\n self._raise_missing(missing)\n processed_package_refs = set()\n self._download(downloads, processed_package_refs)\n@@ -427,6 +433,7 @@ def _build(self, nodes_by_level, keep_build, root_node, graph_info, remotes, bui\n self._binaries_analyzer.reevaluate_node(node, remotes, build_mode, update)\n if node.binary == BINARY_MISSING:\n self._raise_missing([node])\n+ # TODO: Check if BINARY_INVALID?\n _handle_system_requirements(conan_file, node.pref, self._cache, output)\n self._handle_node_cache(node, keep_build, processed_package_refs, remotes)\n \ndiff --git a/conans/model/info.py b/conans/model/info.py\nindex e1e0b6a2a6a..fc0cd4ec376 100644\n--- a/conans/model/info.py\n+++ b/conans/model/info.py\n@@ -14,6 +14,7 @@\n \n PREV_UNKNOWN = \"PREV unknown\"\n PACKAGE_ID_UNKNOWN = \"Package_ID_unknown\"\n+PACKAGE_ID_INVALID = \"INVALID\"\n \n \n class RequirementInfo(object):\n@@ -425,6 +426,7 @@ def copy(self):\n \"\"\" Useful for build_id implementation\n \"\"\"\n result = ConanInfo()\n+ result.invalid = self.invalid\n result.settings = self.settings.copy()\n result.options = self.options.copy()\n result.requires = self.requires.copy()\n@@ -435,6 +437,7 @@ def copy(self):\n def create(settings, options, prefs_direct, prefs_indirect, default_package_id_mode,\n python_requires, default_python_requires_id_mode):\n result = ConanInfo()\n+ result.invalid = None\n result.full_settings = settings\n result.settings = settings.copy()\n result.full_options = options\n@@ -461,6 +464,7 @@ def loads(text):\n \"requires\", \"full_requires\", \"scope\", \"recipe_hash\", \"env\"],\n raise_unexpected_field=False)\n result = ConanInfo()\n+ result.invalid = None\n result.settings = Values.loads(parser.settings)\n result.full_settings = Values.loads(parser.full_settings)\n result.options = OptionsValues.loads(parser.options)\n@@ -534,6 +538,8 @@ def package_id(self):\n \"\"\" The package_id of a conans is the sha1 of its specific requirements,\n options and settings\n \"\"\"\n+ if self.invalid:\n+ return PACKAGE_ID_INVALID\n result = [self.settings.sha]\n # Only are valid requires for OPtions those Non-Dev who are still in requires\n 
self.options.filter_used(self.requires.pkg_names)\ndiff --git a/conans/test/functional/package_id/test_validate.py b/conans/test/functional/package_id/test_validate.py\nnew file mode 100644\nindex 00000000000..e3f96474fd0\n--- /dev/null\n+++ b/conans/test/functional/package_id/test_validate.py\n@@ -0,0 +1,224 @@\n+import json\n+import textwrap\n+import unittest\n+\n+from conans.cli.exit_codes import ERROR_INVALID_CONFIGURATION\n+from conans.client.graph.graph import BINARY_INVALID\n+from conans.model.info import PACKAGE_ID_INVALID\n+from conans.test.assets.genconanfile import GenConanfile\n+from conans.test.utils.tools import TestClient\n+\n+\n+class TestValidate(unittest.TestCase):\n+\n+ def test_validate_create(self):\n+ client = TestClient()\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ from conans.errors import ConanInvalidConfiguration\n+ class Pkg(ConanFile):\n+ settings = \"os\"\n+\n+ def validate(self):\n+ if self.settings.os == \"Windows\":\n+ raise ConanInvalidConfiguration(\"Windows not supported\")\n+ \"\"\")\n+\n+ client.save({\"conanfile.py\": conanfile})\n+\n+ client.run(\"create . pkg/0.1@ -s os=Linux\")\n+ self.assertIn(\"pkg/0.1: Package 'cb054d0b3e1ca595dc66bc2339d40f1f8f04ab31' created\",\n+ client.out)\n+\n+ error = client.run(\"create . pkg/0.1@ -s os=Windows\", assert_error=True)\n+ self.assertEqual(error, ERROR_INVALID_CONFIGURATION)\n+ self.assertIn(\"ERROR: pkg/0.1: Invalid ID: Windows not supported\", client.out)\n+ client.run(\"info pkg/0.1@ -s os=Windows\")\n+ self.assertIn(\"ID: INVALID\", client.out)\n+ client.run(\"info pkg/0.1@ -s os=Windows --json=myjson\")\n+ myjson = json.loads(client.load(\"myjson\"))\n+ self.assertEqual(myjson[0][\"binary\"], BINARY_INVALID)\n+ self.assertEqual(myjson[0][\"id\"], PACKAGE_ID_INVALID)\n+\n+ def test_validate_compatible(self):\n+ client = TestClient()\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ from conans.errors import ConanInvalidConfiguration\n+ class Pkg(ConanFile):\n+ settings = \"os\"\n+\n+ def validate(self):\n+ if self.settings.os == \"Windows\":\n+ raise ConanInvalidConfiguration(\"Windows not supported\")\n+\n+ def package_id(self):\n+ if self.settings.os == \"Windows\":\n+ compatible_pkg = self.info.clone()\n+ compatible_pkg.settings.os = \"Linux\"\n+ self.compatible_packages.append(compatible_pkg)\n+ \"\"\")\n+\n+ client.save({\"conanfile.py\": conanfile})\n+\n+ client.run(\"create . pkg/0.1@ -s os=Linux\")\n+ self.assertIn(\"pkg/0.1: Package 'cb054d0b3e1ca595dc66bc2339d40f1f8f04ab31' created\",\n+ client.out)\n+\n+ client.run(\"create . pkg/0.1@ -s os=Windows\")\n+ self.assertIn(\"pkg/0.1: Main binary package 'INVALID' missing. \"\n+ \"Using compatible package 'cb054d0b3e1ca595dc66bc2339d40f1f8f04ab31'\",\n+ client.out)\n+ self.assertIn(\"pkg/0.1:cb054d0b3e1ca595dc66bc2339d40f1f8f04ab31 - Cache\", client.out)\n+ client.run(\"info pkg/0.1@ -s os=Windows\")\n+ self.assertIn(\"pkg/0.1: Main binary package 'INVALID' missing. 
\"\n+ \"Using compatible package 'cb054d0b3e1ca595dc66bc2339d40f1f8f04ab31'\",\n+ client.out)\n+ self.assertIn(\"ID: cb054d0b3e1ca595dc66bc2339d40f1f8f04ab31\", client.out)\n+\n+ def test_validate_compatible_also_invalid(self):\n+ client = TestClient()\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ from conans.errors import ConanInvalidConfiguration\n+ class Pkg(ConanFile):\n+ settings = \"os\", \"build_type\"\n+\n+ def validate(self):\n+ if self.settings.os == \"Windows\":\n+ raise ConanInvalidConfiguration(\"Windows not supported\")\n+\n+ def package_id(self):\n+ if self.settings.build_type == \"Debug\" and self.settings.os != \"Windows\":\n+ compatible_pkg = self.info.clone()\n+ compatible_pkg.settings.build_type = \"Release\"\n+ self.compatible_packages.append(compatible_pkg)\n+ \"\"\")\n+\n+ client.save({\"conanfile.py\": conanfile})\n+\n+ client.run(\"create . pkg/0.1@ -s os=Linux -s build_type=Release\")\n+ self.assertIn(\"pkg/0.1: Package '24c3aa2d6c5929d53bd86b31e020c55d96b265c7' created\",\n+ client.out)\n+ # compatible_packges fallback works\n+ client.run(\"install pkg/0.1@ -s os=Linux -s build_type=Debug\")\n+ self.assertIn(\"pkg/0.1:24c3aa2d6c5929d53bd86b31e020c55d96b265c7 - Cache\", client.out)\n+\n+ error = client.run(\"create . pkg/0.1@ -s os=Windows -s build_type=Release\",\n+ assert_error=True)\n+ self.assertEqual(error, ERROR_INVALID_CONFIGURATION)\n+ self.assertIn(\"ERROR: pkg/0.1: Invalid ID: Windows not supported\", client.out)\n+\n+ client.run(\"info pkg/0.1@ -s os=Windows\")\n+ self.assertIn(\"ID: INVALID\", client.out)\n+\n+ def test_validate_compatible_also_invalid_fail(self):\n+ client = TestClient()\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ from conans.errors import ConanInvalidConfiguration\n+ class Pkg(ConanFile):\n+ settings = \"os\", \"build_type\"\n+\n+ def validate(self):\n+ if self.settings.os == \"Windows\":\n+ raise ConanInvalidConfiguration(\"Windows not supported\")\n+\n+ def package_id(self):\n+ if self.settings.build_type == \"Debug\":\n+ compatible_pkg = self.info.clone()\n+ compatible_pkg.settings.build_type = \"Release\"\n+ self.compatible_packages.append(compatible_pkg)\n+ \"\"\")\n+\n+ client.save({\"conanfile.py\": conanfile})\n+\n+ client.run(\"create . pkg/0.1@ -s os=Linux -s build_type=Release\")\n+ self.assertIn(\"pkg/0.1: Package '24c3aa2d6c5929d53bd86b31e020c55d96b265c7' created\",\n+ client.out)\n+ # compatible_packges fallback works\n+ client.run(\"install pkg/0.1@ -s os=Linux -s build_type=Debug\")\n+ self.assertIn(\"pkg/0.1:24c3aa2d6c5929d53bd86b31e020c55d96b265c7 - Cache\", client.out)\n+\n+ # Windows invalid configuration\n+ error = client.run(\"create . pkg/0.1@ -s os=Windows -s build_type=Release\",\n+ assert_error=True)\n+ self.assertEqual(error, ERROR_INVALID_CONFIGURATION)\n+ self.assertIn(\"ERROR: pkg/0.1: Invalid ID: Windows not supported\", client.out)\n+\n+ error = client.run(\"install pkg/0.1@ -s os=Windows -s build_type=Release\",\n+ assert_error=True)\n+ self.assertEqual(error, ERROR_INVALID_CONFIGURATION)\n+ self.assertIn(\"ERROR: pkg/0.1: Invalid ID: Windows not supported\", client.out)\n+\n+ # Windows missing binary: INVALID\n+ error = client.run(\"install pkg/0.1@ -s os=Windows -s build_type=Debug\",\n+ assert_error=True)\n+ self.assertEqual(error, ERROR_INVALID_CONFIGURATION)\n+ self.assertIn(\"ERROR: pkg/0.1: Invalid ID: Windows not supported\", client.out)\n+\n+ error = client.run(\"create . 
pkg/0.1@ -s os=Windows -s build_type=Debug\",\n+ assert_error=True)\n+ self.assertEqual(error, ERROR_INVALID_CONFIGURATION)\n+ self.assertIn(\"ERROR: pkg/0.1: Invalid ID: Windows not supported\", client.out)\n+\n+ # info\n+ client.run(\"info pkg/0.1@ -s os=Windows\")\n+ self.assertIn(\"ID: INVALID\", client.out)\n+ client.run(\"info pkg/0.1@ -s os=Windows -s build_type=Debug\")\n+ self.assertIn(\"ID: INVALID\", client.out)\n+\n+ def test_validate_options(self):\n+ client = TestClient()\n+ client.save({\"conanfile.py\": GenConanfile().with_option(\"myoption\", [1, 2, 3])\n+ .with_default_option(\"myoption\", 1)})\n+ client.run(\"create . dep/0.1@\")\n+ client.run(\"create . dep/0.1@ -o dep:myoption=2\")\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ from conans.errors import ConanInvalidConfiguration\n+ class Pkg(ConanFile):\n+ requires = \"dep/0.1\"\n+\n+ def validate(self):\n+ if self.options[\"dep\"].myoption == 2:\n+ raise ConanInvalidConfiguration(\"Option 2 of 'dep' not supported\")\n+ \"\"\")\n+\n+ client.save({\"conanfile.py\": conanfile})\n+ client.run(\"create . pkg1/0.1@ -o dep:myoption=1\")\n+\n+ client.save({\"conanfile.py\": GenConanfile().with_requires(\"dep/0.1\")\n+ .with_default_option(\"dep:myoption\", 2)})\n+ client.run(\"create . pkg2/0.1@\")\n+\n+ client.save({\"conanfile.py\": GenConanfile().with_requires(\"pkg1/0.1\", \"pkg2/0.1\")})\n+ error = client.run(\"install .\", assert_error=True)\n+ self.assertEqual(error, ERROR_INVALID_CONFIGURATION)\n+ self.assertIn(\"ERROR: pkg1/0.1: Invalid ID: Option 2 of 'dep' not supported\", client.out)\n+\n+ def test_validate_requires(self):\n+ client = TestClient()\n+ client.save({\"conanfile.py\": GenConanfile()})\n+ client.run(\"create . dep/0.1@\")\n+ client.run(\"create . dep/0.2@\")\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ from conans.errors import ConanInvalidConfiguration\n+ class Pkg(ConanFile):\n+ requires = \"dep/0.1\"\n+\n+ def validate(self):\n+ # FIXME: This is a ugly interface\n+ # if self.info.requires[\"dep\"].full_version ==\n+ if self.requires[\"dep\"].ref.version > \"0.1\":\n+ raise ConanInvalidConfiguration(\"dep> 0.1 is not supported\")\n+ \"\"\")\n+\n+ client.save({\"conanfile.py\": conanfile})\n+ client.run(\"create . pkg1/0.1@\")\n+\n+ client.save({\"conanfile.py\": GenConanfile().with_requires(\"pkg1/0.1\", \"dep/0.2\")})\n+ error = client.run(\"install .\", assert_error=True)\n+ self.assertEqual(error, ERROR_INVALID_CONFIGURATION)\n+ self.assertIn(\"ERROR: pkg1/0.1: Invalid ID: dep> 0.1 is not supported\", client.out)\n"
}
|
[
{
"diff_hunk": "@@ -0,0 +1,224 @@\n+import json\n+import textwrap\n+import unittest\n+\n+from conans.cli.exit_codes import ERROR_INVALID_CONFIGURATION\n+from conans.client.graph.graph import BINARY_INVALID\n+from conans.model.info import PACKAGE_ID_INVALID\n+from conans.test.assets.genconanfile import GenConanfile\n+from conans.test.utils.tools import TestClient\n+\n+\n+class TestValidate(unittest.TestCase):\n+\n+ def test_validate_create(self):\n+ client = TestClient()\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ from conans.errors import ConanInvalidConfiguration\n+ class Pkg(ConanFile):\n+ settings = \"os\"\n+\n+ def validate(self):\n+ if self.settings.os == \"Windows\":\n+ raise ConanInvalidConfiguration(\"Windows not supported\")\n+ \"\"\")\n+\n+ client.save({\"conanfile.py\": conanfile})\n+\n+ client.run(\"create . pkg/0.1@ -s os=Linux\")\n+ self.assertIn(\"pkg/0.1: Package 'cb054d0b3e1ca595dc66bc2339d40f1f8f04ab31' created\",\n+ client.out)\n+\n+ error = client.run(\"create . pkg/0.1@ -s os=Windows\", assert_error=True)\n+ self.assertEqual(error, ERROR_INVALID_CONFIGURATION)\n+ self.assertIn(\"ERROR: pkg/0.1: Invalid ID: Windows not supported\", client.out)\n+ client.run(\"info pkg/0.1@ -s os=Windows\")\n+ self.assertIn(\"ID: INVALID\", client.out)\n+ client.run(\"info pkg/0.1@ -s os=Windows --json=myjson\")\n+ myjson = json.loads(client.load(\"myjson\"))\n+ self.assertEqual(myjson[0][\"binary\"], BINARY_INVALID)\n+ self.assertEqual(myjson[0][\"id\"], PACKAGE_ID_INVALID)",
"line": null,
"original_line": 41,
"original_start_line": null,
"path": "conans/test/functional/package_id/test_validate.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n self.assertEqual(myjson[0][\"id\"], 'Invalid')\r\n```"
}
] |
651966ce31996f887ca62c522f84d76a8614d64f
|
diff --git a/conans/client/graph/graph.py b/conans/client/graph/graph.py
index 8c842c974c9..c49bb8e3851 100644
--- a/conans/client/graph/graph.py
+++ b/conans/client/graph/graph.py
@@ -21,6 +21,7 @@
BINARY_SKIP = "Skip"
BINARY_EDITABLE = "Editable"
BINARY_UNKNOWN = "Unknown"
+BINARY_INVALID = "Invalid"
CONTEXT_HOST = "host"
CONTEXT_BUILD = "build"
diff --git a/conans/client/graph/graph_binaries.py b/conans/client/graph/graph_binaries.py
index 6b5e5c2fd8f..1de99fed8ca 100644
--- a/conans/client/graph/graph_binaries.py
+++ b/conans/client/graph/graph_binaries.py
@@ -1,10 +1,11 @@
from conans.client.graph.build_mode import BuildMode
from conans.client.graph.graph import (BINARY_BUILD, BINARY_CACHE, BINARY_DOWNLOAD, BINARY_MISSING,
BINARY_UPDATE, RECIPE_EDITABLE, BINARY_EDITABLE,
- RECIPE_CONSUMER, RECIPE_VIRTUAL, BINARY_SKIP, BINARY_UNKNOWN)
+ RECIPE_CONSUMER, RECIPE_VIRTUAL, BINARY_SKIP, BINARY_UNKNOWN,
+ BINARY_INVALID)
from conans.errors import NoRemoteAvailable, NotFoundException, conanfile_exception_formatter, \
- ConanException
-from conans.model.info import ConanInfo, PACKAGE_ID_UNKNOWN
+ ConanException, ConanInvalidConfiguration
+from conans.model.info import ConanInfo, PACKAGE_ID_UNKNOWN, PACKAGE_ID_INVALID
from conans.model.manifest import FileTreeManifest
from conans.model.ref import PackageReference
from conans.util.conan_v2_mode import conan_v2_property
@@ -173,7 +174,7 @@ def _evaluate_node(self, node, build_mode, update, remotes):
assert node.binary is None, "Node.binary should be None if not locked"
pref = PackageReference(node.ref, node.package_id)
self._process_node(node, pref, build_mode, update, remotes)
- if node.binary == BINARY_MISSING:
+ if node.binary in (BINARY_MISSING, BINARY_INVALID):
if node.conanfile.compatible_packages:
compatible_build_mode = BuildMode(None, self._out)
for compatible_package in node.conanfile.compatible_packages:
@@ -187,16 +188,19 @@ def _evaluate_node(self, node, build_mode, update, remotes):
# NO Build mode
self._process_node(node, pref, compatible_build_mode, update, remotes)
assert node.binary is not None
- if node.binary != BINARY_MISSING:
+ if node.binary not in (BINARY_MISSING, ):
node.conanfile.output.info("Main binary package '%s' missing. Using "
"compatible package '%s'"
% (node.package_id, package_id))
+
# Modifying package id under the hood, FIXME
node._package_id = package_id
# So they are available in package_info() method
node.conanfile.settings.values = compatible_package.settings
node.conanfile.options.values = compatible_package.options
break
+ if node.binary == BINARY_MISSING and node.package_id == PACKAGE_ID_INVALID:
+ node.binary = BINARY_INVALID
if node.binary == BINARY_MISSING and build_mode.allowed(node.conanfile):
node.binary = BINARY_BUILD
@@ -214,6 +218,12 @@ def _process_node(self, node, pref, build_mode, update, remotes):
node.binary = BINARY_EDITABLE # TODO: PREV?
return
+ if pref.id == PACKAGE_ID_INVALID:
+ # annotate pattern, so unused patterns in --build are not displayed as errors
+ build_mode.forced(node.conanfile, node.ref)
+ node.binary = BINARY_INVALID
+ return
+
if self._evaluate_build(node, build_mode):
return
@@ -336,6 +346,13 @@ def _compute_package_id(self, node, default_package_id_mode, default_python_requ
"'self.cpp_info' access in package_id() method is deprecated"):
conanfile.package_id()
+ if hasattr(conanfile, "validate") and callable(conanfile.validate):
+ with conanfile_exception_formatter(str(conanfile), "validate"):
+ try:
+ conanfile.validate()
+ except ConanInvalidConfiguration as e:
+ conanfile.info.invalid = str(e)
+
info = conanfile.info
node.package_id = info.package_id()
diff --git a/conans/client/installer.py b/conans/client/installer.py
index a9f5057eef8..80b1becf6f1 100644
--- a/conans/client/installer.py
+++ b/conans/client/installer.py
@@ -10,7 +10,7 @@
from conans.client.file_copier import report_copied_files
from conans.client.generators import TXTGenerator, write_toolchain
from conans.client.graph.graph import BINARY_BUILD, BINARY_CACHE, BINARY_DOWNLOAD, BINARY_EDITABLE, \
- BINARY_MISSING, BINARY_SKIP, BINARY_UPDATE, BINARY_UNKNOWN, CONTEXT_HOST
+ BINARY_MISSING, BINARY_SKIP, BINARY_UPDATE, BINARY_UNKNOWN, CONTEXT_HOST, BINARY_INVALID
from conans.client.importer import remove_imports, run_imports
from conans.client.packager import update_package_metadata
from conans.client.recorder.action_recorder import INSTALL_ERROR_BUILDING, INSTALL_ERROR_MISSING, \
@@ -19,7 +19,7 @@
from conans.client.tools.env import no_op
from conans.client.tools.env import pythonpath
from conans.errors import (ConanException, ConanExceptionInUserConanfileMethod,
- conanfile_exception_formatter)
+ conanfile_exception_formatter, ConanInvalidConfiguration)
from conans.model.build_info import CppInfo, DepCppInfo
from conans.model.conan_file import ConanFile
from conans.model.editable_layout import EditableLayout
@@ -306,14 +306,16 @@ def install(self, deps_graph, remotes, build_mode, update, keep_build=False, gra
@staticmethod
def _classify(nodes_by_level):
- missing, downloads = [], []
+ missing, invalid, downloads = [], [], []
for level in nodes_by_level:
for node in level:
if node.binary == BINARY_MISSING:
missing.append(node)
+ elif node.binary == BINARY_INVALID:
+ invalid.append(node)
elif node.binary in (BINARY_UPDATE, BINARY_DOWNLOAD):
downloads.append(node)
- return missing, downloads
+ return missing, invalid, downloads
def _raise_missing(self, missing):
if not missing:
@@ -402,7 +404,12 @@ def _download_pkg(self, layout, node):
def _build(self, nodes_by_level, keep_build, root_node, graph_info, remotes, build_mode, update):
using_build_profile = bool(graph_info.profile_build)
- missing, downloads = self._classify(nodes_by_level)
+ missing, invalid, downloads = self._classify(nodes_by_level)
+ if invalid:
+ msg = ["There are invalid packages (packages that cannot exist for this configuration):"]
+ for node in invalid:
+ msg.append("{}: Invalid ID: {}".format(node.conanfile, node.conanfile.info.invalid))
+ raise ConanInvalidConfiguration("\n".join(msg))
self._raise_missing(missing)
processed_package_refs = set()
self._download(downloads, processed_package_refs)
diff --git a/conans/model/info.py b/conans/model/info.py
index e1e0b6a2a6a..e5d492d4cbf 100644
--- a/conans/model/info.py
+++ b/conans/model/info.py
@@ -14,6 +14,7 @@
PREV_UNKNOWN = "PREV unknown"
PACKAGE_ID_UNKNOWN = "Package_ID_unknown"
+PACKAGE_ID_INVALID = "INVALID"
class RequirementInfo(object):
@@ -65,6 +66,8 @@ def dumps(self):
def sha(self):
if self.package_id == PACKAGE_ID_UNKNOWN or self.package_revision == PREV_UNKNOWN:
return None
+ if self.package_id == PACKAGE_ID_INVALID:
+ return PACKAGE_ID_INVALID
vals = [str(n) for n in (self.name, self.version, self.user, self.channel, self.package_id)]
# This is done later to NOT affect existing package-IDs (before revisions)
if self.recipe_revision:
@@ -218,6 +221,8 @@ def sha(self):
s = data[key].sha
if s is None:
return None
+ if s == PACKAGE_ID_INVALID:
+ return PACKAGE_ID_INVALID
result.append(s)
return sha1('\n'.join(result).encode())
@@ -425,6 +430,7 @@ def copy(self):
""" Useful for build_id implementation
"""
result = ConanInfo()
+ result.invalid = self.invalid
result.settings = self.settings.copy()
result.options = self.options.copy()
result.requires = self.requires.copy()
@@ -435,6 +441,7 @@ def copy(self):
def create(settings, options, prefs_direct, prefs_indirect, default_package_id_mode,
python_requires, default_python_requires_id_mode):
result = ConanInfo()
+ result.invalid = None
result.full_settings = settings
result.settings = settings.copy()
result.full_options = options
@@ -461,6 +468,7 @@ def loads(text):
"requires", "full_requires", "scope", "recipe_hash", "env"],
raise_unexpected_field=False)
result = ConanInfo()
+ result.invalid = None
result.settings = Values.loads(parser.settings)
result.full_settings = Values.loads(parser.full_settings)
result.options = OptionsValues.loads(parser.options)
@@ -534,6 +542,8 @@ def package_id(self):
""" The package_id of a conans is the sha1 of its specific requirements,
options and settings
"""
+ if self.invalid:
+ return PACKAGE_ID_INVALID
result = [self.settings.sha]
# Only are valid requires for OPtions those Non-Dev who are still in requires
self.options.filter_used(self.requires.pkg_names)
@@ -541,6 +551,9 @@ def package_id(self):
requires_sha = self.requires.sha
if requires_sha is None:
return PACKAGE_ID_UNKNOWN
+ if requires_sha == PACKAGE_ID_INVALID:
+ self.invalid = "Invalid transitive dependencies"
+ return PACKAGE_ID_INVALID
result.append(requires_sha)
if self.python_requires:
result.append(self.python_requires.sha)
diff --git a/conans/test/functional/package_id/test_validate.py b/conans/test/functional/package_id/test_validate.py
new file mode 100644
index 00000000000..11730334816
--- /dev/null
+++ b/conans/test/functional/package_id/test_validate.py
@@ -0,0 +1,249 @@
+import json
+import textwrap
+import unittest
+
+from conans.cli.exit_codes import ERROR_INVALID_CONFIGURATION
+from conans.client.graph.graph import BINARY_INVALID
+from conans.test.assets.genconanfile import GenConanfile
+from conans.test.utils.tools import TestClient
+
+
+class TestValidate(unittest.TestCase):
+
+ def test_validate_create(self):
+ client = TestClient()
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+ from conans.errors import ConanInvalidConfiguration
+ class Pkg(ConanFile):
+ settings = "os"
+
+ def validate(self):
+ if self.settings.os == "Windows":
+ raise ConanInvalidConfiguration("Windows not supported")
+ """)
+
+ client.save({"conanfile.py": conanfile})
+
+ client.run("create . pkg/0.1@ -s os=Linux")
+ self.assertIn("pkg/0.1: Package 'cb054d0b3e1ca595dc66bc2339d40f1f8f04ab31' created",
+ client.out)
+
+ error = client.run("create . pkg/0.1@ -s os=Windows", assert_error=True)
+ self.assertEqual(error, ERROR_INVALID_CONFIGURATION)
+ self.assertIn("pkg/0.1: Invalid ID: Windows not supported", client.out)
+ client.run("info pkg/0.1@ -s os=Windows")
+ self.assertIn("ID: INVALID", client.out)
+ client.run("info pkg/0.1@ -s os=Windows --json=myjson")
+ myjson = json.loads(client.load("myjson"))
+ self.assertEqual(myjson[0]["binary"], BINARY_INVALID)
+ self.assertEqual(myjson[0]["id"], 'INVALID')
+
+ def test_validate_compatible(self):
+ client = TestClient()
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+ from conans.errors import ConanInvalidConfiguration
+ class Pkg(ConanFile):
+ settings = "os"
+
+ def validate(self):
+ if self.settings.os == "Windows":
+ raise ConanInvalidConfiguration("Windows not supported")
+
+ def package_id(self):
+ if self.settings.os == "Windows":
+ compatible_pkg = self.info.clone()
+ compatible_pkg.settings.os = "Linux"
+ self.compatible_packages.append(compatible_pkg)
+ """)
+
+ client.save({"conanfile.py": conanfile})
+
+ client.run("create . pkg/0.1@ -s os=Linux")
+ self.assertIn("pkg/0.1: Package 'cb054d0b3e1ca595dc66bc2339d40f1f8f04ab31' created",
+ client.out)
+
+ client.run("create . pkg/0.1@ -s os=Windows")
+ self.assertIn("pkg/0.1: Main binary package 'INVALID' missing. "
+ "Using compatible package 'cb054d0b3e1ca595dc66bc2339d40f1f8f04ab31'",
+ client.out)
+ self.assertIn("pkg/0.1:cb054d0b3e1ca595dc66bc2339d40f1f8f04ab31 - Cache", client.out)
+ client.run("info pkg/0.1@ -s os=Windows")
+ self.assertIn("pkg/0.1: Main binary package 'INVALID' missing. "
+ "Using compatible package 'cb054d0b3e1ca595dc66bc2339d40f1f8f04ab31'",
+ client.out)
+ self.assertIn("ID: cb054d0b3e1ca595dc66bc2339d40f1f8f04ab31", client.out)
+
+ def test_validate_compatible_also_invalid(self):
+ client = TestClient()
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+ from conans.errors import ConanInvalidConfiguration
+ class Pkg(ConanFile):
+ settings = "os", "build_type"
+
+ def validate(self):
+ if self.settings.os == "Windows":
+ raise ConanInvalidConfiguration("Windows not supported")
+
+ def package_id(self):
+ if self.settings.build_type == "Debug" and self.settings.os != "Windows":
+ compatible_pkg = self.info.clone()
+ compatible_pkg.settings.build_type = "Release"
+ self.compatible_packages.append(compatible_pkg)
+ """)
+
+ client.save({"conanfile.py": conanfile})
+
+ client.run("create . pkg/0.1@ -s os=Linux -s build_type=Release")
+ self.assertIn("pkg/0.1: Package '24c3aa2d6c5929d53bd86b31e020c55d96b265c7' created",
+ client.out)
+ # compatible_packges fallback works
+ client.run("install pkg/0.1@ -s os=Linux -s build_type=Debug")
+ self.assertIn("pkg/0.1:24c3aa2d6c5929d53bd86b31e020c55d96b265c7 - Cache", client.out)
+
+ error = client.run("create . pkg/0.1@ -s os=Windows -s build_type=Release",
+ assert_error=True)
+ self.assertEqual(error, ERROR_INVALID_CONFIGURATION)
+ self.assertIn("pkg/0.1: Invalid ID: Windows not supported", client.out)
+
+ client.run("info pkg/0.1@ -s os=Windows")
+ self.assertIn("ID: INVALID", client.out)
+
+ def test_validate_compatible_also_invalid_fail(self):
+ client = TestClient()
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+ from conans.errors import ConanInvalidConfiguration
+ class Pkg(ConanFile):
+ settings = "os", "build_type"
+
+ def validate(self):
+ if self.settings.os == "Windows":
+ raise ConanInvalidConfiguration("Windows not supported")
+
+ def package_id(self):
+ if self.settings.build_type == "Debug":
+ compatible_pkg = self.info.clone()
+ compatible_pkg.settings.build_type = "Release"
+ self.compatible_packages.append(compatible_pkg)
+ """)
+
+ client.save({"conanfile.py": conanfile})
+
+ client.run("create . pkg/0.1@ -s os=Linux -s build_type=Release")
+ self.assertIn("pkg/0.1: Package '24c3aa2d6c5929d53bd86b31e020c55d96b265c7' created",
+ client.out)
+ # compatible_packges fallback works
+ client.run("install pkg/0.1@ -s os=Linux -s build_type=Debug")
+ self.assertIn("pkg/0.1:24c3aa2d6c5929d53bd86b31e020c55d96b265c7 - Cache", client.out)
+
+ # Windows invalid configuration
+ error = client.run("create . pkg/0.1@ -s os=Windows -s build_type=Release",
+ assert_error=True)
+ self.assertEqual(error, ERROR_INVALID_CONFIGURATION)
+ self.assertIn("pkg/0.1: Invalid ID: Windows not supported", client.out)
+
+ error = client.run("install pkg/0.1@ -s os=Windows -s build_type=Release",
+ assert_error=True)
+ self.assertEqual(error, ERROR_INVALID_CONFIGURATION)
+ self.assertIn("pkg/0.1: Invalid ID: Windows not supported", client.out)
+
+ # Windows missing binary: INVALID
+ error = client.run("install pkg/0.1@ -s os=Windows -s build_type=Debug",
+ assert_error=True)
+ self.assertEqual(error, ERROR_INVALID_CONFIGURATION)
+ self.assertIn("pkg/0.1: Invalid ID: Windows not supported", client.out)
+
+ error = client.run("create . pkg/0.1@ -s os=Windows -s build_type=Debug",
+ assert_error=True)
+ self.assertEqual(error, ERROR_INVALID_CONFIGURATION)
+ self.assertIn("pkg/0.1: Invalid ID: Windows not supported", client.out)
+
+ # info
+ client.run("info pkg/0.1@ -s os=Windows")
+ self.assertIn("ID: INVALID", client.out)
+ client.run("info pkg/0.1@ -s os=Windows -s build_type=Debug")
+ self.assertIn("ID: INVALID", client.out)
+
+ def test_validate_options(self):
+ client = TestClient()
+ client.save({"conanfile.py": GenConanfile().with_option("myoption", [1, 2, 3])
+ .with_default_option("myoption", 1)})
+ client.run("create . dep/0.1@")
+ client.run("create . dep/0.1@ -o dep:myoption=2")
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+ from conans.errors import ConanInvalidConfiguration
+ class Pkg(ConanFile):
+ requires = "dep/0.1"
+
+ def validate(self):
+ if self.options["dep"].myoption == 2:
+ raise ConanInvalidConfiguration("Option 2 of 'dep' not supported")
+ """)
+
+ client.save({"conanfile.py": conanfile})
+ client.run("create . pkg1/0.1@ -o dep:myoption=1")
+
+ client.save({"conanfile.py": GenConanfile().with_requires("dep/0.1")
+ .with_default_option("dep:myoption", 2)})
+ client.run("create . pkg2/0.1@")
+
+ client.save({"conanfile.py": GenConanfile().with_requires("pkg1/0.1", "pkg2/0.1")})
+ error = client.run("install .", assert_error=True)
+ self.assertEqual(error, ERROR_INVALID_CONFIGURATION)
+ self.assertIn("pkg1/0.1: Invalid ID: Option 2 of 'dep' not supported", client.out)
+
+ def test_validate_requires(self):
+ client = TestClient()
+ client.save({"conanfile.py": GenConanfile()})
+ client.run("create . dep/0.1@")
+ client.run("create . dep/0.2@")
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+ from conans.errors import ConanInvalidConfiguration
+ class Pkg(ConanFile):
+ requires = "dep/0.1"
+
+ def validate(self):
+ # FIXME: This is a ugly interface DO NOT MAKE IT PUBLIC
+ # if self.info.requires["dep"].full_version ==
+ if self.requires["dep"].ref.version > "0.1":
+ raise ConanInvalidConfiguration("dep> 0.1 is not supported")
+ """)
+
+ client.save({"conanfile.py": conanfile})
+ client.run("create . pkg1/0.1@")
+
+ client.save({"conanfile.py": GenConanfile().with_requires("pkg1/0.1", "dep/0.2")})
+ error = client.run("install .", assert_error=True)
+ self.assertEqual(error, ERROR_INVALID_CONFIGURATION)
+ self.assertIn("pkg1/0.1: Invalid ID: dep> 0.1 is not supported", client.out)
+
+ def test_validate_package_id_mode(self):
+ client = TestClient()
+ client.run("config set general.default_package_id_mode=full_package_mode")
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+ from conans.errors import ConanInvalidConfiguration
+ class Pkg(ConanFile):
+ settings = "os"
+
+ def validate(self):
+ if self.settings.os == "Windows":
+ raise ConanInvalidConfiguration("Windows not supported")
+ """)
+ client.save({"conanfile.py": conanfile})
+ client.run("export . dep/0.1@")
+
+ client.save({"conanfile.py": GenConanfile().with_requires("dep/0.1")})
+ error = client.run("create . pkg/0.1@ -s os=Windows", assert_error=True)
+ self.assertEqual(error, ERROR_INVALID_CONFIGURATION)
+ self.assertIn("dep/0.1:INVALID - Invalid", client.out)
+ self.assertIn("pkg/0.1:INVALID - Invalid", client.out)
+ self.assertIn("ERROR: There are invalid packages (packages that cannot "
+ "exist for this configuration):", client.out)
+ self.assertIn("dep/0.1: Invalid ID: Windows not supported", client.out)
+ self.assertIn("pkg/0.1: Invalid ID: Invalid transitive dependencies", client.out)
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
}
|
conan-io__conan-8457@d5d238e
|
conan-io/conan
|
Python
| 8,457
|
New AutotoolsDeps, AutotoolsToolchain helpers in conan.tools.gnu
|
Changelog: Feature: New AutotoolsDeps, AutotoolsToolchain helpers in conan.tools.gnu
Docs: https://github.com/conan-io/docs/pull/2057
Close https://github.com/conan-io/conan/issues/7070 (will invert the logic in _GLIBCXX_USE_CXX11_ABI)
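For context, a minimal sketch of how these new helpers are wired into a `conanfile.py`, adapted from the test added in this PR (the `hello/0.1` requirement is just the example package used there):

```python
from conans import ConanFile
from conan.tools.gnu import AutotoolsToolchain, Autotools, AutotoolsDeps


class TestConan(ConanFile):
    requires = "hello/0.1"
    settings = "os", "compiler", "arch", "build_type"
    exports_sources = "configure.ac", "Makefile.am", "main.cpp"

    def generate(self):
        # AutotoolsDeps writes autotoolsdeps.sh/.bat with CPPFLAGS/LIBS/LDFLAGS from dependencies
        AutotoolsDeps(self).generate()
        # AutotoolsToolchain writes autotools.sh/.bat with flags derived from the settings
        AutotoolsToolchain(self).generate()

    def build(self):
        autotools = Autotools(self)
        autotools.configure()
        autotools.make()
```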
|
2021-02-08T00:45:55Z
|
[bug] Invert the logic of _GLIBCXX_USE_CXX11_ABI ?
### Environment Details (include every applicable attribute)
* Operating System+version: Ubuntu 18.04
* Compiler+version: gcc9
* Conan version: 1.25.1
* Python version: 3.6.9
### Steps to reproduce (Include if Applicable)
The default for _GLIBCXX_USE_CXX11_ABI is 1 for several years now. Can we invert the logic of compiler.libcxx to use libstdc++ancient for the older version and use libstdc++ by default, or have libstdc++ _NOT_ emit _GLIBCXX_USE_CXX11_ABI ?
The reason I'm asking this is that we have a rule for formatting global macros in clang-tidy which is tripped by this macro, and there is no way to suppress it in tidy, since it does not come from a source file, but it is injected on the command line.
I suppose this is a bug in tidy, but there is some redundancy in Conan / CMake injecting this variable explicitly when it is not needed for quite some time now.
|
Hi @0x8000-0000
> The default for _GLIBCXX_USE_CXX11_ABI is 1 for several years now. Can we invert the logic of compiler.libcxx to use libstdc++ancient for the older version and use libstdc++ by default, or have libstdc++ NOT emit _GLIBCXX_USE_CXX11_ABI ?
I am not sure I understood that. Is it that you are using the default auto-detected profile, which uses ``compiler.libcxx=libstdc++``?
If that is the case, this is not a bug; it is by design and cannot be changed without breaking. The reason it is this way is wider binary compatibility with older distros; even modern compilers can't upgrade to libstdc++11 in older distros: https://blog.conan.io/2016/03/22/From-CMake-syntax-to-libstdc++-ABI-incompatibiliy-migrations-are-always-hard.html
Using the default profile is not recommended for production. It is better to use your own profiles (they can be managed with ``conan config install``) with the ``compiler.libcxx=libstdc++11`` value, or to dynamically change the default one with ``conan profile update compiler.libcxx=libstdc++11 default``.
Please let me know if that helps; most likely I didn't understand this well enough.
I have compiler.libcxx=libstdc++11 in my profile, and it injects the definition on the command line when building, even though it is redundant given the compiler version I'm using.
The [dual ABI](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html) page says:
> Using the default configuration options for GCC the default value of the macro is 1 which causes the new ABI to be active, so to use the old ABI you must explicitly define the macro to 0 before including any library headers. (Be aware that some GNU/Linux distributions configure GCC 5 differently so that the default value of the macro is 0 and users must define it to 1 to enable the new ABI.)
So since it was introduced, on most distributions the value was 1.
What I am asking in effect is that for `compiler.libcxx=libstdc++11` with GCC9 we should not get a define, and for `compiler.libcxx=libstdc++` with GCC9 we should get -D_GLIBCXX_USE_CXX11_ABI=0, otherwise we'll keep adding this define until the end of time ;)
This is the bug in clang-tidy: https://bugs.llvm.org/show_bug.cgi?id=42635
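The PR that closes this issue implements exactly this inverted check in the new `AutotoolsToolchain` (see `_cxx11_abi_define` in the patch further below); a simplified sketch of that logic:

```python
# Sketch of the inverted logic: the define is only emitted when the *old* ABI
# (libstdc++) is requested; libstdc++11 gets no define at all.
def cxx11_abi_define(settings):
    libcxx = settings.get_safe("compiler.libcxx")
    if not libcxx:
        return None
    compiler = settings.get_safe("compiler.base") or settings.get_safe("compiler")
    if compiler == "gcc" and libcxx == "libstdc++":
        return "_GLIBCXX_USE_CXX11_ABI=0"
    return None
```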
The problem I see with this is that it may not be evident what the default value is for all the compilers we support, including all their versions.
Currently, we are adjusting `_GLIBCXX_USE_CXX11_ABI=0` or `_GLIBCXX_USE_CXX11_ABI=1` for the compilers `gcc`, `clang` & `apple-clang`.
@0x8000-0000 do you know if we can find this information on the GNU website?
> The problem I see with this is that it may not be evident what the default value is for all the compilers we support, including all their versions.
>
> Currently, we are adjusting `_GLIBCXX_USE_CXX11_ABI=0` or `_GLIBCXX_USE_CXX11_ABI=1` for the compilers `gcc`, `clang` & `apple-clang`.
>
> @0x8000-0000 do you know if we can find this information on the GNU website?
I don't know, and as documented by the page I linked, it is possible that distributions are setting up the default differently than what upstream has.
The way the configuration setting was defined in Conan, it is not clear what the plan for sunsetting it is. GCC5 was released 5 years ago, and when it was released the default was to use the new ABI. Perhaps this should have been followed here: users should have had to explicitly select the old ABI, and by default follow what the compiler does.
With the current semantics of compiler.libcxx, I am forced to make a choice, and it is possible I'll make the wrong one.
@0x8000-0000 I understand the hassle of being forced to set `compiler.libcxx`; it is something you don't worry about in a small project if you are not using Conan. However, as Conan manages binary compatibility, I think this is something to really take into account.
Given the lack of information about the defaults, I find it difficult to rely on just the compiler's defaults, and I consider changing this a bit risky for the value it brings. @uilianries @SSE4 WDYT about this topic?
For gcc you can find out if your compiler was built with `_GLIBCXX_USE_CXX11_ABI=0` by running `gcc -v` and looking for `--with-default-libstdcxx-abi=gcc4-compatible`. For example:
```
$ gcc --version
gcc (GCC) 5.3.1 20160406 (Red Hat 5.3.1-6)
$ gcc -v 2>&1 | sed -n 's/.*\(--with-default-libstdcxx-abi=gcc4-compatible\).*/\1/p'
--with-default-libstdcxx-abi=gcc4-compatible
```
The Red Hat Devtoolset compiler suites continue to use `_GLIBCXX_USE_CXX11_ABI=0` all the way up to gcc 8 for backwards compatibility. For example:
```
$ gcc --version
gcc (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3)
$ gcc -v 2>&1 | sed -n 's/.*\(--with-default-libstdcxx-abi=gcc4-compatible\).*/\1/p'
--with-default-libstdcxx-abi=gcc4-compatible
```
Devtoolset is very common in Red Hat Enterprise Linux shops.
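A hypothetical Python helper (not part of Conan; the function name is illustrative) that automates the same check programmatically:

```python
import subprocess


def uses_gcc4_compatible_abi(gcc="gcc"):
    # Returns True if this gcc was configured with
    # --with-default-libstdcxx-abi=gcc4-compatible,
    # i.e. _GLIBCXX_USE_CXX11_ABI defaults to 0.
    result = subprocess.run([gcc, "-v"], capture_output=True, text=True)
    # gcc prints its configure line to stderr
    return "--with-default-libstdcxx-abi=gcc4-compatible" in result.stderr
```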
Thanks a lot for the info @sourcedelica. I still think changing the definition is too risky for the value it brings (the benefit being just a clang-tidy warning, or am I missing something?).
I can only think about something like #5740 to improve this.
> The way the configuration setting was defined in Conan, it is not clear what the plan for sunsetting it is. GCC5 was released 5 years ago, and when it was released the default was to use the new ABI. Perhaps this should have been followed here: users should have had to explicitly select the old ABI, and by default follow what the compiler does.
This is not very accurate, and is a big part of the problem. Specifically this: ``GCC5 was released 5 years ago, and when it was released the default was to use the new ABI.``
GCC5 and even more modern versions, when used in a somewhat older distro, still default to ``libstdc++``, not to ``libstdc++11``. This is very unfortunate, but it is the reality, and there is nothing Conan can do about it other than model the choice of ``libcxx``: https://blog.conan.io/2016/03/22/From-CMake-syntax-to-libstdc++-ABI-incompatibiliy-migrations-are-always-hard.html
> With the current semantics of compiler.libcxx, I am forced to make a choice, and it is possible I'll make the wrong one.
Yes, as explained above: because it is not a constant default and really changes from platform to platform and distro to distro, the only way is to make it a choice.
Summary: I agree with @danimtb, this is definitely too much of a risk. Conan not passing _GLIBCXX_USE_CXX11_ABI would cause many ABI incompatibilities for many compilers, and at the moment it is almost impossible to know which compilers in which distros have this as the default or not. As the request is related to a clang-tidy warning, it doesn't seem worth the risk (almost a certainty) of breaking hundreds of Conan users.
Hi @danimtb - I’m not looking for a change. I agree with the current approach. I was just providing a datapoint - that the old ABI is still in use and probably will be indefinitely on Red Hat Enterprise.
|
[
{
"body": "<!--\r\n Please don't forget to update the issue title.\r\n Include all applicable information to help us reproduce your problem.\r\n\r\n To help us debug your issue please explain:\r\n-->\r\n\r\n### Environment Details (include every applicable attribute)\r\n * Operating System+version: Ubuntu 18.04\r\n * Compiler+version: gcc9\r\n * Conan version: 1.25.1\r\n * Python version: 3.6.9\r\n\r\n### Steps to reproduce (Include if Applicable)\r\n\r\nThe default for _GLIBCXX_USE_CXX11_ABI is 1 for several years now. Can we invert the logic of compiler.libcxx to use libstdc++ancient for the older version and use libstdc++ by default, or have libstdc++ _NOT_ emit _GLIBCXX_USE_CXX11_ABI ?\r\n\r\nThe reason I'm asking this is that we have a rule for formatting global macros in clang-tidy which is tripped by this macro, and there is no way to suppress it in tidy, since it does not come from a source file, but it is injected on the command line.\r\n\r\nI suppose this is a bug in tidy, but there is some redundancy in Conan / CMake injecting this variable explicitly when it is not needed for quite some time now.\r\n",
"number": 7070,
"title": "[bug] Invert the logic of _GLIBCXX_USE_CXX11_ABI ?"
}
] |
beaca4195d700c944f0f7b098e267a509f78acb4
|
{
"head_commit": "d5d238e22c9a255f8b0320cc0a2fc1e562dac2c9",
"head_commit_message": "fix adjust path",
"patch_to_review": "diff --git a/.gitignore b/.gitignore\nindex 2e4f6277c3e..4a27bb1ea69 100644\n--- a/.gitignore\n+++ b/.gitignore\n@@ -7,7 +7,6 @@ __pycache__/\n \n # Distribution / packaging\n .Python\n-env/\n build/\n develop-eggs/\n dist/\ndiff --git a/conan/tools/_compilers.py b/conan/tools/_compilers.py\nindex b1eaf5ec587..199eb456eb0 100644\n--- a/conan/tools/_compilers.py\n+++ b/conan/tools/_compilers.py\n@@ -39,3 +39,58 @@ def architecture_flag(settings):\n \"e2k-v6\": \"-march=elbrus-v6\",\n \"e2k-v7\": \"-march=elbrus-v7\"}.get(str(arch), \"\")\n return \"\"\n+\n+\n+def build_type_flags(settings):\n+ \"\"\"\n+ returns flags specific to the build type (Debug, Release, etc.)\n+ (-s, -g, /Zi, etc.)\n+ \"\"\"\n+ compiler = settings.get_safe(\"compiler.base\") or settings.get_safe(\"compiler\")\n+\n+ build_type = settings.get_safe(\"build_type\")\n+ vs_toolset = settings.get_safe(\"compiler.toolset\")\n+ if not compiler or not build_type:\n+ return \"\"\n+\n+ # https://github.com/Kitware/CMake/blob/d7af8a34b67026feaee558433db3a835d6007e06/\n+ # Modules/Platform/Windows-MSVC.cmake\n+ if str(compiler) == 'Visual Studio':\n+ if vs_toolset and \"clang\" in str(vs_toolset):\n+ flags = {\"Debug\": [\"-gline-tables-only\", \"-fno-inline\", \"-O0\"],\n+ \"Release\": [\"-O2\"],\n+ \"RelWithDebInfo\": [\"-gline-tables-only\", \"-O2\", \"-fno-inline\"],\n+ \"MinSizeRel\": []\n+ }.get(build_type, [\"-O2\", \"-Ob2\"])\n+ else:\n+ flags = {\"Debug\": [\"-Zi\", \"-Ob0\", \"-Od\"],\n+ \"Release\": [\"-O2\", \"-Ob2\"],\n+ \"RelWithDebInfo\": [\"-Zi\", \"-O2\", \"-Ob1\"],\n+ \"MinSizeRel\": [\"-O1\", \"-Ob1\"],\n+ }.get(build_type, [])\n+ return flags\n+ else:\n+ # https://github.com/Kitware/CMake/blob/f3bbb37b253a1f4a26809d6f132b3996aa2e16fc/\n+ # Modules/Compiler/GNU.cmake\n+ # clang include the gnu (overriding some things, but not build type) and apple clang\n+ # overrides clang but it doesn't touch clang either\n+ if str(compiler) in [\"clang\", \"gcc\", \"apple-clang\", \"qcc\", \"mcst-lcc\"]:\n+ # FIXME: It is not clear that the \"-s\" is something related with the build type\n+ # cmake is not adjusting it\n+ # -s: Remove all symbol table and relocation information from the executable.\n+ flags = {\"Debug\": [\"-g\"],\n+ \"Release\": [\"-O3\", \"-s\"] if str(compiler) == \"gcc\" else [\"-O3\"],\n+ \"RelWithDebInfo\": [\"-O2\", \"-g\"],\n+ \"MinSizeRel\": [\"-Os\"],\n+ }.get(build_type, [])\n+ return flags\n+ elif str(compiler) == \"sun-cc\":\n+ # https://github.com/Kitware/CMake/blob/f3bbb37b253a1f4a26809d6f132b3996aa2e16fc/\n+ # Modules/Compiler/SunPro-CXX.cmake\n+ flags = {\"Debug\": [\"-g\"],\n+ \"Release\": [\"-xO3\"],\n+ \"RelWithDebInfo\": [\"-xO2\", \"-g\"],\n+ \"MinSizeRel\": [\"-xO2\", \"-xspace\"],\n+ }.get(build_type, [])\n+ return flags\n+ return \"\"\ndiff --git a/conan/tools/cmake/utils.py b/conan/tools/cmake/utils.py\nindex 7f572daf165..76513909c88 100644\n--- a/conan/tools/cmake/utils.py\n+++ b/conan/tools/cmake/utils.py\n@@ -1,7 +1,6 @@\n import os\n \n from conans.errors import ConanException\n-from conans.util.log import logger\n \n \n def is_multi_configuration(generator):\n@@ -33,22 +32,7 @@ def get_generator(conanfile):\n return base\n \n compiler_base = conanfile.settings.get_safe(\"compiler.base\")\n- arch = conanfile.settings.get_safe(\"arch\")\n-\n compiler_base_version = conanfile.settings.get_safe(\"compiler.base.version\")\n- if hasattr(conanfile, 'settings_build'):\n- os_build = conanfile.settings_build.get_safe('os')\n- else:\n- os_build = 
conanfile.settings.get_safe('os_build')\n- if os_build is None: # Assume is the same specified in host settings, not cross-building\n- os_build = conanfile.settings.get_safe(\"os\")\n-\n- if not compiler or not compiler_version or not arch:\n- if os_build == \"Windows\":\n- logger.warning(\"CMake generator could not be deduced from settings\")\n- return None\n- return \"Unix Makefiles\"\n-\n if compiler == \"Visual Studio\" or compiler_base == \"Visual Studio\":\n version = compiler_base_version or compiler_version\n major_version = version.split('.', 1)[0]\n@@ -59,12 +43,22 @@ def get_generator(conanfile):\n '12': '12 2013',\n '14': '14 2015',\n '15': '15 2017',\n- '16': '16 2019'}.get(major_version, \"UnknownVersion %s\" % version)\n+ '16': '16 2019'}.get(major_version)\n base = \"Visual Studio %s\" % _visuals\n return base\n \n- # The generator depends on the build machine, not the target\n- if os_build == \"Windows\" and compiler != \"qcc\":\n- return \"MinGW Makefiles\" # it is valid only under Windows\n+ if hasattr(conanfile, 'settings_build'):\n+ os_build = conanfile.settings_build.get_safe('os')\n+ else:\n+ os_build = conanfile.settings.get_safe('os_build')\n+ if os_build is None: # Assume is the same specified in host settings, not cross-building\n+ os_build = conanfile.settings.get_safe(\"os\")\n+\n+ if os_build == \"Windows\":\n+ sub = conanfile.settings.get_safe(\"os.subsystem\")\n+ if sub in (\"cygwin\", \"msys2\", \"msys\") or compiler == \"qcc\":\n+ return \"Unix Makefiles\"\n+ else:\n+ return \"MinGW Makefiles\"\n \n return \"Unix Makefiles\"\ndiff --git a/conan/tools/env/__init__.py b/conan/tools/env/__init__.py\nnew file mode 100644\nindex 00000000000..7656f772c0d\n--- /dev/null\n+++ b/conan/tools/env/__init__.py\n@@ -0,0 +1 @@\n+from conan.tools.env.environment import Environment\ndiff --git a/conan/tools/env/environment.py b/conan/tools/env/environment.py\nnew file mode 100644\nindex 00000000000..4ee2952c062\n--- /dev/null\n+++ b/conan/tools/env/environment.py\n@@ -0,0 +1,138 @@\n+import textwrap\n+from collections import OrderedDict\n+\n+from conans.util.files import save\n+\n+PLACEHOLDER = \"$CONANVARPLACEHOLDER%\"\n+\n+\n+class EnvironmentItem:\n+\n+ def __init__(self, value=None, separator=None):\n+ self._value = value\n+ self._separator = separator\n+\n+ def value(self, placeholder):\n+ value = [v if v != PLACEHOLDER else placeholder for v in self._value]\n+ value = self._separator.join(value) if value else \"\"\n+ return value\n+\n+ def copy(self):\n+ return EnvironmentItem(self._value[:], self._separator)\n+\n+ def define(self, value, separator=\" \"):\n+ self._value = value if isinstance(value, list) else [value]\n+ self._separator = separator\n+\n+ def append(self, value, separator=\" \"):\n+ value = value if isinstance(value, list) else [value]\n+ self._value = [PLACEHOLDER] + value\n+ self._separator = separator\n+\n+ def prepend(self, value, separator=\" \"):\n+ value = value if isinstance(value, list) else [value]\n+ self._value = value + [PLACEHOLDER]\n+ self._separator = separator\n+\n+ def clean(self):\n+ self._value = []\n+ self._separator = None\n+\n+ def update(self, other):\n+ \"\"\"\n+ :type other: EnvironmentItem\n+ \"\"\"\n+ result = other._value\n+ try:\n+ index = result.index(PLACEHOLDER)\n+ result[index:index+1] = self._value\n+ assert self._separator == other._separator\n+ except ValueError:\n+ pass\n+ self._value = result\n+ self._separator = other._separator\n+\n+\n+class Environment:\n+ def __init__(self):\n+ # It being ordered allows for 
Windows case-insensitive composition\n+ self._values = OrderedDict()\n+\n+ def __getitem__(self, name):\n+ return self._values.setdefault(name, EnvironmentItem())\n+\n+ def save_bat(self, filename, generate_deactivate=True):\n+ deactivate = textwrap.dedent(\"\"\"\\\n+ echo Capturing current environment in deactivate_{filename}\n+ setlocal\n+ echo @echo off > \"deactivate_{filename}\"\n+ echo echo Restoring environment >> \"deactivate_{filename}\"\n+ for %%v in ({vars}) do (\n+ set foundenvvar=\n+ for /f \"delims== tokens=1,2\" %%a in ('set') do (\n+ if \"%%a\" == \"%%v\" (\n+ echo set %%a=%%b>> \"deactivate_{filename}\"\n+ set foundenvvar=1\n+ )\n+ )\n+ if not defined foundenvvar (\n+ echo set %%v=>> \"deactivate_{filename}\"\n+ )\n+ )\n+ endlocal\n+\n+ \"\"\").format(filename=filename, vars=\" \".join(self._values.keys()))\n+ capture = textwrap.dedent(\"\"\"\\\n+ @echo off\n+ {deactivate}\n+ echo Configuring environment variables\n+ \"\"\").format(deactivate=deactivate if generate_deactivate else \"\")\n+ result = [capture]\n+ for k, v in self._values.items():\n+ value = v.value(\"%{}%\".format(k))\n+ result.append('set {}={}'.format(k, value))\n+\n+ content = \"\\n\".join(result)\n+ save(filename, content)\n+\n+ def save_sh(self, filename):\n+ capture = textwrap.dedent(\"\"\"\\\n+ echo Capturing current environment in deactivate_{filename}\n+ echo echo Restoring variables >> deactivate_{filename}\n+ for v in {vars}\n+ do\n+ value=${{!v}}\n+ if [ -n \"$value\" ]\n+ then\n+ echo export \"$v=$value\" >> deactivate_{filename}\n+ else\n+ echo unset $v >> deactivate_{filename}\n+ fi\n+ done\n+ echo Configuring environment variables\n+ \"\"\".format(filename=filename, vars=\" \".join(self._values.keys())))\n+ result = [capture]\n+ for k, v in self._values.items():\n+ value = v.value(\"${}\".format(k))\n+ if value:\n+ result.append('export {}=\"{}\"'.format(k, value))\n+ else:\n+ result.append('unset {}'.format(k))\n+\n+ content = \"\\n\".join(result)\n+ save(filename, content)\n+\n+ def compose(self, other):\n+ \"\"\"\n+ :type other: Environment\n+ \"\"\"\n+ result = Environment()\n+ result._values = OrderedDict([(k, v.copy()) for k, v in self._values.items()])\n+ for k, v in other._values.items():\n+ v = v.copy()\n+ existing = result._values.get(k)\n+ if existing is None:\n+ result._values[k] = v\n+ else:\n+ existing.update(v)\n+ return result\ndiff --git a/conan/tools/gnu/__init__.py b/conan/tools/gnu/__init__.py\nindex 32860e46378..73fe02bb986 100644\n--- a/conan/tools/gnu/__init__.py\n+++ b/conan/tools/gnu/__init__.py\n@@ -1 +1,4 @@\n from .make import MakeToolchain\n+from conan.tools.gnu.autotoolstoolchain import AutotoolsToolchain\n+from conan.tools.gnu.autotoolsdeps import AutotoolsDeps\n+from conan.tools.gnu.autotools import Autotools\ndiff --git a/conan/tools/gnu/autotools.py b/conan/tools/gnu/autotools.py\nnew file mode 100644\nindex 00000000000..495658790eb\n--- /dev/null\n+++ b/conan/tools/gnu/autotools.py\n@@ -0,0 +1,232 @@\n+import copy\n+import os\n+import platform\n+\n+\n+from conans.client import tools\n+from conans.client.tools.oss import OSInfo, cross_building, \\\n+ detected_architecture, detected_os, get_gnu_triplet, get_target_os_arch, get_build_os_arch\n+from conans.errors import ConanException\n+from conans.model.build_info import DEFAULT_BIN, DEFAULT_INCLUDE, DEFAULT_LIB, DEFAULT_SHARE\n+from conans.util.files import get_abs_path\n+\n+\n+class Autotools(object):\n+\n+ def __init__(self, conanfile, win_bash=False, include_rpath_flags=False):\n+ \"\"\"\n+ FIXME: 
include_rpath_flags CONAN 2.0 to default True? Could break many packages in center\n+ \"\"\"\n+ self._conanfile = conanfile\n+ self._win_bash = win_bash\n+ self._include_rpath_flags = include_rpath_flags\n+ self.subsystem = OSInfo().detect_windows_subsystem() if self._win_bash else None\n+ self._os = conanfile.settings.get_safe(\"os\")\n+ self._os_version = conanfile.settings.get_safe(\"os.version\")\n+ self._os_sdk = conanfile.settings.get_safe(\"os.sdk\")\n+ self._os_subsystem = conanfile.settings.get_safe(\"os.subsystem\")\n+ self._arch = conanfile.settings.get_safe(\"arch\")\n+ self._os_target, self._arch_target = get_target_os_arch(conanfile)\n+ self._build_type = conanfile.settings.get_safe(\"build_type\")\n+ self._compiler = conanfile.settings.get_safe(\"compiler\")\n+ self._compiler_version = conanfile.settings.get_safe(\"compiler.version\")\n+\n+ # Precalculate build, host, target triplets\n+ self.build, self.host, self.target = self._get_host_build_target_flags()\n+\n+ def _get_host_build_target_flags(self):\n+ \"\"\"Based on google search for build/host triplets, it could need a lot\n+ and complex verification\"\"\"\n+\n+ if self._os_target and self._arch_target:\n+ try:\n+ target = get_gnu_triplet(self._os_target, self._arch_target, self._compiler)\n+ except ConanException as exc:\n+ self._conanfile.output.warn(str(exc))\n+ target = None\n+ else:\n+ target = None\n+\n+ if hasattr(self._conanfile, 'settings_build'):\n+ os_build, arch_build = get_build_os_arch(self._conanfile)\n+ else:\n+ # FIXME: Why not use 'os_build' and 'arch_build' from conanfile.settings?\n+ os_build = detected_os() or platform.system()\n+ arch_build = detected_architecture() or platform.machine()\n+\n+ if os_build is None or arch_build is None or self._arch is None or self._os is None:\n+ return False, False, target\n+\n+ if not cross_building(self._conanfile, os_build, arch_build):\n+ return False, False, target\n+\n+ try:\n+ build = get_gnu_triplet(os_build, arch_build, self._compiler)\n+ except ConanException as exc:\n+ self._conanfile.output.warn(str(exc))\n+ build = None\n+ try:\n+ host = get_gnu_triplet(self._os, self._arch, self._compiler)\n+ except ConanException as exc:\n+ self._conanfile.output.warn(str(exc))\n+ host = None\n+ return build, host, target\n+\n+ def configure(self, configure_dir=None, args=None, build=None, host=None, target=None,\n+ pkg_config_paths=None, vars=None, use_default_install_dirs=True):\n+ \"\"\"\n+ http://jingfenghanmax.blogspot.com.es/2010/09/configure-with-host-target-and-build.html\n+ https://gcc.gnu.org/onlinedocs/gccint/Configure-Terms.html\n+ :param use_default_install_dirs: Use or not the defaulted installation dirs\n+\n+ \"\"\"\n+ if not self._conanfile.should_configure:\n+ return\n+ if configure_dir:\n+ configure_dir = configure_dir.rstrip(\"/\")\n+ else:\n+ configure_dir = \".\"\n+ \"\"\"\n+ triplet_args = []\n+\n+ if build is not False: # Skipped by user\n+ if build or self.build: # User specified value or automatic\n+ triplet_args.append(\"--build=%s\" % (build or self.build))\n+\n+ if host is not False: # Skipped by user\n+ if host or self.host: # User specified value or automatic\n+ triplet_args.append(\"--host=%s\" % (host or self.host))\n+\n+ if target is not False: # Skipped by user\n+ if target or self.target: # User specified value or automatic\n+ triplet_args.append(\"--target=%s\" % (target or self.target))\n+\n+ if pkg_config_paths:\n+ pkg_env = {\"PKG_CONFIG_PATH\":\n+ [os.pathsep.join(get_abs_path(f, self._conanfile.install_folder)\n+ for f in 
pkg_config_paths)]}\n+ else:\n+ # If we are using pkg_config generator automate the pcs location, otherwise it could\n+ # read wrong files\n+ pkg_env = {\"PKG_CONFIG_PATH\": [self._conanfile.install_folder]} \\\n+ if \"pkg_config\" in self._conanfile.generators else None\n+ \"\"\"\n+ configure_dir = configure_dir\n+\n+ \"\"\"if self._conanfile.package_folder is not None:\n+ if not args:\n+ args = [\"--prefix=%s\" % self._conanfile.package_folder.replace(\"\\\\\", \"/\")]\n+ elif not self._is_flag_in_args(\"prefix\", args):\n+ args.append(\"--prefix=%s\" % self._conanfile.package_folder.replace(\"\\\\\", \"/\"))\n+\n+ all_flags = [\"bindir\", \"sbindir\", \"libexecdir\", \"libdir\", \"includedir\", \"oldincludedir\",\n+ \"datarootdir\"]\n+ help_output = self._configure_help_output(configure_dir)\n+ available_flags = [flag for flag in all_flags if \"--%s\" % flag in help_output]\n+\n+ if use_default_install_dirs:\n+ for varname in [\"bindir\", \"sbindir\", \"libexecdir\"]:\n+ if self._valid_configure_flag(varname, args, available_flags):\n+ args.append(\"--%s=${prefix}/%s\" % (varname, DEFAULT_BIN))\n+ if self._valid_configure_flag(\"libdir\", args, available_flags):\n+ args.append(\"--libdir=${prefix}/%s\" % DEFAULT_LIB)\n+ for varname in [\"includedir\", \"oldincludedir\"]:\n+ if self._valid_configure_flag(varname, args, available_flags):\n+ args.append(\"--%s=${prefix}/%s\" % (varname, DEFAULT_INCLUDE))\n+ if self._valid_configure_flag(\"datarootdir\", args, available_flags):\n+ args.append(\"--datarootdir=${prefix}/%s\" % DEFAULT_SHARE)\n+\n+ with environment_append(pkg_env):\n+ with environment_append(vars or self.vars):\n+ command = '%s/configure %s %s' % (configure_dir, args_to_string(args),\n+ \" \".join(triplet_args))\n+ self._conanfile.output.info(\"Calling:\\n > %s\" % command)\n+ self._conanfile.run(command, win_bash=self._win_bash, subsystem=self.subsystem)\"\"\"\n+\n+ cmd = \"bash -c 'source autotoolsdeps.sh && source autotools.sh && %s/configure'\" % configure_dir\n+ self._conanfile.output.info(\"Calling:\\n > %s\" % cmd)\n+ self._conanfile.run(cmd, win_bash=self._win_bash, subsystem=self.subsystem)\n+\n+ def _configure_help_output(self, configure_path):\n+ from six import StringIO # Python 2 and 3 compatible\n+ mybuf = StringIO()\n+ try:\n+ self._conanfile.run(\"%s/configure --help\" % configure_path, win_bash=self._win_bash,\n+ output=mybuf)\n+ except ConanException as e:\n+ self._conanfile.output.warn(\"Error running `configure --help`: %s\" % e)\n+ return \"\"\n+ return mybuf.getvalue()\n+\n+ @staticmethod\n+ def _valid_configure_flag(varname, args, available_flags):\n+ return not AutoToolsBuildEnvironment._is_flag_in_args(varname, args) and \\\n+ varname in available_flags\n+\n+ @staticmethod\n+ def _is_flag_in_args(varname, args):\n+ flag = \"--%s=\" % varname\n+ return any([flag in arg for arg in args])\n+\n+ def make(self, target=None):\n+ \"\"\"if not self._build_type:\n+ raise ConanException(\"build_type setting should be defined.\")\n+ with environment_append(vars or self.vars):\n+ str_args = args_to_string(args)\n+ cpu_count_option = ((\"-j%s\" % cpu_count(output=self._conanfile.output))\n+ if (\"-j\" not in str_args and \"nmake\" not in make_program.lower())\n+ else None)\n+ self._conanfile.run(\"%s\" % join_arguments([make_program, target, str_args,\n+ cpu_count_option]),\n+ win_bash=self._win_bash, subsystem=self.subsystem)\"\"\"\n+\n+ make_program = self._conanfile.conf[\"tools.gnu\"].make_program\n+ if make_program is None:\n+ make_program = \"mingw32-make\" 
if platform.system() == \"Windows\" else \"make\"\n+ if platform.system() == \"Windows\":\n+ cmd = \"autotoolsdeps.bat && autotools.bat && {}\".format(make_program)\n+ else:\n+ cmd = \"bash -c 'source autotoolsdeps.sh \"\\\n+ \"&& source autotools.sh && {}'\".format(make_program)\n+ self._conanfile.run(cmd, win_bash=self._win_bash, subsystem=self.subsystem)\n+\n+ def install(self, args=\"\"):\n+ if not self._conanfile.should_install:\n+ return\n+ self.make(target=\"install\")\n+\n+ def _get_vars(self):\n+ def append(*args):\n+ ret = []\n+ for arg in args:\n+ if arg:\n+ if isinstance(arg, list):\n+ ret.extend(arg)\n+ else:\n+ ret.append(arg)\n+ return ret\n+\n+ tmp_compilation_flags = copy.copy(self.flags)\n+\n+ if tools.is_apple_os(self._os):\n+ concat = \" \".join(tmp_compilation_flags)\n+ if os.environ.get(\"CFLAGS\", None):\n+ concat += \" \" + os.environ.get(\"CFLAGS\", None)\n+ if os.environ.get(\"CXXFLAGS\", None):\n+ concat += \" \" + os.environ.get(\"CXXFLAGS\", None)\n+ if self._os_version and \"-version-min\" not in concat and \"-target\" not in concat:\n+ tmp_compilation_flags.append(tools.apple_deployment_target_flag(self._os,\n+ self._os_version,\n+ self._os_sdk,\n+ self._os_subsystem,\n+ self._arch))\n+ if \"-isysroot\" not in concat and platform.system() == \"Darwin\":\n+ tmp_compilation_flags.extend([\"-isysroot\",\n+ tools.XCRun(self._conanfile.settings).sdk_path])\n+ if \"-arch\" not in concat and self._arch:\n+ tmp_compilation_flags.extend([\"-arch\", tools.to_apple_arch(self._arch)])\n+\n+ cxx_flags = append(tmp_compilation_flags, self.cxx_flags, self.cppstd_flag)\n+ c_flags = tmp_compilation_flags\n+\n+ return ld_flags, cpp_flags, libs, cxx_flags, c_flags\ndiff --git a/conan/tools/gnu/autotoolsdeps.py b/conan/tools/gnu/autotoolsdeps.py\nnew file mode 100644\nindex 00000000000..0b66fa584e6\n--- /dev/null\n+++ b/conan/tools/gnu/autotoolsdeps.py\n@@ -0,0 +1,62 @@\n+from conan.tools.env import Environment\n+\n+\n+class AutotoolsDeps(object):\n+ def __init__(self, conanfile):\n+ # Set the generic objects before mapping to env vars to let the user\n+ # alter some value\n+ self._conanfile = conanfile\n+ deps_cpp_info = conanfile.deps_cpp_info\n+ self.libs = list(deps_cpp_info.libs)\n+ self.libs.extend(list(deps_cpp_info.system_libs))\n+ self.include_paths = list(deps_cpp_info.include_paths)\n+ self.library_paths = list(deps_cpp_info.lib_paths)\n+ self.defines = list(deps_cpp_info.defines)\n+ self.cflags = list(deps_cpp_info.cflags)\n+ self.cxx_flags = list(deps_cpp_info.cxxflags)\n+ self.sharedlinkflags = list(deps_cpp_info.sharedlinkflags)\n+ self.exelinkflags = list(deps_cpp_info.exelinkflags)\n+ self.frameworks = list(deps_cpp_info.frameworks)\n+ self.frameworks_paths = list(deps_cpp_info.framework_paths)\n+ self.sysroot = deps_cpp_info.sysroot\n+\n+ def generate(self):\n+ # cpp_flags\n+ cpp_flags = []\n+ include_paths = ['-I\"%s\"' % p for p in self.include_paths]\n+ cpp_flags.extend(include_paths)\n+ cpp_flags.extend([\"-D%s\" % define for define in self.defines])\n+\n+ # Libs\n+ libs = [\"-l%s\" % library for library in self.libs]\n+\n+ # Ldflags\n+ # TODO: Discuss, should the helper filter frameworks based on compiler?\n+ frameworks = [\"-framework %s\" % framework for framework in self.frameworks]\n+ frameworks_paths = [\"-F %s\" % framework_path for framework_path in self.frameworks_paths]\n+ ldflags = self.sharedlinkflags\n+ ldflags.extend(self.exelinkflags)\n+ ldflags.extend(frameworks)\n+ ldflags.extend(frameworks_paths)\n+ lib_paths = ['-L\"%s\"' % p for p in 
self.library_paths]\n+ ldflags.extend(lib_paths)\n+\n+ # cflags\n+ cflags = []\n+ cxxflags = []\n+\n+ if self.sysroot:\n+ srf = '--sysroot={}'.format(self.sysroot)\n+ cflags.append(srf)\n+ cxxflags.append(srf)\n+ ldflags.append(srf)\n+\n+ env = Environment()\n+ env[\"CPPFLAGS\"].append(cpp_flags)\n+ env[\"LIBS\"].append(libs)\n+ env[\"LDFLAGS\"].append(ldflags)\n+ env[\"CXXFLAGS\"].append(cxxflags)\n+ env[\"CFLAGS\"].append(cflags)\n+\n+ env.save_sh(\"autotoolsdeps.sh\")\n+ env.save_bat(\"autotoolsdeps.bat\")\ndiff --git a/conan/tools/gnu/autotoolstoolchain.py b/conan/tools/gnu/autotoolstoolchain.py\nnew file mode 100644\nindex 00000000000..60d9847a0d5\n--- /dev/null\n+++ b/conan/tools/gnu/autotoolstoolchain.py\n@@ -0,0 +1,111 @@\n+from conan.tools._compilers import architecture_flag, build_type_flags\n+from conan.tools.env import Environment\n+# FIXME: need to refactor this import and bring to conan.tools\n+from conans.client.build.cppstd_flags import cppstd_flag_new\n+\n+\n+class AutotoolsToolchain(object):\n+ def __init__(self, conanfile):\n+ self._conanfile = conanfile\n+ build_type = self._conanfile.settings.get_safe(\"build_type\")\n+\n+ # TODO: compiler.runtime for Visual studio?\n+ # defines\n+ self.ndebug = None\n+ if build_type in ['Release', 'RelWithDebInfo', 'MinSizeRel']:\n+ self.ndebug = \"NDEBUG\"\n+ self.gcc_cxx11_abi = self._cxx11_abi_define()\n+ self.defines = []\n+\n+ # cxxflags, cflags\n+ self.cxxflags = []\n+ self.cflags = []\n+ self.ldflags = []\n+ self.libcxx = self._libcxx()\n+ self.fpic = self._conanfile.options.get_safe(\"fPIC\")\n+\n+ # FIXME: This needs to be imported here into conan.tools\n+ self.cppstd = cppstd_flag_new(self._conanfile.settings)\n+ self.arch_flag = architecture_flag(self._conanfile.settings)\n+ # TODO: This is also covering compilers like Visual Studio, necessary to test it (&remove?)\n+ self.build_type_flags = build_type_flags(self._conanfile.settings)\n+\n+ def _rpaths_link(self):\n+ # TODO: Not used yet\n+ lib_paths = self._conanfile.deps_cpp_info.lib_paths\n+ compiler = _base_compiler(settings)\n+ if compiler in GCC_LIKE:\n+ return ['-Wl,-rpath,\"%s\"' % (x.replace(\"\\\\\", \"/\"))\n+ for x in lib_paths if x]\n+\n+ def _cxx11_abi_define(self):\n+ # https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html\n+ # The default is libstdc++11, only specify the contrary '_GLIBCXX_USE_CXX11_ABI=0'\n+ settings = self._conanfile.settings\n+ libcxx = settings.get_safe(\"compiler.libcxx\")\n+ if not libcxx:\n+ return\n+\n+ compiler = settings.get_safe(\"compiler.base\") or settings.get_safe(\"compiler\")\n+ if compiler == \"gcc\":\n+ if libcxx == 'libstdc++':\n+ return '_GLIBCXX_USE_CXX11_ABI=0'\n+\n+ def _libcxx(self):\n+ settings = self._conanfile.settings\n+ libcxx = settings.get_safe(\"compiler.libcxx\")\n+ if not libcxx:\n+ return\n+\n+ compiler = settings.get_safe(\"compiler.base\") or settings.get_safe(\"compiler\")\n+\n+ if compiler in ['clang', 'apple-clang']:\n+ if libcxx in ['libstdc++', 'libstdc++11']:\n+ return '-stdlib=libstdc++'\n+ elif libcxx == 'libc++':\n+ return '-stdlib=libc++'\n+ elif compiler == 'sun-cc':\n+ return ({\"libCstd\": \"-library=Cstd\",\n+ \"libstdcxx\": \"-library=stdcxx4\",\n+ \"libstlport\": \"-library=stlport4\",\n+ \"libstdc++\": \"-library=stdcpp\"}.get(libcxx))\n+ elif compiler == \"qcc\":\n+ return \"-Y _%s\" % str(libcxx)\n+\n+ def _environment(self):\n+ env = Environment()\n+ # defines\n+ if self.ndebug:\n+ self.defines.append(self.ndebug)\n+ if self.gcc_cxx11_abi:\n+ 
self.defines.append(self.gcc_cxx11_abi)\n+\n+ if self.libcxx:\n+ self.cxxflags.append(self.libcxx)\n+\n+ if self.cppstd:\n+ self.cxxflags.append(self.cppstd)\n+\n+ if self.arch_flag:\n+ self.cxxflags.append(self.arch_flag)\n+ self.cflags.append(self.arch_flag)\n+ self.ldflags.append(self.arch_flag)\n+\n+ if self.build_type_flags:\n+ self.cxxflags.extend(self.build_type_flags)\n+ self.cflags.extend(self.build_type_flags)\n+\n+ if self.fpic:\n+ self.cxxflags.append(\"-fPIC\")\n+ self.cflags.append(\"-fPIC\")\n+\n+ env[\"CPPFLAGS\"].append([\"-D{}\".format(d) for d in self.defines])\n+ env[\"CXXFLAGS\"].append(self.cxxflags)\n+ env[\"CFLAGS\"].append(self.cflags)\n+ env[\"LDFLAGS\"].append(self.ldflags)\n+ return env\n+\n+ def generate(self):\n+ env = self._environment()\n+ env.save_sh(\"autotools.sh\")\n+ env.save_bat(\"autotools.bat\")\ndiff --git a/conans/client/envvars/environment.py b/conans/client/envvars/environment.py\nindex 84f68f906a9..3665ae469b4 100644\n--- a/conans/client/envvars/environment.py\n+++ b/conans/client/envvars/environment.py\n@@ -174,7 +174,7 @@ def _files(env_vars, vars_with_spaces, flavor, activate_tpl, deactivate_tpl, ven\n activate_content = activate_tpl.render(environment_file=env_filepath,\n modified_vars=modified_vars, new_vars=new_vars,\n venv_name=venv_name)\n- deactivate_content = deactivate_tpl.render(modified_vars=modified_vars, new_vars=new_vars, \n+ deactivate_content = deactivate_tpl.render(modified_vars=modified_vars, new_vars=new_vars,\n venv_name=venv_name)\n \n environment_lines = [\"{}={}\".format(name, value) for name, value, _ in ret]\ndiff --git a/conans/client/generators/__init__.py b/conans/client/generators/__init__.py\nindex 3f0c96a6281..573d7afff27 100644\n--- a/conans/client/generators/__init__.py\n+++ b/conans/client/generators/__init__.py\n@@ -66,7 +66,8 @@ def __init__(self):\n \"deploy\": DeployGenerator,\n \"markdown\": MarkdownGenerator}\n self._new_generators = [\"CMakeToolchain\", \"CMakeDeps\", \"MakeToolchain\", \"MSBuildToolchain\",\n- \"MesonToolchain\", \"MSBuildDeps\", \"QbsToolchain\", \"msbuild\"]\n+ \"MesonToolchain\", \"MSBuildDeps\", \"QbsToolchain\", \"msbuild\",\n+ \"AutotoolsDeps\", \"AutotoolsToolchain\"]\n \n def add(self, name, generator_class, custom=False):\n if name not in self._generators or custom:\n@@ -95,6 +96,12 @@ def _new_generator(self, generator_name, output):\n elif generator_name == \"MakeToolchain\":\n from conan.tools.gnu import MakeToolchain\n return MakeToolchain\n+ elif generator_name == \"AutotoolsDeps\":\n+ from conan.tools.gnu import AutotoolsDeps\n+ return AutotoolsDeps\n+ elif generator_name == \"AutotoolsToolchain\":\n+ from conan.tools.gnu import AutotoolsToolchain\n+ return AutotoolsToolchain\n elif generator_name == \"MSBuildToolchain\":\n from conan.tools.microsoft import MSBuildToolchain\n return MSBuildToolchain\ndiff --git a/conans/test/assets/sources.py b/conans/test/assets/sources.py\nindex bd9934897b1..052e96b904f 100644\n--- a/conans/test/assets/sources.py\n+++ b/conans/test/assets/sources.py\n@@ -35,6 +35,11 @@\n std::cout << \" {{ msg or name }} __x86_64__ defined\\n\";\n #endif\n \n+ // Libstdc++\n+ #if defined _GLIBCXX_USE_CXX11_ABI\n+ std::cout << \" {{ msg or name }} _GLIBCXX_USE_CXX11_ABI \"<< _GLIBCXX_USE_CXX11_ABI << \"\\n\";\n+ #endif\n+\n // COMPILER VERSIONS\n #if _MSC_VER\n std::cout << \" {{ msg or name }} _MSC_VER\" << _MSC_VER<< \"\\n\";\ndiff --git a/conans/test/functional/toolchains/gnu/__init__.py b/conans/test/functional/toolchains/gnu/__init__.py\nnew file 
mode 100644\nindex 00000000000..e69de29bb2d\ndiff --git a/conans/test/functional/toolchains/gnu/test_autotools.py b/conans/test/functional/toolchains/gnu/test_autotools.py\nnew file mode 100644\nindex 00000000000..189a29f4a34\n--- /dev/null\n+++ b/conans/test/functional/toolchains/gnu/test_autotools.py\n@@ -0,0 +1,206 @@\n+import os\n+import platform\n+import textwrap\n+import time\n+\n+import pytest\n+\n+from conans.test.assets.autotools import gen_makefile_am, gen_configure_ac\n+from conans.test.assets.sources import gen_function_cpp\n+from conans.test.functional.utils import check_exe_run\n+from conans.test.utils.tools import TestClient\n+from conans.util.files import touch\n+\n+\[email protected](platform.system() != \"Linux\", reason=\"Requires Autotools\")\[email protected]_autotools()\n+def test_autotools():\n+ client = TestClient(path_with_spaces=False)\n+ client.run(\"new hello/0.1 --template=v2_cmake\")\n+ client.run(\"create .\")\n+\n+ main = gen_function_cpp(name=\"main\", includes=[\"hello\"], calls=[\"hello\"])\n+ makefile_am = gen_makefile_am(main=\"main\", main_srcs=\"main.cpp\")\n+ configure_ac = gen_configure_ac()\n+\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ from conan.tools.gnu import AutotoolsToolchain, Autotools, AutotoolsDeps\n+\n+ class TestConan(ConanFile):\n+ requires = \"hello/0.1\"\n+ settings = \"os\", \"compiler\", \"arch\", \"build_type\"\n+ exports_sources = \"configure.ac\", \"Makefile.am\", \"main.cpp\"\n+\n+ def generate(self):\n+ deps = AutotoolsDeps(self)\n+ deps.generate()\n+ tc = AutotoolsToolchain(self)\n+ tc.generate()\n+\n+ def build(self):\n+ self.run(\"aclocal\")\n+ self.run(\"autoconf\")\n+ self.run(\"automake --add-missing --foreign\")\n+ autotools = Autotools(self)\n+ autotools.configure()\n+ autotools.make()\n+ autotools.install()\n+ \"\"\")\n+\n+ client.save({\"conanfile.py\": conanfile,\n+ \"configure.ac\": configure_ac,\n+ \"Makefile.am\": makefile_am,\n+ \"main.cpp\": main}, clean_first=True)\n+ client.run(\"install .\")\n+ client.run(\"build .\")\n+ client.run_command(\"./main\")\n+ check_exe_run(client.out, \"main\", \"gcc\", None, \"Release\", \"x86_64\", None, cxx11_abi=0)\n+ assert \"hello/0.1: Hello World Release!\" in client.out\n+\n+\n+def build_windows_subsystem(profile, make_program):\n+ \"\"\" The AutotoolsDeps can be used also in pure Makefiles, if the makefiles follow\n+ the Autotools conventions\n+ \"\"\"\n+ # FIXME: cygwin in CI (my local machine works) seems broken for path with spaces\n+ client = TestClient(path_with_spaces=False)\n+ client.run(\"new hello/0.1 --template=v2_cmake\")\n+ # TODO: Test Windows subsystems in CMake, at least msys is broken\n+ os.rename(os.path.join(client.current_folder, \"test_package\"),\n+ os.path.join(client.current_folder, \"test_package2\"))\n+ client.save({\"profile\": profile})\n+ client.run(\"create . 
--profile=profile\")\n+ print(client.out)\n+\n+ main = gen_function_cpp(name=\"main\", includes=[\"hello\"], calls=[\"hello\"])\n+ makefile = textwrap.dedent(\"\"\"\\\n+ app: main.o\n+ \t$(CXX) $(CFLAGS) $(LDFLAGS) -o app main.o $(LIBS)\n+\n+ main.o: main.cpp\n+ \t$(CXX) $(CFLAGS) $(CXXFLAGS) $(CPPFLAGS) -c -o main.o main.cpp\n+ \"\"\")\n+\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ from conan.tools.gnu import AutotoolsToolchain, Autotools, AutotoolsDeps\n+\n+ class TestConan(ConanFile):\n+ requires = \"hello/0.1\"\n+ settings = \"os\", \"compiler\", \"arch\", \"build_type\"\n+ exports_sources = \"Makefile\"\n+\n+ def generate(self):\n+ deps = AutotoolsDeps(self)\n+ deps.generate()\n+ tc = AutotoolsToolchain(self)\n+ tc.generate()\n+\n+ def build(self):\n+ autotools = Autotools(self)\n+ autotools.make()\n+ \"\"\")\n+ client.save({\"main.cpp\": main,\n+ \"Makefile\": makefile,\n+ \"conanfile.py\": conanfile,\n+ \"profile\": profile}, clean_first=True)\n+\n+ client.run(\"install . --profile=profile\")\n+ client.run_command(\"autotoolsdeps.bat && autotools.bat && {}\".format(make_program))\n+ print(client.out)\n+ client.run_command(\"app\")\n+ # TODO: fill compiler version when ready\n+ check_exe_run(client.out, \"main\", \"gcc\", None, \"Release\", \"x86_64\", None)\n+ assert \"hello/0.1: Hello World Release!\" in client.out\n+\n+ client.save({\"main.cpp\": gen_function_cpp(name=\"main\", msg=\"main2\",\n+ includes=[\"hello\"], calls=[\"hello\"])})\n+ # Make sure it is newer\n+ t = time.time() + 1\n+ touch(os.path.join(client.current_folder, \"main.cpp\"), (t, t))\n+\n+ client.run(\"build .\")\n+ print(client.out)\n+ client.run_command(\"app\")\n+ # TODO: fill compiler version when ready\n+ check_exe_run(client.out, \"main2\", \"gcc\", None, \"Release\", \"x86_64\", None, cxx11_abi=0)\n+ assert \"hello/0.1: Hello World Release!\" in client.out\n+ return client.out\n+\n+\[email protected]_cygwin\[email protected](platform.system() != \"Windows\", reason=\"Needs windows\")\n+def test_autotoolsdeps_cygwin():\n+ gcc = textwrap.dedent(\"\"\"\n+ [settings]\n+ os=Windows\n+ os.subsystem=cygwin\n+ compiler=gcc\n+ compiler.version=4.9\n+ compiler.libcxx=libstdc++\n+ arch=x86_64\n+ build_type=Release\n+ \"\"\")\n+ out = build_windows_subsystem(gcc, make_program=\"make\")\n+ print(out)\n+ assert \"__MSYS__\" not in out\n+ assert \"MINGW\" not in out\n+ assert \"main2 __CYGWIN__1\" in out\n+\n+\[email protected]_mingw\[email protected](platform.system() != \"Windows\", reason=\"Needs windows\")\n+def test_autotoolsdeps_mingw():\n+ gcc = textwrap.dedent(\"\"\"\n+ [settings]\n+ os=Windows\n+ compiler=gcc\n+ compiler.version=4.9\n+ compiler.libcxx=libstdc++\n+ arch=x86_64\n+ build_type=Release\n+ \"\"\")\n+ out = build_windows_subsystem(gcc, make_program=\"mingw32-make\")\n+ print(out)\n+ assert \"__MSYS__\" not in out\n+ assert \"main2 __MINGW64__1\" in out\n+\n+\[email protected]_mingw64\[email protected](platform.system() != \"Windows\", reason=\"Needs windows\")\n+def test_autotoolsdeps_mingw_msys():\n+ gcc = textwrap.dedent(\"\"\"\n+ [settings]\n+ os=Windows\n+ compiler=gcc\n+ compiler.version=4.9\n+ compiler.libcxx=libstdc++\n+ arch=x86_64\n+ build_type=Release\n+ \"\"\")\n+ out = build_windows_subsystem(gcc, make_program=\"mingw32-make\")\n+ print(out)\n+ assert \"__MSYS__\" not in out\n+ assert \"main2 __MINGW64__1\" in out\n+\n+\[email protected]_msys2\[email protected](platform.system() != \"Windows\", reason=\"Needs windows\")\n+def test_autotoolsdeps_msys():\n+ gcc 
= textwrap.dedent(\"\"\"\n+ [settings]\n+ os=Windows\n+ os.subsystem=msys2\n+ compiler=gcc\n+ compiler.version=4.9\n+ compiler.libcxx=libstdc++\n+ arch=x86_64\n+ build_type=Release\n+ \"\"\")\n+ out = build_windows_subsystem(gcc, make_program=\"make\")\n+ print(out)\n+ # Msys2 is a rewrite of Msys, using Cygwin\n+ assert \"MINGW\" not in out\n+ assert \"main2 __MSYS__1\" in out\n+ assert \"main2 __CYGWIN__1\" in out\ndiff --git a/conans/test/functional/toolchains/test_make.py b/conans/test/functional/toolchains/gnu/test_make.py\nsimilarity index 100%\nrename from conans/test/functional/toolchains/test_make.py\nrename to conans/test/functional/toolchains/gnu/test_make.py\ndiff --git a/conans/test/functional/utils.py b/conans/test/functional/utils.py\nindex 29fcc98f8b1..a9180eb8403 100644\n--- a/conans/test/functional/utils.py\n+++ b/conans/test/functional/utils.py\n@@ -23,7 +23,8 @@ def check_vs_runtime(exe, client, vs_version, build_type, static, architecture=\"\n raise NotImplementedError()\n \n \n-def check_exe_run(output, names, compiler, version, build_type, arch, cppstd, definitions=None):\n+def check_exe_run(output, names, compiler, version, build_type, arch, cppstd, definitions=None,\n+ cxx11_abi=None):\n output = str(output)\n names = names if isinstance(names, list) else [names]\n \n@@ -41,6 +42,8 @@ def check_exe_run(output, names, compiler, version, build_type, arch, cppstd, de\n assert \"{} _MSVC_LANG20{}\".format(name, cppstd) in output\n \n elif compiler == \"gcc\":\n+ assert \"{} __GNUC__\".format(name) in output\n+\n if arch == \"x86\":\n assert \"{} __i386__ defined\".format(name) in output\n elif arch == \"x86_64\":\n@@ -59,6 +62,9 @@ def check_exe_run(output, names, compiler, version, build_type, arch, cppstd, de\n \"17\": \"201703\"}[cppstd]\n assert \"{} __cplusplus{}\".format(name, cppstd_value) in output\n \n+ if cxx11_abi is not None:\n+ assert \"{} _GLIBCXX_USE_CXX11_ABI {}\".format(name, cxx11_abi) in output\n+\n if definitions:\n for k, v in definitions.items():\n assert \"{}: {}\".format(k, v) in output\ndiff --git a/conans/test/unittests/tools/env/__init__.py b/conans/test/unittests/tools/env/__init__.py\nnew file mode 100644\nindex 00000000000..e69de29bb2d\ndiff --git a/conans/test/unittests/tools/env/test_env.py b/conans/test/unittests/tools/env/test_env.py\nnew file mode 100644\nindex 00000000000..74c150385b9\n--- /dev/null\n+++ b/conans/test/unittests/tools/env/test_env.py\n@@ -0,0 +1,125 @@\n+import platform\n+import subprocess\n+import textwrap\n+\n+import pytest\n+\n+from conan.tools.env import Environment\n+from conans.client.tools import chdir\n+from conans.test.utils.test_files import temp_folder\n+from conans.util.files import save\n+\n+\n+def test_compose():\n+ env = Environment()\n+ env[\"MyVar\"].define(\"MyValue\")\n+ env[\"MyVar2\"].define(\"MyValue2\")\n+ env[\"MyVar3\"].define(\"MyValue3\")\n+ env[\"MyVar4\"].define(\"MyValue4\")\n+\n+ env2 = Environment()\n+ env2[\"MyVar\"].define(\"MyNewValue\")\n+ env2[\"MyVar2\"].append(\"MyNewValue2\")\n+ env2[\"MyVar3\"].prepend(\"MyNewValue3\")\n+ env2[\"MyVar4\"].clean()\n+\n+ env3 = env.compose(env2)\n+ assert env3[\"MyVar\"].value(\"MyVar\") == \"MyNewValue\"\n+ assert env3[\"MyVar2\"].value(\"MyVar2\") == 'MyValue2 MyNewValue2'\n+ assert env3[\"MyVar3\"].value(\"MyVar3\") == 'MyNewValue3 MyValue3'\n+\n+\n+def test_env_files():\n+ env = Environment()\n+ env[\"MyVar\"].define(\"MyValue\")\n+ env[\"MyVar1\"].define(\"MyValue1\")\n+ env[\"MyVar2\"].append(\"MyValue2\")\n+ 
env[\"MyVar3\"].prepend(\"MyValue3\")\n+ env[\"MyVar4\"].clean()\n+ env[\"MyVar5\"].define(\"MyValue5 With Space5=More Space5;:More\")\n+ folder = temp_folder()\n+\n+ prevenv = {\"MyVar1\": \"OldVar1\",\n+ \"MyVar2\": \"OldVar2\",\n+ \"MyVar3\": \"OldVar3\",\n+ \"MyVar4\": \"OldVar4\"}\n+\n+ display_bat = textwrap.dedent(\"\"\"\\\n+ @echo off\n+ echo MyVar=%MyVar%!!\n+ echo MyVar1=%MyVar1%!!\n+ echo MyVar2=%MyVar2%!!\n+ echo MyVar3=%MyVar3%!!\n+ echo MyVar4=%MyVar4%!!\n+ echo MyVar5=%MyVar5%!!\n+ \"\"\")\n+\n+ display_sh = textwrap.dedent(\"\"\"\\\n+ echo MyVar=$MyVar!!\n+ echo MyVar1=$MyVar1!!\n+ echo MyVar2=$MyVar2!!\n+ echo MyVar3=$MyVar3!!\n+ echo MyVar4=$MyVar4!!\n+ echo MyVar5=$MyVar5!!\n+ \"\"\")\n+\n+ with chdir(folder):\n+ if platform.system() == \"Windows\":\n+ env.save_bat(\"test.bat\")\n+ save(\"display.bat\", display_bat)\n+ cmd = \"test.bat && display.bat && deactivate_test.bat && display.bat\"\n+\n+ exe = None\n+ else:\n+ env.save_sh(\"test.sh\")\n+ save(\"display.sh\", display_sh)\n+ cmd = \". test.sh && . display.sh && . deactivate_test.sh && . display.sh\"\n+\n+ exe = \"/bin/bash\"\n+ out, _ = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,\n+ env=prevenv, shell=True, executable=exe).communicate()\n+\n+ out = out.decode()\n+ assert \"MyVar=MyValue!!\" in out\n+ assert \"MyVar1=MyValue1!!\" in out\n+ assert \"MyVar2=OldVar2 MyValue2!!\" in out\n+ assert \"MyVar3=MyValue3 OldVar3!!\" in out\n+ assert \"MyVar4=!!\" in out\n+ assert \"MyVar5=MyValue5 With Space5=More Space5;:More!!\" in out\n+\n+ assert \"MyVar=!!\" in out\n+ assert \"MyVar1=OldVar1!!\" in out\n+ assert \"MyVar2=OldVar2!!\" in out\n+ assert \"MyVar3=OldVar3!!\" in out\n+ assert \"MyVar4=OldVar4!!\" in out\n+ assert \"MyVar5=!!\" in out\n+\n+\[email protected](platform.system() != \"Windows\", reason=\"Requires MSBuild\")\n+def test_windows_case_insensitive():\n+ # Append and define operation over the same variable in Windows preserve order\n+ env = Environment()\n+ env[\"MyVar\"].define(\"MyValueA\")\n+ env[\"MYVAR\"].define(\"MyValueB\")\n+ env[\"MyVar1\"].define(\"MyValue1A\")\n+ env[\"MYVAR1\"].append(\"MyValue1B\")\n+ folder = temp_folder()\n+\n+ display_bat = textwrap.dedent(\"\"\"\\\n+ @echo off\n+ echo MyVar=%MyVar%!!\n+ echo MyVar1=%MyVar1%!!\n+ \"\"\")\n+\n+ with chdir(folder):\n+ env.save_bat(\"test.bat\")\n+ save(\"display.bat\", display_bat)\n+ cmd = \"test.bat && display.bat && deactivate_test.bat && display.bat\"\n+ out, _ = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,\n+ shell=True).communicate()\n+\n+ out = out.decode()\n+ assert \"MyVar=MyValueB!!\" in out\n+ assert \"MyVar=!!\" in out\n+ assert \"MyVar1=MyValue1A MyValue1B!!\" in out\n+ assert \"MyVar1=!!\" in out\n"
}
|
[
{
"diff_hunk": "@@ -0,0 +1,206 @@\n+import os\n+import platform\n+import textwrap\n+import time\n+\n+import pytest\n+\n+from conans.test.assets.autotools import gen_makefile_am, gen_configure_ac\n+from conans.test.assets.sources import gen_function_cpp\n+from conans.test.functional.utils import check_exe_run\n+from conans.test.utils.tools import TestClient\n+from conans.util.files import touch\n+\n+\[email protected](platform.system() != \"Linux\", reason=\"Requires Autotools\")\[email protected]_autotools()\n+def test_autotools():\n+ client = TestClient(path_with_spaces=False)\n+ client.run(\"new hello/0.1 --template=v2_cmake\")\n+ client.run(\"create .\")\n+\n+ main = gen_function_cpp(name=\"main\", includes=[\"hello\"], calls=[\"hello\"])\n+ makefile_am = gen_makefile_am(main=\"main\", main_srcs=\"main.cpp\")\n+ configure_ac = gen_configure_ac()\n+\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ from conan.tools.gnu import AutotoolsToolchain, Autotools, AutotoolsDeps\n+\n+ class TestConan(ConanFile):\n+ requires = \"hello/0.1\"\n+ settings = \"os\", \"compiler\", \"arch\", \"build_type\"\n+ exports_sources = \"configure.ac\", \"Makefile.am\", \"main.cpp\"\n+\n+ def generate(self):\n+ deps = AutotoolsDeps(self)\n+ deps.generate()\n+ tc = AutotoolsToolchain(self)\n+ tc.generate()\n+\n+ def build(self):\n+ self.run(\"aclocal\")\n+ self.run(\"autoconf\")\n+ self.run(\"automake --add-missing --foreign\")\n+ autotools = Autotools(self)\n+ autotools.configure()\n+ autotools.make()\n+ autotools.install()\n+ \"\"\")\n+\n+ client.save({\"conanfile.py\": conanfile,\n+ \"configure.ac\": configure_ac,\n+ \"Makefile.am\": makefile_am,\n+ \"main.cpp\": main}, clean_first=True)\n+ client.run(\"install .\")\n+ client.run(\"build .\")\n+ client.run_command(\"./main\")\n+ check_exe_run(client.out, \"main\", \"gcc\", None, \"Release\", \"x86_64\", None, cxx11_abi=0)\n+ assert \"hello/0.1: Hello World Release!\" in client.out\n+\n+\n+def build_windows_subsystem(profile, make_program):\n+ \"\"\" The AutotoolsDeps can be used also in pure Makefiles, if the makefiles follow\n+ the Autotools conventions\n+ \"\"\"\n+ # FIXME: cygwin in CI (my local machine works) seems broken for path with spaces\n+ client = TestClient(path_with_spaces=False)\n+ client.run(\"new hello/0.1 --template=v2_cmake\")\n+ # TODO: Test Windows subsystems in CMake, at least msys is broken\n+ os.rename(os.path.join(client.current_folder, \"test_package\"),\n+ os.path.join(client.current_folder, \"test_package2\"))\n+ client.save({\"profile\": profile})\n+ client.run(\"create . 
--profile=profile\")\n+ print(client.out)\n+\n+ main = gen_function_cpp(name=\"main\", includes=[\"hello\"], calls=[\"hello\"])\n+ makefile = textwrap.dedent(\"\"\"\\\n+ app: main.o\n+ \t$(CXX) $(CFLAGS) $(LDFLAGS) -o app main.o $(LIBS)\n+\n+ main.o: main.cpp\n+ \t$(CXX) $(CFLAGS) $(CXXFLAGS) $(CPPFLAGS) -c -o main.o main.cpp\n+ \"\"\")\n+\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ from conan.tools.gnu import AutotoolsToolchain, Autotools, AutotoolsDeps\n+\n+ class TestConan(ConanFile):\n+ requires = \"hello/0.1\"\n+ settings = \"os\", \"compiler\", \"arch\", \"build_type\"\n+ exports_sources = \"Makefile\"\n+\n+ def generate(self):\n+ deps = AutotoolsDeps(self)\n+ deps.generate()\n+ tc = AutotoolsToolchain(self)\n+ tc.generate()\n+\n+ def build(self):\n+ autotools = Autotools(self)\n+ autotools.make()\n+ \"\"\")\n+ client.save({\"main.cpp\": main,\n+ \"Makefile\": makefile,\n+ \"conanfile.py\": conanfile,\n+ \"profile\": profile}, clean_first=True)\n+\n+ client.run(\"install . --profile=profile\")\n+ client.run_command(\"autotoolsdeps.bat && autotools.bat && {}\".format(make_program))\n+ print(client.out)\n+ client.run_command(\"app\")\n+ # TODO: fill compiler version when ready\n+ check_exe_run(client.out, \"main\", \"gcc\", None, \"Release\", \"x86_64\", None)\n+ assert \"hello/0.1: Hello World Release!\" in client.out\n+\n+ client.save({\"main.cpp\": gen_function_cpp(name=\"main\", msg=\"main2\",\n+ includes=[\"hello\"], calls=[\"hello\"])})\n+ # Make sure it is newer\n+ t = time.time() + 1\n+ touch(os.path.join(client.current_folder, \"main.cpp\"), (t, t))\n+\n+ client.run(\"build .\")\n+ print(client.out)\n+ client.run_command(\"app\")\n+ # TODO: fill compiler version when ready\n+ check_exe_run(client.out, \"main2\", \"gcc\", None, \"Release\", \"x86_64\", None, cxx11_abi=0)\n+ assert \"hello/0.1: Hello World Release!\" in client.out\n+ return client.out\n+\n+\[email protected]_cygwin\[email protected](platform.system() != \"Windows\", reason=\"Needs windows\")\n+def test_autotoolsdeps_cygwin():\n+ gcc = textwrap.dedent(\"\"\"\n+ [settings]\n+ os=Windows\n+ os.subsystem=cygwin\n+ compiler=gcc\n+ compiler.version=4.9\n+ compiler.libcxx=libstdc++\n+ arch=x86_64\n+ build_type=Release\n+ \"\"\")\n+ out = build_windows_subsystem(gcc, make_program=\"make\")\n+ print(out)\n+ assert \"__MSYS__\" not in out\n+ assert \"MINGW\" not in out\n+ assert \"main2 __CYGWIN__1\" in out\n+\n+\[email protected]_mingw",
"line": null,
"original_line": 152,
"original_start_line": null,
"path": "conans/test/functional/toolchains/gnu/test_autotools.py",
"start_line": null,
"text": "@user2:\n```suggestion\r\[email protected]_msys2\r\[email protected]_mingw32\r\n```\r\nnot sure about an old `tool_mingw`, let's use the one from the MSYS2\n\n@author:\nIt is another mingw, quite popular that doesn't work under the umbrella of Msys2. We need to make sure it is working too, the commands, paths, syntax, might be different of what Msys2 is doing, so lets keep this variant too."
}
] |
67d2275470332b6dda23350df087b98126c454bd
|
diff --git a/conan/tools/_compilers.py b/conan/tools/_compilers.py
index b1eaf5ec587..05b4d10c399 100644
--- a/conan/tools/_compilers.py
+++ b/conan/tools/_compilers.py
@@ -39,3 +39,76 @@ def architecture_flag(settings):
"e2k-v6": "-march=elbrus-v6",
"e2k-v7": "-march=elbrus-v7"}.get(str(arch), "")
return ""
+
+
+def build_type_flags(settings):
+ """
+ returns flags specific to the build type (Debug, Release, etc.)
+ (-s, -g, /Zi, etc.)
+ """
+ compiler = settings.get_safe("compiler.base") or settings.get_safe("compiler")
+
+ build_type = settings.get_safe("build_type")
+ vs_toolset = settings.get_safe("compiler.toolset")
+ if not compiler or not build_type:
+ return ""
+
+ # https://github.com/Kitware/CMake/blob/d7af8a34b67026feaee558433db3a835d6007e06/
+ # Modules/Platform/Windows-MSVC.cmake
+ if str(compiler) == 'Visual Studio':
+ if vs_toolset and "clang" in str(vs_toolset):
+ flags = {"Debug": ["-gline-tables-only", "-fno-inline", "-O0"],
+ "Release": ["-O2"],
+ "RelWithDebInfo": ["-gline-tables-only", "-O2", "-fno-inline"],
+ "MinSizeRel": []
+ }.get(build_type, ["-O2", "-Ob2"])
+ else:
+ flags = {"Debug": ["-Zi", "-Ob0", "-Od"],
+ "Release": ["-O2", "-Ob2"],
+ "RelWithDebInfo": ["-Zi", "-O2", "-Ob1"],
+ "MinSizeRel": ["-O1", "-Ob1"],
+ }.get(build_type, [])
+ return flags
+ else:
+ # https://github.com/Kitware/CMake/blob/f3bbb37b253a1f4a26809d6f132b3996aa2e16fc/
+ # Modules/Compiler/GNU.cmake
+ # clang include the gnu (overriding some things, but not build type) and apple clang
+ # overrides clang but it doesn't touch clang either
+ if str(compiler) in ["clang", "gcc", "apple-clang", "qcc", "mcst-lcc"]:
+ # FIXME: It is not clear that the "-s" is something related with the build type
+ # cmake is not adjusting it
+ # -s: Remove all symbol table and relocation information from the executable.
+ flags = {"Debug": ["-g"],
+ "Release": ["-O3", "-s"] if str(compiler) == "gcc" else ["-O3"],
+ "RelWithDebInfo": ["-O2", "-g"],
+ "MinSizeRel": ["-Os"],
+ }.get(build_type, [])
+ return flags
+ elif str(compiler) == "sun-cc":
+ # https://github.com/Kitware/CMake/blob/f3bbb37b253a1f4a26809d6f132b3996aa2e16fc/
+ # Modules/Compiler/SunPro-CXX.cmake
+ flags = {"Debug": ["-g"],
+ "Release": ["-xO3"],
+ "RelWithDebInfo": ["-xO2", "-g"],
+ "MinSizeRel": ["-xO2", "-xspace"],
+ }.get(build_type, [])
+ return flags
+ return ""
+
+
+def use_win_mingw(conanfile):
+ if hasattr(conanfile, 'settings_build'):
+ os_build = conanfile.settings_build.get_safe('os')
+ else:
+ os_build = conanfile.settings.get_safe('os_build')
+ if os_build is None: # Assume is the same specified in host settings, not cross-building
+ os_build = conanfile.settings.get_safe("os")
+
+ if os_build == "Windows":
+ compiler = conanfile.settings.get_safe("compiler")
+ sub = conanfile.settings.get_safe("os.subsystem")
+ if sub in ("cygwin", "msys2", "msys") or compiler == "qcc":
+ return False
+ else:
+ return True
+ return False
diff --git a/conan/tools/cmake/utils.py b/conan/tools/cmake/utils.py
index 7f572daf165..006d5187d10 100644
--- a/conan/tools/cmake/utils.py
+++ b/conan/tools/cmake/utils.py
@@ -1,7 +1,7 @@
import os
+from conan.tools._compilers import use_win_mingw
from conans.errors import ConanException
-from conans.util.log import logger
def is_multi_configuration(generator):
@@ -33,21 +33,7 @@ def get_generator(conanfile):
return base
compiler_base = conanfile.settings.get_safe("compiler.base")
- arch = conanfile.settings.get_safe("arch")
-
compiler_base_version = conanfile.settings.get_safe("compiler.base.version")
- if hasattr(conanfile, 'settings_build'):
- os_build = conanfile.settings_build.get_safe('os')
- else:
- os_build = conanfile.settings.get_safe('os_build')
- if os_build is None: # Assume is the same specified in host settings, not cross-building
- os_build = conanfile.settings.get_safe("os")
-
- if not compiler or not compiler_version or not arch:
- if os_build == "Windows":
- logger.warning("CMake generator could not be deduced from settings")
- return None
- return "Unix Makefiles"
if compiler == "Visual Studio" or compiler_base == "Visual Studio":
version = compiler_base_version or compiler_version
@@ -63,8 +49,7 @@ def get_generator(conanfile):
base = "Visual Studio %s" % _visuals
return base
- # The generator depends on the build machine, not the target
- if os_build == "Windows" and compiler != "qcc":
- return "MinGW Makefiles" # it is valid only under Windows
+ if use_win_mingw(conanfile):
+ return "MinGW Makefiles"
return "Unix Makefiles"
diff --git a/conan/tools/gnu/__init__.py b/conan/tools/gnu/__init__.py
index 32860e46378..87688ef9e40 100644
--- a/conan/tools/gnu/__init__.py
+++ b/conan/tools/gnu/__init__.py
@@ -1 +1,5 @@
from .make import MakeToolchain
+from conan.tools.gnu.autotoolstoolchain import AutotoolsToolchain
+from conan.tools.gnu.autotoolsdeps import AutotoolsDeps
+from conan.tools.gnu.autotools import Autotools
+from conan.tools.gnu.autotoolsgen import AutotoolsGen
diff --git a/conan/tools/gnu/autotools.py b/conan/tools/gnu/autotools.py
new file mode 100644
index 00000000000..6c912487b43
--- /dev/null
+++ b/conan/tools/gnu/autotools.py
@@ -0,0 +1,61 @@
+import platform
+
+from conan.tools._compilers import use_win_mingw
+
+
+class Autotools(object):
+
+ def __init__(self, conanfile):
+ """
+ FIXME: include_rpath_flags CONAN 2.0 to default True? Could break many packages in center
+ """
+ self._conanfile = conanfile
+ self._win_bash = False
+ self._include_rpath_flags = False
+ self._os = conanfile.settings.get_safe("os")
+ self._os_version = conanfile.settings.get_safe("os.version")
+ self._os_sdk = conanfile.settings.get_safe("os.sdk")
+ self._os_subsystem = conanfile.settings.get_safe("os.subsystem")
+ self._arch = conanfile.settings.get_safe("arch")
+ self._build_type = conanfile.settings.get_safe("build_type")
+ self._compiler = conanfile.settings.get_safe("compiler")
+ self._compiler_version = conanfile.settings.get_safe("compiler.version")
+
+ # Precalculate build, host, target triplets
+ # TODO self.build, self.host, self.target = self._get_host_build_target_flags()
+
+ def configure(self):
+ """
+ http://jingfenghanmax.blogspot.com.es/2010/09/configure-with-host-target-and-build.html
+ https://gcc.gnu.org/onlinedocs/gccint/Configure-Terms.html
+ """
+ if not self._conanfile.should_configure:
+ return
+ configure_dir = "."
+
+ # TODO: Management of build, host, target triplet
+ # TODO: Management of PKG_CONFIG_PATH
+ # TODO: Implement management of --prefix, bindir, sbindir, libexecdir, libdir, includedir
+
+ cmd = "%s/configure" % configure_dir
+ self._conanfile.output.info("Calling:\n > %s" % cmd)
+ self._conanfile.run(cmd)
+
+ def make(self):
+ """if not self._build_type:
+ raise ConanException("build_type setting should be defined.")
+ with environment_append(vars or self.vars):
+ str_args = args_to_string(args)
+ cpu_count_option = (("-j%s" % cpu_count(output=self._conanfile.output))
+ if ("-j" not in str_args and "nmake" not in make_program.lower())
+ else None)
+ self._conanfile.run("%s" % join_arguments([make_program, target, str_args,
+ cpu_count_option]),
+ win_bash=self._win_bash, subsystem=self.subsystem)"""
+
+ make_program = self._conanfile.conf["tools.gnu"].make_program
+ if make_program is None:
+ make_program = "mingw32-make" if use_win_mingw(self._conanfile) else "make"
+ # Need to activate the buildenv if existing
+ command = make_program
+ self._conanfile.run(command)
diff --git a/conan/tools/gnu/autotoolsdeps.py b/conan/tools/gnu/autotoolsdeps.py
new file mode 100644
index 00000000000..74139dcffb6
--- /dev/null
+++ b/conan/tools/gnu/autotoolsdeps.py
@@ -0,0 +1,106 @@
+from conan.tools.env import Environment
+from conans.model.build_info import DepCppInfo
+
+
+class AutotoolsDeps:
+ def __init__(self, conanfile):
+ # Set the generic objects before mapping to env vars to let the user
+ # alter some value
+ self._conanfile = conanfile
+
+ self.libs = []
+ self.system_libs = []
+ self.include_paths = []
+ self.lib_paths = []
+ self.defines = []
+ self.cflags = []
+ self.cxxflags = []
+ self.sharedlinkflags = []
+ self.exelinkflags = []
+ self.frameworks = []
+ self.framework_paths = []
+ self.sysroot = None
+
+ def merge_lists(seq1, seq2):
+ return [s for s in seq1 if s not in seq2] + seq2
+
+ def merge(dep):
+ dep_cpp_info = DepCppInfo(dep.cpp_info) # To deal with components
+ self.system_libs = merge_lists(self.system_libs, dep_cpp_info.system_libs)
+ self.include_paths = merge_lists(self.include_paths, dep_cpp_info.include_paths)
+ self.lib_paths = merge_lists(self.lib_paths, dep_cpp_info.lib_paths)
+ self.framework_paths = merge_lists(self.framework_paths, dep_cpp_info.framework_paths)
+ self.libs = merge_lists(self.libs, dep_cpp_info.libs)
+ self.frameworks = merge_lists(self.frameworks, dep_cpp_info.frameworks)
+
+ # Note these are in reverse order
+ self.defines = merge_lists(dep_cpp_info.defines, self.defines)
+ self.cxxflags = merge_lists(dep_cpp_info.cxxflags, self.cxxflags)
+ self.cflags = merge_lists(dep_cpp_info.cflags, self.cflags)
+ self.sharedlinkflags = merge_lists(dep_cpp_info.sharedlinkflags, self.sharedlinkflags)
+ self.exelinkflags = merge_lists(dep_cpp_info.exelinkflags, self.exelinkflags)
+
+ if not self.sysroot:
+ self.sysroot = dep_cpp_info.sysroot
+
+ def _apply_transitive_runenv(next_requires):
+ # TODO: This visitor is same as VirtualEnv runenv_info one, extract
+ all_requires = []
+ while next_requires:
+ new_requires = []
+ for require in next_requires:
+ # The explicit has more priority
+ merge(require)
+ all_requires.append(require)
+
+ for transitive in require.dependencies.requires:
+ # Avoid duplication/repetitions
+ if transitive not in new_requires and transitive not in all_requires:
+ new_requires.append(transitive)
+ next_requires = new_requires
+
+ _apply_transitive_runenv(self._conanfile.dependencies.requires)
+
+ def environment(self):
+ # cpp_flags
+ cpp_flags = []
+ include_paths = ['-I"%s"' % p for p in self.include_paths]
+ cpp_flags.extend(include_paths)
+ cpp_flags.extend(["-D%s" % define for define in self.defines])
+
+ # Libs
+ libs = ["-l%s" % library for library in self.libs]
+
+ # Ldflags
+ # TODO: Discuss, should the helper filter frameworks based on compiler?
+ frameworks = ["-framework %s" % framework for framework in self.frameworks]
+ frameworks_paths = ["-F %s" % framework_path for framework_path in self.framework_paths]
+ ldflags = self.sharedlinkflags
+ ldflags.extend(self.exelinkflags)
+ ldflags.extend(frameworks)
+ ldflags.extend(frameworks_paths)
+ lib_paths = ['-L"%s"' % p for p in self.lib_paths]
+ ldflags.extend(lib_paths)
+
+ # cflags
+ cflags = self.cflags
+ cxxflags = self.cxxflags
+
+ if self.sysroot:
+ srf = '--sysroot={}'.format(self.sysroot)
+ cflags.append(srf)
+ cxxflags.append(srf)
+ ldflags.append(srf)
+
+ env = Environment()
+ env.append("CPPFLAGS", cpp_flags)
+ env.append("LIBS", libs)
+ env.append("LDFLAGS", ldflags)
+ env.append("CXXFLAGS", cxxflags)
+ env.append("CFLAGS", cflags)
+ return env
+
+ def generate(self):
+ env = self.environment()
+ env.save_sh("conanautotoolsdeps.sh")
+ env.save_bat("conanautotoolsdeps.bat")
diff --git a/conan/tools/gnu/autotoolsgen.py b/conan/tools/gnu/autotoolsgen.py
new file mode 100644
index 00000000000..ac0b523fefc
--- /dev/null
+++ b/conan/tools/gnu/autotoolsgen.py
@@ -0,0 +1,35 @@
+import platform
+
+from conan.tools.env import VirtualEnv
+from conan.tools.gnu import AutotoolsToolchain, AutotoolsDeps
+
+
+class AutotoolsGen:
+ def __init__(self, conanfile):
+ self.toolchain = AutotoolsToolchain(conanfile)
+ self.deps = AutotoolsDeps(conanfile)
+ self.env = VirtualEnv(conanfile)
+
+ def build_environment(self):
+ envtoolchain = self.toolchain.environment()
+ envdeps = self.deps.environment()
+ build_env = self.env.build_environment()
+ build_env.compose(envtoolchain)
+ build_env.compose(envdeps)
+ return build_env
+
+ def run_environment(self):
+ run_env = self.env.run_environment()
+ return run_env
+
+ def generate(self):
+ build_env = self.build_environment()
+ run_env = self.run_environment()
+ # FIXME: Use settings, not platform Not always defined :(
+ # os_ = self._conanfile.settings_build.get_safe("os")
+ if platform.system() == "Windows":
+ build_env.save_bat("conanbuildenv.bat")
+ run_env.save_bat("conanrunenv.bat")
+ else:
+ build_env.save_sh("conanbuildenv.sh")
+ run_env.save_sh("conanrunenv.sh")
diff --git a/conan/tools/gnu/autotoolstoolchain.py b/conan/tools/gnu/autotoolstoolchain.py
new file mode 100644
index 00000000000..c9358d85f70
--- /dev/null
+++ b/conan/tools/gnu/autotoolstoolchain.py
@@ -0,0 +1,111 @@
+from conan.tools._compilers import architecture_flag, build_type_flags
+from conan.tools.env import Environment
+# FIXME: need to refactor this import and bring to conan.tools
+from conans.client.build.cppstd_flags import cppstd_flag_new
+
+
+class AutotoolsToolchain:
+ def __init__(self, conanfile):
+ self._conanfile = conanfile
+ build_type = self._conanfile.settings.get_safe("build_type")
+
+ # TODO: compiler.runtime for Visual studio?
+ # defines
+ self.ndebug = None
+ if build_type in ['Release', 'RelWithDebInfo', 'MinSizeRel']:
+ self.ndebug = "NDEBUG"
+ self.gcc_cxx11_abi = self._cxx11_abi_define()
+ self.defines = []
+
+ # cxxflags, cflags
+ self.cxxflags = []
+ self.cflags = []
+ self.ldflags = []
+ self.libcxx = self._libcxx()
+ self.fpic = self._conanfile.options.get_safe("fPIC")
+
+ # FIXME: This needs to be imported here into conan.tools
+ self.cppstd = cppstd_flag_new(self._conanfile.settings)
+ self.arch_flag = architecture_flag(self._conanfile.settings)
+ # TODO: This is also covering compilers like Visual Studio, necessary to test it (&remove?)
+ self.build_type_flags = build_type_flags(self._conanfile.settings)
+
+ def _rpaths_link(self):
+ # TODO: Not implemented yet
+ pass
+
+ # TODO: Apple: tools.apple_deployment_target_flag,
+ # TODO: tools.XCRun(self._conanfile.settings).sdk_path
+ # TODO: "-arch", tools.to_apple_arch(self._arch)
+
+ def _cxx11_abi_define(self):
+ # https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html
+ # The default is libstdc++11, only specify the contrary '_GLIBCXX_USE_CXX11_ABI=0'
+ settings = self._conanfile.settings
+ libcxx = settings.get_safe("compiler.libcxx")
+ if not libcxx:
+ return
+
+ compiler = settings.get_safe("compiler.base") or settings.get_safe("compiler")
+ if compiler == "gcc":
+ if libcxx == 'libstdc++':
+ return '_GLIBCXX_USE_CXX11_ABI=0'
+
+ def _libcxx(self):
+ settings = self._conanfile.settings
+ libcxx = settings.get_safe("compiler.libcxx")
+ if not libcxx:
+ return
+
+ compiler = settings.get_safe("compiler.base") or settings.get_safe("compiler")
+
+ if compiler in ['clang', 'apple-clang']:
+ if libcxx in ['libstdc++', 'libstdc++11']:
+ return '-stdlib=libstdc++'
+ elif libcxx == 'libc++':
+ return '-stdlib=libc++'
+ elif compiler == 'sun-cc':
+ return ({"libCstd": "-library=Cstd",
+ "libstdcxx": "-library=stdcxx4",
+ "libstlport": "-library=stlport4",
+ "libstdc++": "-library=stdcpp"}.get(libcxx))
+ elif compiler == "qcc":
+ return "-Y _%s" % str(libcxx)
+
+ def environment(self):
+ env = Environment()
+ # defines
+ if self.ndebug:
+ self.defines.append(self.ndebug)
+ if self.gcc_cxx11_abi:
+ self.defines.append(self.gcc_cxx11_abi)
+
+ if self.libcxx:
+ self.cxxflags.append(self.libcxx)
+
+ if self.cppstd:
+ self.cxxflags.append(self.cppstd)
+
+ if self.arch_flag:
+ self.cxxflags.append(self.arch_flag)
+ self.cflags.append(self.arch_flag)
+ self.ldflags.append(self.arch_flag)
+
+ if self.build_type_flags:
+ self.cxxflags.extend(self.build_type_flags)
+ self.cflags.extend(self.build_type_flags)
+
+ if self.fpic:
+ self.cxxflags.append("-fPIC")
+ self.cflags.append("-fPIC")
+
+ env.append("CPPFLAGS", ["-D{}".format(d) for d in self.defines])
+ env.append("CXXFLAGS", self.cxxflags)
+ env.append("CFLAGS", self.cflags)
+ env.append("LDFLAGS", self.ldflags)
+ return env
+
+ def generate(self):
+ env = self.environment()
+ env.save_sh("conanautotoolstoolchain.sh")
+ env.save_bat("conanautotoolstoolchain.bat")
diff --git a/conans/client/envvars/environment.py b/conans/client/envvars/environment.py
index 161a4c2aba7..9ce9c047264 100644
--- a/conans/client/envvars/environment.py
+++ b/conans/client/envvars/environment.py
@@ -177,7 +177,7 @@ def _files(env_vars, vars_with_spaces, flavor, activate_tpl, deactivate_tpl, ven
activate_content = activate_tpl.render(environment_file=env_filepath,
modified_vars=modified_vars, new_vars=new_vars,
venv_name=venv_name)
- deactivate_content = deactivate_tpl.render(modified_vars=modified_vars, new_vars=new_vars,
+ deactivate_content = deactivate_tpl.render(modified_vars=modified_vars, new_vars=new_vars,
venv_name=venv_name)
environment_lines = ["{}={}".format(name, value) for name, value, _ in ret]
diff --git a/conans/client/generators/__init__.py b/conans/client/generators/__init__.py
index bb5eb3990e9..b80ce5f82d6 100644
--- a/conans/client/generators/__init__.py
+++ b/conans/client/generators/__init__.py
@@ -67,7 +67,7 @@ def __init__(self):
"markdown": MarkdownGenerator}
self._new_generators = ["CMakeToolchain", "CMakeDeps", "MakeToolchain", "MSBuildToolchain",
"MesonToolchain", "MSBuildDeps", "QbsToolchain", "msbuild",
- "VirtualEnv"]
+ "VirtualEnv", "AutotoolsDeps", "AutotoolsToolchain", "AutotoolsGen"]
def add(self, name, generator_class, custom=False):
if name not in self._generators or custom:
@@ -96,6 +96,15 @@ def _new_generator(self, generator_name, output):
elif generator_name == "MakeToolchain":
from conan.tools.gnu import MakeToolchain
return MakeToolchain
+ elif generator_name == "AutotoolsDeps":
+ from conan.tools.gnu import AutotoolsDeps
+ return AutotoolsDeps
+ elif generator_name == "AutotoolsToolchain":
+ from conan.tools.gnu import AutotoolsToolchain
+ return AutotoolsToolchain
+ elif generator_name == "AutotoolsGen":
+ from conan.tools.gnu import AutotoolsGen
+ return AutotoolsGen
elif generator_name == "MSBuildToolchain":
from conan.tools.microsoft import MSBuildToolchain
return MSBuildToolchain
diff --git a/conans/test/assets/sources.py b/conans/test/assets/sources.py
index bd9934897b1..052e96b904f 100644
--- a/conans/test/assets/sources.py
+++ b/conans/test/assets/sources.py
@@ -35,6 +35,11 @@
std::cout << " {{ msg or name }} __x86_64__ defined\n";
#endif
+ // Libstdc++
+ #if defined _GLIBCXX_USE_CXX11_ABI
+ std::cout << " {{ msg or name }} _GLIBCXX_USE_CXX11_ABI "<< _GLIBCXX_USE_CXX11_ABI << "\n";
+ #endif
+
// COMPILER VERSIONS
#if _MSC_VER
std::cout << " {{ msg or name }} _MSC_VER" << _MSC_VER<< "\n";
diff --git a/conans/test/functional/toolchains/gnu/__init__.py b/conans/test/functional/toolchains/gnu/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/conans/test/functional/toolchains/gnu/test_autotools.py b/conans/test/functional/toolchains/gnu/test_autotools.py
new file mode 100644
index 00000000000..59f7afeb06d
--- /dev/null
+++ b/conans/test/functional/toolchains/gnu/test_autotools.py
@@ -0,0 +1,167 @@
+import os
+import platform
+import textwrap
+import time
+
+import pytest
+
+from conan.tools.env.environment import environment_wrap_command
+from conans.test.assets.autotools import gen_makefile_am, gen_configure_ac, gen_makefile
+from conans.test.assets.sources import gen_function_cpp
+from conans.test.functional.utils import check_exe_run
+from conans.test.utils.tools import TestClient
+from conans.util.files import touch
+
+
+@pytest.mark.skipif(platform.system() != "Linux", reason="Requires Autotools")
+@pytest.mark.tool_autotools()
+def test_autotools():
+ client = TestClient(path_with_spaces=False)
+ client.run("new hello/0.1 --template=v2_cmake")
+ client.run("create .")
+
+ main = gen_function_cpp(name="main", includes=["hello"], calls=["hello"])
+ makefile_am = gen_makefile_am(main="main", main_srcs="main.cpp")
+ configure_ac = gen_configure_ac()
+
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+ from conan.tools.gnu import Autotools
+
+ class TestConan(ConanFile):
+ requires = "hello/0.1"
+ settings = "os", "compiler", "arch", "build_type"
+ exports_sources = "configure.ac", "Makefile.am", "main.cpp"
+ generators = "AutotoolsGen"
+
+ def build(self):
+ self.run("aclocal")
+ self.run("autoconf")
+ self.run("automake --add-missing --foreign")
+ autotools = Autotools(self)
+ autotools.configure()
+ autotools.make()
+ """)
+
+ client.save({"conanfile.py": conanfile,
+ "configure.ac": configure_ac,
+ "Makefile.am": makefile_am,
+ "main.cpp": main}, clean_first=True)
+ client.run("install .")
+ client.run("build .")
+ client.run_command("./main")
+ check_exe_run(client.out, "main", "gcc", None, "Release", "x86_64", None, cxx11_abi=0)
+ assert "hello/0.1: Hello World Release!" in client.out
+
+
+def build_windows_subsystem(profile, make_program):
+ """ The AutotoolsDeps can be used also in pure Makefiles, if the makefiles follow
+ the Autotools conventions
+ """
+ # FIXME: cygwin in CI (my local machine works) seems broken for path with spaces
+ client = TestClient(path_with_spaces=False)
+ client.run("new hello/0.1 --template=v2_cmake")
+ # TODO: Test Windows subsystems in CMake, at least msys is broken
+ os.rename(os.path.join(client.current_folder, "test_package"),
+ os.path.join(client.current_folder, "test_package2"))
+ client.save({"profile": profile})
+ client.run("create . --profile=profile")
+
+ main = gen_function_cpp(name="main", includes=["hello"], calls=["hello"])
+ makefile = gen_makefile(apps=["app"])
+
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+ from conan.tools.gnu import AutotoolsToolchain, Autotools, AutotoolsDeps
+
+ class TestConan(ConanFile):
+ requires = "hello/0.1"
+ settings = "os", "compiler", "arch", "build_type"
+ exports_sources = "Makefile"
+ generators = "AutotoolsGen"
+
+ def build(self):
+ autotools = Autotools(self)
+ autotools.make()
+ """)
+ client.save({"app.cpp": main,
+ "Makefile": makefile,
+ "conanfile.py": conanfile,
+ "profile": profile}, clean_first=True)
+
+ client.run("install . --profile=profile")
+ cmd = environment_wrap_command("conanbuildenv", make_program, cwd=client.current_folder)
+ client.run_command(cmd)
+ client.run_command("app")
+ # TODO: fill compiler version when ready
+ check_exe_run(client.out, "main", "gcc", None, "Release", "x86_64", None)
+ assert "hello/0.1: Hello World Release!" in client.out
+
+ client.save({"app.cpp": gen_function_cpp(name="main", msg="main2",
+ includes=["hello"], calls=["hello"])})
+ # Make sure it is newer
+ t = time.time() + 1
+ touch(os.path.join(client.current_folder, "app.cpp"), (t, t))
+
+ client.run("build .")
+ client.run_command("app")
+ # TODO: fill compiler version when ready
+ check_exe_run(client.out, "main2", "gcc", None, "Release", "x86_64", None, cxx11_abi=0)
+ assert "hello/0.1: Hello World Release!" in client.out
+ return client.out
+
+
+@pytest.mark.tool_cygwin
+@pytest.mark.skipif(platform.system() != "Windows", reason="Needs windows")
+def test_autotoolsdeps_cygwin():
+ gcc = textwrap.dedent("""
+ [settings]
+ os=Windows
+ os.subsystem=cygwin
+ compiler=gcc
+ compiler.version=4.9
+ compiler.libcxx=libstdc++
+ arch=x86_64
+ build_type=Release
+ """)
+ out = build_windows_subsystem(gcc, make_program="make")
+ assert "__MSYS__" not in out
+ assert "MINGW" not in out
+ assert "main2 __CYGWIN__1" in out
+
+
+@pytest.mark.tool_mingw64
+@pytest.mark.skipif(platform.system() != "Windows", reason="Needs windows")
+def test_autotoolsdeps_mingw_msys():
+ gcc = textwrap.dedent("""
+ [settings]
+ os=Windows
+ compiler=gcc
+ compiler.version=4.9
+ compiler.libcxx=libstdc++
+ arch=x86_64
+ build_type=Release
+ """)
+ out = build_windows_subsystem(gcc, make_program="mingw32-make")
+ assert "__MSYS__" not in out
+ assert "main2 __MINGW64__1" in out
+
+
+@pytest.mark.tool_msys2
+@pytest.mark.skipif(platform.system() != "Windows", reason="Needs windows")
+def test_autotoolsdeps_msys():
+ gcc = textwrap.dedent("""
+ [settings]
+ os=Windows
+ os.subsystem=msys2
+ compiler=gcc
+ compiler.version=4.9
+ compiler.libcxx=libstdc++
+ arch=x86_64
+ build_type=Release
+ """)
+ out = build_windows_subsystem(gcc, make_program="make")
+ # Msys2 is a rewrite of Msys, using Cygwin
+ assert "MINGW" not in out
+ assert "main2 __MSYS__1" in out
+ assert "main2 __CYGWIN__1" in out
diff --git a/conans/test/functional/toolchains/test_make.py b/conans/test/functional/toolchains/gnu/test_make.py
similarity index 100%
rename from conans/test/functional/toolchains/test_make.py
rename to conans/test/functional/toolchains/gnu/test_make.py
diff --git a/conans/test/functional/utils.py b/conans/test/functional/utils.py
index 29fcc98f8b1..a9180eb8403 100644
--- a/conans/test/functional/utils.py
+++ b/conans/test/functional/utils.py
@@ -23,7 +23,8 @@ def check_vs_runtime(exe, client, vs_version, build_type, static, architecture="
raise NotImplementedError()
-def check_exe_run(output, names, compiler, version, build_type, arch, cppstd, definitions=None):
+def check_exe_run(output, names, compiler, version, build_type, arch, cppstd, definitions=None,
+ cxx11_abi=None):
output = str(output)
names = names if isinstance(names, list) else [names]
@@ -41,6 +42,8 @@ def check_exe_run(output, names, compiler, version, build_type, arch, cppstd, de
assert "{} _MSVC_LANG20{}".format(name, cppstd) in output
elif compiler == "gcc":
+ assert "{} __GNUC__".format(name) in output
+
if arch == "x86":
assert "{} __i386__ defined".format(name) in output
elif arch == "x86_64":
@@ -59,6 +62,9 @@ def check_exe_run(output, names, compiler, version, build_type, arch, cppstd, de
"17": "201703"}[cppstd]
assert "{} __cplusplus{}".format(name, cppstd_value) in output
+ if cxx11_abi is not None:
+ assert "{} _GLIBCXX_USE_CXX11_ABI {}".format(name, cxx11_abi) in output
+
if definitions:
for k, v in definitions.items():
assert "{}: {}".format(k, v) in output
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-8046@ed667e6
|
conan-io/conan
|
Python
| 8,046
|
[feat] Send HTTP GET header with settings when looking for a 'package_reference'
|
Changelog: Feature: Add headers with settings and options to HTTP GET requests when searching for packages.
Docs: Omit
closes https://github.com/conan-io/conan/issues/7870
Conan sends a request header `Conan-PkgID-Settings` when looking for a `pref` in the remotes:
* if using revisions, it is sent when looking for the latest package revision, URL like:
```
v2/conans/<reference>/revisions/<rrev>/packages/<package_id>/latest
```
* if not using revisions, it is in the downloads_url:
```
v1/conans/<reference>/packages/<package_id>/download_urls
```
These are the headers. Do we want this information in a single header? Do we want to use different names?
```
Conan-PkgID-Settings
Conan-PkgID-Options
```
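
For illustration only, a minimal sketch of how these header values can be serialized from a package configuration. The dict-based inputs and the helper name are assumptions for readability; they loosely mirror the `_headers_for_info` helper added in this change:

```python
# Hypothetical sketch (not the exact Conan code): build the two request headers
# from the settings/options that determine the package ID.
CONAN_REQUEST_HEADER_SETTINGS = "Conan-PkgID-Settings"
CONAN_REQUEST_HEADER_OPTIONS = "Conan-PkgID-Options"


def headers_for_configuration(settings, options):
    """Serialize plain dicts into the 'key=value;key=value' header format."""
    headers = {}
    if settings:
        headers[CONAN_REQUEST_HEADER_SETTINGS] = ";".join(
            "{}={}".format(k, v) for k, v in sorted(settings.items()))
    # Only a few well-known options are forwarded (shared, fPIC, header_only)
    relevant = {k: v for k, v in (options or {}).items()
                if k in ("shared", "fPIC", "header_only")}
    if relevant:
        headers[CONAN_REQUEST_HEADER_OPTIONS] = ";".join(
            "{}={}".format(k, v) for k, v in sorted(relevant.items()))
    return headers


print(headers_for_configuration(
    {"os": "Macos", "arch": "x86_64", "compiler": "apple-clang"},
    {"shared": False, "opt1": True}))
# -> {'Conan-PkgID-Settings': 'arch=x86_64;compiler=apple-clang;os=Macos',
#     'Conan-PkgID-Options': 'shared=False'}
```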
---
#REVISIONS: 1
|
2020-11-11T16:43:49Z
|
[feature] Identify configuration in remote requests when querying for a packageID
When Conan checks if a package for a given configuration is available in a remote, it sends just the _package ID_. It would be very useful to send the information used to compute that hash:
* We can send _all_ the configuration: _package ID_ computation is reproducible (needed? desired?)
 * We can send only the information taken into account to compute the _package ID_ (~`conaninfo.txt`); see the sketch below
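
As a hedged illustration of the second option, the configuration that produced the _package ID_ could travel as request headers next to the hash. The URL and header contents below are purely illustrative; the header names match the ones proposed in the PR body above:

```python
# Hypothetical request sketch: the package ID stays in the URL, while headers
# describe the conaninfo.txt-like subset of settings/options that produced it.
import requests  # assumed to be available, as in the Conan client itself

url = "https://server/v2/conans/<reference>/revisions/<rrev>/packages/<package_id>/latest"
headers = {"Conan-PkgID-Settings": "os=Macos;arch=x86_64;compiler=apple-clang",
           "Conan-PkgID-Options": "shared=False"}
# response = requests.get(url, headers=headers)  # not executed here: no real server
```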
|
I've been having a look at it and it can be very challenging to pass the information about the configuration to the place where the calls to the remote are being done. I would say we need first to store the actual configuration for a package (settings, options, requirements,...) in the `node` object to have it available here https://github.com/conan-io/conan/blob/7f03c1d340eedcf9772722361a42dfb4ba2fc81e/conans/client/graph/graph_binaries.py#L100
Then we can deal with whatever is needed on the RemoteManager side.
> I've been having a look at it and it can be very challenging to pass the information about the configuration to the place where the calls to the remote are being done. I would say we need first to store the actual configuration for a package (settings, options, requirements,...) in the node object to have it available here
That would be accessible in the ``node.conanfile.info`` object at that point.
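
A minimal sketch of how that wiring could look, assuming the node's `conanfile.info` is reachable where the remote call is issued; the class and method names below are illustrative, not the final implementation:

```python
# Hypothetical glue: keep the ConanInfo of the node being evaluated and hand it
# to the remote manager, which can then attach the configuration headers.
class GraphBinariesAnalyzerSketch:
    def __init__(self, remote_manager):
        self._remote_manager = remote_manager

    def _get_package_info(self, node, pref, remote):
        info = getattr(node.conanfile, "info", None)  # settings/options behind the package ID
        return self._remote_manager.get_package_info(pref, remote, info=info)
```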
|
[
{
"body": "When Conan checks if a package for a given configuration is available in a remote, it sends just the _package ID_. It would be very useful to send the information used to compute that hash:\r\n * We can send _all_ the configuration: _package ID_ computation is reproducible (needed? desired?)\r\n * We can send only the information taken into account to compute the _package ID_ (~`conaninfo.txt`)\r\n",
"number": 7870,
"title": "[feature] Identify configuration in remote requests when querying for a packageID"
}
] |
201480146740d2ab298b9f360037ab9e56c2a6a5
|
{
"head_commit": "ed667e69b58294867d192551418d8105065868c8",
"head_commit_message": "handle info variable with care",
"patch_to_review": "diff --git a/conans/client/conan_api.py b/conans/client/conan_api.py\nindex 798fbe3ef35..69da5ae8dad 100644\n--- a/conans/client/conan_api.py\n+++ b/conans/client/conan_api.py\n@@ -1133,7 +1133,8 @@ def get_path(self, reference, package_id=None, path=None, remote_name=None):\n if package_id:\n pref = PackageReference(ref, package_id)\n if self.app.config.revisions_enabled and not pref.revision:\n- pref = self.app.remote_manager.get_latest_package_revision(pref, remote)\n+ pref = self.app.remote_manager.get_latest_package_revision(pref, remote,\n+ headers=None)\n return self.app.remote_manager.get_package_path(pref, path, remote), path\n else:\n return self.app.remote_manager.get_recipe_path(ref, path, remote), path\ndiff --git a/conans/client/graph/graph_binaries.py b/conans/client/graph/graph_binaries.py\nindex 033067b313a..ee81dd4aab6 100644\n--- a/conans/client/graph/graph_binaries.py\n+++ b/conans/client/graph/graph_binaries.py\n@@ -93,11 +93,14 @@ def _evaluate_cache_pkg(self, node, package_layout, pref, metadata, remote, remo\n node.prev = metadata.packages[pref.id].revision\n assert node.prev, \"PREV for %s is None: %s\" % (str(pref), metadata.dumps())\n \n+ def _get_package_info(self, node, pref, remote):\n+ return self._remote_manager.get_package_info(pref, remote, info=node.conanfile.info)\n+\n def _evaluate_remote_pkg(self, node, pref, remote, remotes):\n remote_info = None\n if remote:\n try:\n- remote_info, pref = self._remote_manager.get_package_info(pref, remote)\n+ remote_info, pref = self._get_package_info(node, pref, remote)\n except NotFoundException:\n pass\n except Exception:\n@@ -107,9 +110,9 @@ def _evaluate_remote_pkg(self, node, pref, remote, remotes):\n # If the \"remote\" came from the registry but the user didn't specified the -r, with\n # revisions iterate all remotes\n if not remote or (not remote_info and self._cache.config.revisions_enabled):\n- for r in remotes.values():\n+ for r in remotes.values(): # FIXME: Here we hit the same remote we did before\n try:\n- remote_info, pref = self._remote_manager.get_package_info(pref, r)\n+ remote_info, pref = self._get_package_info(node, pref, r)\n except NotFoundException:\n pass\n else:\n@@ -236,7 +239,7 @@ def _process_node(self, node, pref, build_mode, update, remotes):\n if build_mode.outdated:\n if node.binary in (BINARY_CACHE, BINARY_DOWNLOAD, BINARY_UPDATE):\n if node.binary == BINARY_UPDATE:\n- info, pref = self._remote_manager.get_package_info(pref, remote)\n+ info, pref = self._get_package_info(node, pref, remote)\n recipe_hash = info.recipe_hash\n elif node.binary == BINARY_CACHE:\n package_folder = package_layout.package(pref)\ndiff --git a/conans/client/remote_manager.py b/conans/client/remote_manager.py\nindex 6df71cd4dbc..8c72ed459a8 100644\n--- a/conans/client/remote_manager.py\n+++ b/conans/client/remote_manager.py\n@@ -20,6 +20,27 @@\n log_recipe_download, log_recipe_sources_download,\n log_uncompressed_file)\n \n+CONAN_REQUEST_HEADER_SETTINGS = 'Conan-PkgID-Settings'\n+CONAN_REQUEST_HEADER_OPTIONS = 'Conan-PkgID-Options'\n+\n+\n+def _headers_for_info(info):\n+ if not info:\n+ return None\n+\n+ r = {}\n+ settings = info.full_settings.as_list()\n+ if settings:\n+ settings = ['{}={}'.format(*it) for it in settings]\n+ r.update({CONAN_REQUEST_HEADER_SETTINGS: ';'.join(settings)})\n+\n+ options = info.options.as_list()\n+ if options:\n+ options = filter(lambda u: u[0] in ['shared', 'fPIC', 'header_only'], options)\n+ options = ['{}={}'.format(*it) for it in options]\n+ 
r.update({CONAN_REQUEST_HEADER_OPTIONS: ';'.join(options)})\n+ return r\n+\n \n class RemoteManager(object):\n \"\"\" Will handle the remotes to get recipes, packages etc \"\"\"\n@@ -38,7 +59,7 @@ def get_recipe_snapshot(self, ref, remote):\n return self._call_remote(remote, \"get_recipe_snapshot\", ref)\n \n def get_package_snapshot(self, pref, remote):\n- assert pref.ref.revision, \"upload_package requires RREV\"\n+ assert pref.ref.revision, \"get_package_snapshot requires RREV\"\n assert pref.revision, \"get_package_snapshot requires PREV\"\n return self._call_remote(remote, \"get_package_snapshot\", pref)\n \n@@ -58,14 +79,16 @@ def get_recipe_manifest(self, ref, remote):\n return self._call_remote(remote, \"get_recipe_manifest\", ref), ref\n \n def get_package_manifest(self, pref, remote):\n- pref = self._resolve_latest_pref(pref, remote)\n+ pref = self._resolve_latest_pref(pref, remote, headers=None)\n return self._call_remote(remote, \"get_package_manifest\", pref), pref\n \n- def get_package_info(self, pref, remote):\n+ def get_package_info(self, pref, remote, info=None):\n \"\"\" Read a package ConanInfo from remote\n \"\"\"\n- pref = self._resolve_latest_pref(pref, remote)\n- return self._call_remote(remote, \"get_package_info\", pref), pref\n+ headers = _headers_for_info(info)\n+ pref = self._resolve_latest_pref(pref, remote, headers=headers)\n+ # FIXME Conan 2.0: With revisions, it is not needed to pass headers to this second function\n+ return self._call_remote(remote, \"get_package_info\", pref, headers=headers), pref\n \n def get_recipe(self, ref, remote):\n \"\"\"\n@@ -140,16 +163,18 @@ def get_package(self, conanfile, pref, layout, remote, output, recorder):\n output.info(\"Retrieving package %s from remote '%s' \" % (pref.id, remote.name))\n layout.package_remove(pref) # Remove first the destination folder\n with layout.set_dirty_context_manager(pref):\n- self._get_package(layout, pref, remote, output, recorder)\n+ info = getattr(conanfile, 'info', None)\n+ self._get_package(layout, pref, remote, output, recorder, info=info)\n \n self._hook_manager.execute(\"post_download_package\", conanfile_path=conanfile_path,\n reference=pref.ref, package_id=pref.id, remote=remote,\n conanfile=conanfile)\n \n- def _get_package(self, layout, pref, remote, output, recorder):\n+ def _get_package(self, layout, pref, remote, output, recorder, info):\n t1 = time.time()\n try:\n- pref = self._resolve_latest_pref(pref, remote)\n+ headers = _headers_for_info(info)\n+ pref = self._resolve_latest_pref(pref, remote, headers=headers)\n snapshot = self._call_remote(remote, \"get_package_snapshot\", pref)\n if not is_package_snapshot_complete(snapshot):\n raise PackageNotFoundException(pref)\n@@ -230,8 +255,8 @@ def get_latest_recipe_revision(self, ref, remote):\n revision = self._call_remote(remote, \"get_latest_recipe_revision\", ref)\n return revision\n \n- def get_latest_package_revision(self, pref, remote):\n- revision = self._call_remote(remote, \"get_latest_package_revision\", pref)\n+ def get_latest_package_revision(self, pref, remote, headers):\n+ revision = self._call_remote(remote, \"get_latest_package_revision\", pref, headers=headers)\n return revision\n \n def _resolve_latest_ref(self, ref, remote):\n@@ -242,16 +267,16 @@ def _resolve_latest_ref(self, ref, remote):\n ref = ref.copy_with_rev(DEFAULT_REVISION_V1)\n return ref\n \n- def _resolve_latest_pref(self, pref, remote):\n+ def _resolve_latest_pref(self, pref, remote, headers):\n if pref.revision is None:\n try:\n- pref = 
self.get_latest_package_revision(pref, remote)\n+ pref = self.get_latest_package_revision(pref, remote, headers=headers)\n except NoRestV2Available:\n pref = pref.copy_with_revs(pref.ref.revision, DEFAULT_REVISION_V1)\n return pref\n \n def _call_remote(self, remote, method, *args, **kwargs):\n- assert(isinstance(remote, Remote))\n+ assert (isinstance(remote, Remote))\n try:\n return self._auth_manager.call_rest_api_method(remote, method, *args, **kwargs)\n except ConnectionError as exc:\n@@ -294,7 +319,7 @@ def uncompress_file(src_path, dest_folder, output):\n t1 = time.time()\n try:\n with progress_bar.open_binary(src_path, output, \"Decompressing %s\" % os.path.basename(\n- src_path)) as file_handler:\n+ src_path)) as file_handler:\n tar_extract(file_handler, dest_folder)\n except Exception as e:\n error_msg = \"Error while downloading/extracting files to %s\\n%s\\n\" % (dest_folder, str(e))\ndiff --git a/conans/client/rest/rest_client.py b/conans/client/rest/rest_client.py\nindex b3b6dc7ef29..4ee532047a7 100644\n--- a/conans/client/rest/rest_client.py\n+++ b/conans/client/rest/rest_client.py\n@@ -79,8 +79,8 @@ def get_recipe_manifest(self, ref):\n def get_package_manifest(self, pref):\n return self._get_api().get_package_manifest(pref)\n \n- def get_package_info(self, pref):\n- return self._get_api().get_package_info(pref)\n+ def get_package_info(self, pref, headers):\n+ return self._get_api().get_package_info(pref, headers=headers)\n \n def get_recipe(self, ref, dest_folder):\n return self._get_api().get_recipe(ref, dest_folder)\n@@ -161,5 +161,5 @@ def get_package_revisions(self, pref):\n def get_latest_recipe_revision(self, ref):\n return self._get_api().get_latest_recipe_revision(ref)\n \n- def get_latest_package_revision(self, pref):\n- return self._get_api().get_latest_package_revision(pref)\n+ def get_latest_package_revision(self, pref, headers):\n+ return self._get_api().get_latest_package_revision(pref, headers=headers)\ndiff --git a/conans/client/rest/rest_client_common.py b/conans/client/rest/rest_client_common.py\nindex 769759e8050..18405e2cdf4 100644\n--- a/conans/client/rest/rest_client_common.py\n+++ b/conans/client/rest/rest_client_common.py\n@@ -13,6 +13,7 @@\n \n class JWTAuth(AuthBase):\n \"\"\"Attaches JWT Authentication to the given Request object.\"\"\"\n+\n def __init__(self, token):\n self.token = token\n \n@@ -42,6 +43,7 @@ def handle_return_deserializer(deserializer=None):\n Map exceptions and http return codes and deserialize if needed.\n \n deserializer: Function for deserialize values\"\"\"\n+\n def handle_return(method):\n def inner(*argc, **argv):\n ret = method(*argc, **argv)\n@@ -50,7 +52,9 @@ def inner(*argc, **argv):\n text = ret.text if ret.status_code != 404 else \"404 Not found\"\n raise get_exception_from_error(ret.status_code)(text)\n return deserializer(ret.content) if deserializer else decode_text(ret.content)\n+\n return inner\n+\n return handle_return\n \n \n@@ -169,19 +173,20 @@ def server_capabilities(self, user=None, password=None):\n \n return [cap.strip() for cap in server_capabilities.split(\",\") if cap]\n \n- def get_json(self, url, data=None):\n- headers = self.custom_headers\n+ def get_json(self, url, data=None, headers=None):\n+ req_headers = self.custom_headers.copy()\n+ req_headers.update(headers or {})\n if data: # POST request\n- headers.update({'Content-type': 'application/json',\n- 'Accept': 'application/json'})\n+ req_headers.update({'Content-type': 'application/json',\n+ 'Accept': 'application/json'})\n logger.debug(\"REST: 
post: %s\" % url)\n- response = self.requester.post(url, auth=self.auth, headers=headers,\n+ response = self.requester.post(url, auth=self.auth, headers=req_headers,\n verify=self.verify_ssl,\n stream=True,\n data=json.dumps(data))\n else:\n logger.debug(\"REST: get: %s\" % url)\n- response = self.requester.get(url, auth=self.auth, headers=headers,\n+ response = self.requester.get(url, auth=self.auth, headers=req_headers,\n verify=self.verify_ssl,\n stream=True)\n \n@@ -244,4 +249,3 @@ def search_packages(self, ref, query):\n url = self.router.search_packages(ref, query)\n package_infos = self.get_json(url)\n return package_infos\n-\ndiff --git a/conans/client/rest/rest_client_v1.py b/conans/client/rest/rest_client_v1.py\nindex 96fdbd43c84..ed6e34a384f 100644\n--- a/conans/client/rest/rest_client_v1.py\n+++ b/conans/client/rest/rest_client_v1.py\n@@ -106,11 +106,11 @@ def get_package_manifest(self, pref):\n logger.error(traceback.format_exc())\n raise ConanException(msg)\n \n- def get_package_info(self, pref):\n+ def get_package_info(self, pref, headers):\n \"\"\"Gets a ConanInfo file from a package\"\"\"\n pref = pref.copy_with_revs(None, None)\n url = self.router.package_download_urls(pref)\n- urls = self._get_file_to_url_dict(url)\n+ urls = self._get_file_to_url_dict(url, headers=headers)\n if not urls:\n raise PackageNotFoundException(pref)\n \n@@ -124,10 +124,10 @@ def get_package_info(self, pref):\n contents = {key: decode_text(value) for key, value in dict(contents).items()}\n return ConanInfo.loads(contents[CONANINFO])\n \n- def _get_file_to_url_dict(self, url, data=None):\n+ def _get_file_to_url_dict(self, url, data=None, headers=None):\n \"\"\"Call to url and decode the json returning a dict of {filepath: url} dict\n converting the url to a complete url when needed\"\"\"\n- urls = self.get_json(url, data=data)\n+ urls = self.get_json(url, data=data, headers=headers)\n return {filepath: complete_url(self.remote_url, url) for filepath, url in urls.items()}\n \n def _upload_recipe(self, ref, files_to_upload, retry, retry_wait):\n@@ -340,7 +340,7 @@ def get_package_revisions(self, pref):\n def get_latest_recipe_revision(self, ref):\n raise NoRestV2Available(\"The remote doesn't support revisions\")\n \n- def get_latest_package_revision(self, pref):\n+ def get_latest_package_revision(self, pref, headers):\n raise NoRestV2Available(\"The remote doesn't support revisions\")\n \n def _post_json(self, url, payload):\ndiff --git a/conans/client/rest/rest_client_v2.py b/conans/client/rest/rest_client_v2.py\nindex dd53f7b209f..8f49198e2ef 100644\n--- a/conans/client/rest/rest_client_v2.py\n+++ b/conans/client/rest/rest_client_v2.py\n@@ -39,12 +39,12 @@ def _get_file_list_json(self, url):\n data[\"files\"] = list(data[\"files\"].keys())\n return data\n \n- def _get_remote_file_contents(self, url, use_cache):\n+ def _get_remote_file_contents(self, url, use_cache, headers=None):\n # We don't want traces in output of these downloads, they are ugly in output\n downloader = FileDownloader(self.requester, None, self.verify_ssl, self._config)\n if use_cache and self._config.download_cache:\n downloader = CachedFileDownloader(self._config.download_cache, downloader)\n- contents = downloader.download(url, auth=self.auth)\n+ contents = downloader.download(url, auth=self.auth, headers=headers)\n return contents\n \n def _get_snapshot(self, url):\n@@ -77,10 +77,10 @@ def get_package_manifest(self, pref):\n logger.error(traceback.format_exc())\n raise ConanException(msg)\n \n- def get_package_info(self, 
pref):\n+ def get_package_info(self, pref, headers):\n url = self.router.package_info(pref)\n cache = (pref.revision != DEFAULT_REVISION_V1)\n- content = self._get_remote_file_contents(url, use_cache=cache)\n+ content = self._get_remote_file_contents(url, use_cache=cache, headers=headers)\n return ConanInfo.loads(decode_text(content))\n \n def get_recipe(self, ref, dest_folder):\n@@ -329,9 +329,9 @@ def get_latest_recipe_revision(self, ref):\n # Ignored data[\"time\"]\n return ref.copy_with_rev(rev)\n \n- def get_latest_package_revision(self, pref):\n+ def get_latest_package_revision(self, pref, headers):\n url = self.router.package_latest(pref)\n- data = self.get_json(url)\n+ data = self.get_json(url, headers=headers)\n prev = data[\"revision\"]\n # Ignored data[\"time\"]\n return pref.copy_with_revs(pref.ref.revision, prev)\ndiff --git a/conans/test/functional/remote/rest_api_test.py b/conans/test/functional/remote/rest_api_test.py\nindex 3fdd01e09b0..fd09f407a38 100644\n--- a/conans/test/functional/remote/rest_api_test.py\n+++ b/conans/test/functional/remote/rest_api_test.py\n@@ -159,7 +159,7 @@ def test_get_package_info(self):\n self._upload_package(pref, {CONANINFO: conan_info})\n \n # Get the package info\n- info = self.api.get_package_info(pref)\n+ info = self.api.get_package_info(pref, headers=None)\n self.assertIsInstance(info, ConanInfo)\n self.assertEqual(info, ConanInfo.loads(conan_info))\n \ndiff --git a/conans/test/functional/remote/test_request_headers.py b/conans/test/functional/remote/test_request_headers.py\nnew file mode 100644\nindex 00000000000..e1cedf3b5ed\n--- /dev/null\n+++ b/conans/test/functional/remote/test_request_headers.py\n@@ -0,0 +1,160 @@\n+import textwrap\n+import unittest\n+\n+from parameterized.parameterized import parameterized_class\n+\n+from conans.client.remote_manager import CONAN_REQUEST_HEADER_SETTINGS, CONAN_REQUEST_HEADER_OPTIONS\n+from conans.test.assets.genconanfile import GenConanfile\n+from conans.test.utils.tools import TestClient, TestServer, TestRequester\n+\n+\n+class RequesterClass(TestRequester):\n+ requests = None\n+\n+ def __init__(self, *args, **kwargs):\n+ self.requests = []\n+ super(RequesterClass, self).__init__(*args, **kwargs)\n+\n+ def get(self, url, headers=None, **kwargs):\n+ self.requests.append((url, headers))\n+ return super(RequesterClass, self).get(url, headers=headers, **kwargs)\n+\n+\n+@parameterized_class([{\"revs_enabled\": True}, {\"revs_enabled\": False}, ])\n+class RequestHeadersTestCase(unittest.TestCase):\n+ \"\"\" Conan adds a header with the settings used to compute the package ID \"\"\"\n+\n+ profile = textwrap.dedent(\"\"\"\n+ [settings]\n+ os=Macos\n+ arch=x86_64\n+ compiler=apple-clang\n+ compiler.version=11.0\n+ compiler.libcxx=libc++\n+ build_type=Release\n+ \"\"\")\n+\n+ conanfile = GenConanfile().with_settings('os', 'arch', 'compiler') \\\n+ .with_option('opt1', [True, False]) \\\n+ .with_option('shared', [True, False]) \\\n+ .with_default_option('opt1', True) \\\n+ .with_default_option('shared', False)\n+\n+ def setUp(self):\n+ test_server = TestServer(users={\"user\": \"mypass\"})\n+ self.servers = {\"default\": test_server}\n+ t = TestClient(servers=self.servers, users={\"default\": [(\"user\", \"mypass\")]})\n+ t.save({'conanfile.py': self.conanfile,\n+ 'profile': self.profile})\n+ t.run('create conanfile.py name/version@user/channel --profile:host=profile')\n+ t.run('upload name/version@user/channel --all')\n+\n+ def _get_header(self, requester, header_name):\n+ hits = sum([header_name in headers 
for _, headers in requester.requests])\n+ self.assertEquals(hits, 2 if self.revs_enabled else 1)\n+ for url, headers in requester.requests:\n+ if header_name in headers:\n+ if self.revs_enabled:\n+ self.assertTrue(url.endswith('/latest'), msg=url)\n+ else:\n+ self.assertTrue(url.endswith('/download_urls'), msg=url)\n+ return headers.get(header_name)\n+\n+ def _assert_settings_headers(self, settings_header, compiler_version='11.0'):\n+ # It takes only the values that are relevant to the recipe\n+ self.assertListEqual(\n+ sorted(['os', 'arch', 'compiler', 'compiler.version', 'compiler.libcxx']),\n+ sorted([it.split('=', 1)[0] for it in settings_header.split(';')]))\n+ self.assertIn('os=Macos', settings_header)\n+ self.assertIn('arch=x86_64', settings_header)\n+ self.assertIn('compiler=apple-clang', settings_header)\n+ self.assertIn('compiler.libcxx=libc++', settings_header)\n+ self.assertIn('compiler.version={}'.format(compiler_version), settings_header)\n+ self.assertNotIn('build_type', settings_header)\n+\n+ def _assert_options_headers(self, options_header, shared_value='False'):\n+ self.assertListEqual(['shared'], [it.split('=', 1)[0] for it in options_header.split(';')])\n+ self.assertIn('shared={}'.format(shared_value), options_header)\n+\n+ def _get_test_client(self):\n+ t = TestClient(requester_class=RequesterClass, servers=self.servers,\n+ users={\"default\": [(\"user\", \"mypass\")]})\n+ t.run('config set general.revisions_enabled={}'.format('1' if self.revs_enabled else '0'))\n+ return t\n+\n+ def test_install_recipe_mismatch(self):\n+ t = self._get_test_client()\n+ t.save({'profile': self.profile})\n+ t.run('install failing/version@user/channel --profile=profile', assert_error=True)\n+ self.assertFalse(any([CONAN_REQUEST_HEADER_SETTINGS in headers for _, headers in\n+ t.api.http_requester.requests]))\n+ self.assertFalse(any([CONAN_REQUEST_HEADER_OPTIONS in headers for _, headers in\n+ t.api.http_requester.requests]))\n+\n+ def test_install_package_match(self):\n+ t = self._get_test_client()\n+ t.save({'profile': self.profile})\n+\n+ # Package match\n+ t.run('install name/version@user/channel --profile=profile')\n+ settings_header = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_SETTINGS)\n+ self._assert_settings_headers(settings_header)\n+ options_headers = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_OPTIONS)\n+ self._assert_options_headers(options_headers)\n+\n+ # Package mismatch (settings)\n+ t.run('install name/version@user/channel --profile=profile -s compiler.version=12.0',\n+ assert_error=True)\n+ settings_header = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_SETTINGS)\n+ self._assert_settings_headers(settings_header, compiler_version='12.0')\n+\n+ # Package mismatch (options)\n+ t.run('install name/version@user/channel --profile=profile -o shared=True',\n+ assert_error=True)\n+ options_headers = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_OPTIONS)\n+ self._assert_options_headers(options_headers, shared_value='True')\n+\n+ def test_info_package_match(self):\n+ t = self._get_test_client()\n+ t.save({'profile': self.profile})\n+\n+ # Package match\n+ t.run('info name/version@user/channel --profile=profile')\n+ settings_header = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_SETTINGS)\n+ self._assert_settings_headers(settings_header)\n+ options_headers = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_OPTIONS)\n+ self._assert_options_headers(options_headers)\n+\n+ # Package mismatch 
(settings)\n+ t.run('info name/version@user/channel --profile=profile -s compiler.version=12.0')\n+ settings_header = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_SETTINGS)\n+ self._assert_settings_headers(settings_header, compiler_version='12.0')\n+\n+ # Package mismatch (options)\n+ t.run('install name/version@user/channel --profile=profile -o shared=True',\n+ assert_error=True)\n+ options_headers = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_OPTIONS)\n+ self._assert_options_headers(options_headers, shared_value='True')\n+\n+ def test_install_as_requirement(self):\n+ t = self._get_test_client()\n+ t.save({'conanfile.py': GenConanfile().with_requires('name/version@user/channel'),\n+ 'profile': self.profile})\n+\n+ # Requirement is found\n+ t.run('install . consumer/version@ --profile=profile')\n+ settings_header = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_SETTINGS)\n+ self._assert_settings_headers(settings_header)\n+ options_headers = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_OPTIONS)\n+ self._assert_options_headers(options_headers)\n+\n+ # Requirement is not found (settings)\n+ t.run('install . consumer/version@ --profile=profile -s compiler.version=12.0',\n+ assert_error=True)\n+ settings_header = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_SETTINGS)\n+ self._assert_settings_headers(settings_header, compiler_version='12.0')\n+\n+ # Requirement is not found (options)\n+ t.run('install . consumer/version@ --profile=profile -o name:shared=True', assert_error=True)\n+ options_headers = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_OPTIONS)\n+ self._assert_options_headers(options_headers, shared_value='True')\n"
}
|
[
{
"diff_hunk": "@@ -1133,7 +1133,8 @@ def get_path(self, reference, package_id=None, path=None, remote_name=None):\n if package_id:\n pref = PackageReference(ref, package_id)\n if self.app.config.revisions_enabled and not pref.revision:\n- pref = self.app.remote_manager.get_latest_package_revision(pref, remote)\n+ pref = self.app.remote_manager.get_latest_package_revision(pref, remote,",
"line": null,
"original_line": 1136,
"original_start_line": null,
"path": "conans/client/conan_api.py",
"start_line": null,
"text": "@user1:\nMaybe define a default=None for headers arg?"
}
] |
b69a00bd5dc7fa6cd2ddfdcf25f09600081036b1
|
diff --git a/conans/client/graph/graph_binaries.py b/conans/client/graph/graph_binaries.py
index 033067b313a..ee81dd4aab6 100644
--- a/conans/client/graph/graph_binaries.py
+++ b/conans/client/graph/graph_binaries.py
@@ -93,11 +93,14 @@ def _evaluate_cache_pkg(self, node, package_layout, pref, metadata, remote, remo
node.prev = metadata.packages[pref.id].revision
assert node.prev, "PREV for %s is None: %s" % (str(pref), metadata.dumps())
+ def _get_package_info(self, node, pref, remote):
+ return self._remote_manager.get_package_info(pref, remote, info=node.conanfile.info)
+
def _evaluate_remote_pkg(self, node, pref, remote, remotes):
remote_info = None
if remote:
try:
- remote_info, pref = self._remote_manager.get_package_info(pref, remote)
+ remote_info, pref = self._get_package_info(node, pref, remote)
except NotFoundException:
pass
except Exception:
@@ -107,9 +110,9 @@ def _evaluate_remote_pkg(self, node, pref, remote, remotes):
# If the "remote" came from the registry but the user didn't specified the -r, with
# revisions iterate all remotes
if not remote or (not remote_info and self._cache.config.revisions_enabled):
- for r in remotes.values():
+ for r in remotes.values(): # FIXME: Here we hit the same remote we did before
try:
- remote_info, pref = self._remote_manager.get_package_info(pref, r)
+ remote_info, pref = self._get_package_info(node, pref, r)
except NotFoundException:
pass
else:
@@ -236,7 +239,7 @@ def _process_node(self, node, pref, build_mode, update, remotes):
if build_mode.outdated:
if node.binary in (BINARY_CACHE, BINARY_DOWNLOAD, BINARY_UPDATE):
if node.binary == BINARY_UPDATE:
- info, pref = self._remote_manager.get_package_info(pref, remote)
+ info, pref = self._get_package_info(node, pref, remote)
recipe_hash = info.recipe_hash
elif node.binary == BINARY_CACHE:
package_folder = package_layout.package(pref)
diff --git a/conans/client/remote_manager.py b/conans/client/remote_manager.py
index 6df71cd4dbc..677e143d7d4 100644
--- a/conans/client/remote_manager.py
+++ b/conans/client/remote_manager.py
@@ -20,6 +20,27 @@
log_recipe_download, log_recipe_sources_download,
log_uncompressed_file)
+CONAN_REQUEST_HEADER_SETTINGS = 'Conan-PkgID-Settings'
+CONAN_REQUEST_HEADER_OPTIONS = 'Conan-PkgID-Options'
+
+
+def _headers_for_info(info):
+ if not info:
+ return None
+
+ r = {}
+ settings = info.full_settings.as_list()
+ if settings:
+ settings = ['{}={}'.format(*it) for it in settings]
+ r.update({CONAN_REQUEST_HEADER_SETTINGS: ';'.join(settings)})
+
+ options = info.options.as_list()
+ if options:
+ options = filter(lambda u: u[0] in ['shared', 'fPIC', 'header_only'], options)
+ options = ['{}={}'.format(*it) for it in options]
+ r.update({CONAN_REQUEST_HEADER_OPTIONS: ';'.join(options)})
+ return r
+
class RemoteManager(object):
""" Will handle the remotes to get recipes, packages etc """
@@ -38,7 +59,7 @@ def get_recipe_snapshot(self, ref, remote):
return self._call_remote(remote, "get_recipe_snapshot", ref)
def get_package_snapshot(self, pref, remote):
- assert pref.ref.revision, "upload_package requires RREV"
+ assert pref.ref.revision, "get_package_snapshot requires RREV"
assert pref.revision, "get_package_snapshot requires PREV"
return self._call_remote(remote, "get_package_snapshot", pref)
@@ -58,14 +79,16 @@ def get_recipe_manifest(self, ref, remote):
return self._call_remote(remote, "get_recipe_manifest", ref), ref
def get_package_manifest(self, pref, remote):
- pref = self._resolve_latest_pref(pref, remote)
+ pref = self._resolve_latest_pref(pref, remote, headers=None)
return self._call_remote(remote, "get_package_manifest", pref), pref
- def get_package_info(self, pref, remote):
+ def get_package_info(self, pref, remote, info=None):
""" Read a package ConanInfo from remote
"""
- pref = self._resolve_latest_pref(pref, remote)
- return self._call_remote(remote, "get_package_info", pref), pref
+ headers = _headers_for_info(info)
+ pref = self._resolve_latest_pref(pref, remote, headers=headers)
+ # FIXME Conan 2.0: With revisions, it is not needed to pass headers to this second function
+ return self._call_remote(remote, "get_package_info", pref, headers=headers), pref
def get_recipe(self, ref, remote):
"""
@@ -140,16 +163,18 @@ def get_package(self, conanfile, pref, layout, remote, output, recorder):
output.info("Retrieving package %s from remote '%s' " % (pref.id, remote.name))
layout.package_remove(pref) # Remove first the destination folder
with layout.set_dirty_context_manager(pref):
- self._get_package(layout, pref, remote, output, recorder)
+ info = getattr(conanfile, 'info', None)
+ self._get_package(layout, pref, remote, output, recorder, info=info)
self._hook_manager.execute("post_download_package", conanfile_path=conanfile_path,
reference=pref.ref, package_id=pref.id, remote=remote,
conanfile=conanfile)
- def _get_package(self, layout, pref, remote, output, recorder):
+ def _get_package(self, layout, pref, remote, output, recorder, info):
t1 = time.time()
try:
- pref = self._resolve_latest_pref(pref, remote)
+ headers = _headers_for_info(info)
+ pref = self._resolve_latest_pref(pref, remote, headers=headers)
snapshot = self._call_remote(remote, "get_package_snapshot", pref)
if not is_package_snapshot_complete(snapshot):
raise PackageNotFoundException(pref)
@@ -230,8 +255,8 @@ def get_latest_recipe_revision(self, ref, remote):
revision = self._call_remote(remote, "get_latest_recipe_revision", ref)
return revision
- def get_latest_package_revision(self, pref, remote):
- revision = self._call_remote(remote, "get_latest_package_revision", pref)
+ def get_latest_package_revision(self, pref, remote, headers=None):
+ revision = self._call_remote(remote, "get_latest_package_revision", pref, headers=headers)
return revision
def _resolve_latest_ref(self, ref, remote):
@@ -242,16 +267,16 @@ def _resolve_latest_ref(self, ref, remote):
ref = ref.copy_with_rev(DEFAULT_REVISION_V1)
return ref
- def _resolve_latest_pref(self, pref, remote):
+ def _resolve_latest_pref(self, pref, remote, headers):
if pref.revision is None:
try:
- pref = self.get_latest_package_revision(pref, remote)
+ pref = self.get_latest_package_revision(pref, remote, headers=headers)
except NoRestV2Available:
pref = pref.copy_with_revs(pref.ref.revision, DEFAULT_REVISION_V1)
return pref
def _call_remote(self, remote, method, *args, **kwargs):
- assert(isinstance(remote, Remote))
+ assert (isinstance(remote, Remote))
try:
return self._auth_manager.call_rest_api_method(remote, method, *args, **kwargs)
except ConnectionError as exc:
@@ -294,7 +319,7 @@ def uncompress_file(src_path, dest_folder, output):
t1 = time.time()
try:
with progress_bar.open_binary(src_path, output, "Decompressing %s" % os.path.basename(
- src_path)) as file_handler:
+ src_path)) as file_handler:
tar_extract(file_handler, dest_folder)
except Exception as e:
error_msg = "Error while downloading/extracting files to %s\n%s\n" % (dest_folder, str(e))
diff --git a/conans/client/rest/rest_client.py b/conans/client/rest/rest_client.py
index b3b6dc7ef29..4ee532047a7 100644
--- a/conans/client/rest/rest_client.py
+++ b/conans/client/rest/rest_client.py
@@ -79,8 +79,8 @@ def get_recipe_manifest(self, ref):
def get_package_manifest(self, pref):
return self._get_api().get_package_manifest(pref)
- def get_package_info(self, pref):
- return self._get_api().get_package_info(pref)
+ def get_package_info(self, pref, headers):
+ return self._get_api().get_package_info(pref, headers=headers)
def get_recipe(self, ref, dest_folder):
return self._get_api().get_recipe(ref, dest_folder)
@@ -161,5 +161,5 @@ def get_package_revisions(self, pref):
def get_latest_recipe_revision(self, ref):
return self._get_api().get_latest_recipe_revision(ref)
- def get_latest_package_revision(self, pref):
- return self._get_api().get_latest_package_revision(pref)
+ def get_latest_package_revision(self, pref, headers):
+ return self._get_api().get_latest_package_revision(pref, headers=headers)
diff --git a/conans/client/rest/rest_client_common.py b/conans/client/rest/rest_client_common.py
index 769759e8050..18405e2cdf4 100644
--- a/conans/client/rest/rest_client_common.py
+++ b/conans/client/rest/rest_client_common.py
@@ -13,6 +13,7 @@
class JWTAuth(AuthBase):
"""Attaches JWT Authentication to the given Request object."""
+
def __init__(self, token):
self.token = token
@@ -42,6 +43,7 @@ def handle_return_deserializer(deserializer=None):
Map exceptions and http return codes and deserialize if needed.
deserializer: Function for deserialize values"""
+
def handle_return(method):
def inner(*argc, **argv):
ret = method(*argc, **argv)
@@ -50,7 +52,9 @@ def inner(*argc, **argv):
text = ret.text if ret.status_code != 404 else "404 Not found"
raise get_exception_from_error(ret.status_code)(text)
return deserializer(ret.content) if deserializer else decode_text(ret.content)
+
return inner
+
return handle_return
@@ -169,19 +173,20 @@ def server_capabilities(self, user=None, password=None):
return [cap.strip() for cap in server_capabilities.split(",") if cap]
- def get_json(self, url, data=None):
- headers = self.custom_headers
+ def get_json(self, url, data=None, headers=None):
+ req_headers = self.custom_headers.copy()
+ req_headers.update(headers or {})
if data: # POST request
- headers.update({'Content-type': 'application/json',
- 'Accept': 'application/json'})
+ req_headers.update({'Content-type': 'application/json',
+ 'Accept': 'application/json'})
logger.debug("REST: post: %s" % url)
- response = self.requester.post(url, auth=self.auth, headers=headers,
+ response = self.requester.post(url, auth=self.auth, headers=req_headers,
verify=self.verify_ssl,
stream=True,
data=json.dumps(data))
else:
logger.debug("REST: get: %s" % url)
- response = self.requester.get(url, auth=self.auth, headers=headers,
+ response = self.requester.get(url, auth=self.auth, headers=req_headers,
verify=self.verify_ssl,
stream=True)
@@ -244,4 +249,3 @@ def search_packages(self, ref, query):
url = self.router.search_packages(ref, query)
package_infos = self.get_json(url)
return package_infos
-
diff --git a/conans/client/rest/rest_client_v1.py b/conans/client/rest/rest_client_v1.py
index 96fdbd43c84..ed6e34a384f 100644
--- a/conans/client/rest/rest_client_v1.py
+++ b/conans/client/rest/rest_client_v1.py
@@ -106,11 +106,11 @@ def get_package_manifest(self, pref):
logger.error(traceback.format_exc())
raise ConanException(msg)
- def get_package_info(self, pref):
+ def get_package_info(self, pref, headers):
"""Gets a ConanInfo file from a package"""
pref = pref.copy_with_revs(None, None)
url = self.router.package_download_urls(pref)
- urls = self._get_file_to_url_dict(url)
+ urls = self._get_file_to_url_dict(url, headers=headers)
if not urls:
raise PackageNotFoundException(pref)
@@ -124,10 +124,10 @@ def get_package_info(self, pref):
contents = {key: decode_text(value) for key, value in dict(contents).items()}
return ConanInfo.loads(contents[CONANINFO])
- def _get_file_to_url_dict(self, url, data=None):
+ def _get_file_to_url_dict(self, url, data=None, headers=None):
"""Call to url and decode the json returning a dict of {filepath: url} dict
converting the url to a complete url when needed"""
- urls = self.get_json(url, data=data)
+ urls = self.get_json(url, data=data, headers=headers)
return {filepath: complete_url(self.remote_url, url) for filepath, url in urls.items()}
def _upload_recipe(self, ref, files_to_upload, retry, retry_wait):
@@ -340,7 +340,7 @@ def get_package_revisions(self, pref):
def get_latest_recipe_revision(self, ref):
raise NoRestV2Available("The remote doesn't support revisions")
- def get_latest_package_revision(self, pref):
+ def get_latest_package_revision(self, pref, headers):
raise NoRestV2Available("The remote doesn't support revisions")
def _post_json(self, url, payload):
diff --git a/conans/client/rest/rest_client_v2.py b/conans/client/rest/rest_client_v2.py
index dd53f7b209f..8f49198e2ef 100644
--- a/conans/client/rest/rest_client_v2.py
+++ b/conans/client/rest/rest_client_v2.py
@@ -39,12 +39,12 @@ def _get_file_list_json(self, url):
data["files"] = list(data["files"].keys())
return data
- def _get_remote_file_contents(self, url, use_cache):
+ def _get_remote_file_contents(self, url, use_cache, headers=None):
# We don't want traces in output of these downloads, they are ugly in output
downloader = FileDownloader(self.requester, None, self.verify_ssl, self._config)
if use_cache and self._config.download_cache:
downloader = CachedFileDownloader(self._config.download_cache, downloader)
- contents = downloader.download(url, auth=self.auth)
+ contents = downloader.download(url, auth=self.auth, headers=headers)
return contents
def _get_snapshot(self, url):
@@ -77,10 +77,10 @@ def get_package_manifest(self, pref):
logger.error(traceback.format_exc())
raise ConanException(msg)
- def get_package_info(self, pref):
+ def get_package_info(self, pref, headers):
url = self.router.package_info(pref)
cache = (pref.revision != DEFAULT_REVISION_V1)
- content = self._get_remote_file_contents(url, use_cache=cache)
+ content = self._get_remote_file_contents(url, use_cache=cache, headers=headers)
return ConanInfo.loads(decode_text(content))
def get_recipe(self, ref, dest_folder):
@@ -329,9 +329,9 @@ def get_latest_recipe_revision(self, ref):
# Ignored data["time"]
return ref.copy_with_rev(rev)
- def get_latest_package_revision(self, pref):
+ def get_latest_package_revision(self, pref, headers):
url = self.router.package_latest(pref)
- data = self.get_json(url)
+ data = self.get_json(url, headers=headers)
prev = data["revision"]
# Ignored data["time"]
return pref.copy_with_revs(pref.ref.revision, prev)
diff --git a/conans/test/functional/remote/rest_api_test.py b/conans/test/functional/remote/rest_api_test.py
index 3fdd01e09b0..fd09f407a38 100644
--- a/conans/test/functional/remote/rest_api_test.py
+++ b/conans/test/functional/remote/rest_api_test.py
@@ -159,7 +159,7 @@ def test_get_package_info(self):
self._upload_package(pref, {CONANINFO: conan_info})
# Get the package info
- info = self.api.get_package_info(pref)
+ info = self.api.get_package_info(pref, headers=None)
self.assertIsInstance(info, ConanInfo)
self.assertEqual(info, ConanInfo.loads(conan_info))
diff --git a/conans/test/functional/remote/test_request_headers.py b/conans/test/functional/remote/test_request_headers.py
new file mode 100644
index 00000000000..15dc1cbdbc9
--- /dev/null
+++ b/conans/test/functional/remote/test_request_headers.py
@@ -0,0 +1,161 @@
+import textwrap
+import unittest
+
+from parameterized.parameterized import parameterized_class
+
+from conans.client.remote_manager import CONAN_REQUEST_HEADER_SETTINGS, CONAN_REQUEST_HEADER_OPTIONS
+from conans.test.assets.genconanfile import GenConanfile
+from conans.test.utils.tools import TestClient, TestServer, TestRequester
+from conans.util.env_reader import get_env
+
+
+class RequesterClass(TestRequester):
+ requests = None
+
+ def __init__(self, *args, **kwargs):
+ self.requests = []
+ super(RequesterClass, self).__init__(*args, **kwargs)
+
+ def get(self, url, headers=None, **kwargs):
+ self.requests.append((url, headers))
+ return super(RequesterClass, self).get(url, headers=headers, **kwargs)
+
+
+class RequestHeadersTestCase(unittest.TestCase):
+ """ Conan adds a header with the settings used to compute the package ID """
+ revs_enabled = get_env("TESTING_REVISIONS_ENABLED", False)
+
+ profile = textwrap.dedent("""
+ [settings]
+ os=Macos
+ arch=x86_64
+ compiler=apple-clang
+ compiler.version=11.0
+ compiler.libcxx=libc++
+ build_type=Release
+ """)
+
+ conanfile = GenConanfile().with_settings('os', 'arch', 'compiler') \
+ .with_option('opt1', [True, False]) \
+ .with_option('shared', [True, False]) \
+ .with_default_option('opt1', True) \
+ .with_default_option('shared', False)
+
+ def setUp(self):
+ test_server = TestServer(users={"user": "mypass"})
+ self.servers = {"default": test_server}
+ t = TestClient(servers=self.servers, users={"default": [("user", "mypass")]})
+ t.save({'conanfile.py': self.conanfile,
+ 'profile': self.profile})
+ t.run('create conanfile.py name/version@user/channel --profile:host=profile')
+ t.run('upload name/version@user/channel --all')
+
+ def _get_header(self, requester, header_name):
+ hits = sum([header_name in headers for _, headers in requester.requests])
+ self.assertEquals(hits, 2 if self.revs_enabled else 1)
+ for url, headers in requester.requests:
+ if header_name in headers:
+ if self.revs_enabled:
+ self.assertTrue(url.endswith('/latest'), msg=url)
+ else:
+ self.assertTrue(url.endswith('/download_urls'), msg=url)
+ return headers.get(header_name)
+
+ def _assert_settings_headers(self, settings_header, compiler_version='11.0'):
+ # It takes only the values that are relevant to the recipe
+ self.assertListEqual(
+ sorted(['os', 'arch', 'compiler', 'compiler.version', 'compiler.libcxx']),
+ sorted([it.split('=', 1)[0] for it in settings_header.split(';')]))
+ self.assertIn('os=Macos', settings_header)
+ self.assertIn('arch=x86_64', settings_header)
+ self.assertIn('compiler=apple-clang', settings_header)
+ self.assertIn('compiler.libcxx=libc++', settings_header)
+ self.assertIn('compiler.version={}'.format(compiler_version), settings_header)
+ self.assertNotIn('build_type', settings_header)
+
+ def _assert_options_headers(self, options_header, shared_value='False'):
+ self.assertListEqual(['shared'], [it.split('=', 1)[0] for it in options_header.split(';')])
+ self.assertIn('shared={}'.format(shared_value), options_header)
+
+ def _get_test_client(self):
+ t = TestClient(requester_class=RequesterClass, servers=self.servers,
+ users={"default": [("user", "mypass")]})
+ t.run('config set general.revisions_enabled={}'.format('1' if self.revs_enabled else '0'))
+ return t
+
+ def test_install_recipe_mismatch(self):
+ t = self._get_test_client()
+ t.save({'profile': self.profile})
+ t.run('install failing/version@user/channel --profile=profile', assert_error=True)
+ self.assertFalse(any([CONAN_REQUEST_HEADER_SETTINGS in headers for _, headers in
+ t.api.http_requester.requests]))
+ self.assertFalse(any([CONAN_REQUEST_HEADER_OPTIONS in headers for _, headers in
+ t.api.http_requester.requests]))
+
+ def test_install_package_match(self):
+ t = self._get_test_client()
+ t.save({'profile': self.profile})
+
+ # Package match
+ t.run('install name/version@user/channel --profile=profile')
+ settings_header = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_SETTINGS)
+ self._assert_settings_headers(settings_header)
+ options_headers = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_OPTIONS)
+ self._assert_options_headers(options_headers)
+
+ # Package mismatch (settings)
+ t.run('install name/version@user/channel --profile=profile -s compiler.version=12.0',
+ assert_error=True)
+ settings_header = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_SETTINGS)
+ self._assert_settings_headers(settings_header, compiler_version='12.0')
+
+ # Package mismatch (options)
+ t.run('install name/version@user/channel --profile=profile -o shared=True',
+ assert_error=True)
+ options_headers = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_OPTIONS)
+ self._assert_options_headers(options_headers, shared_value='True')
+
+ def test_info_package_match(self):
+ t = self._get_test_client()
+ t.save({'profile': self.profile})
+
+ # Package match
+ t.run('info name/version@user/channel --profile=profile')
+ settings_header = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_SETTINGS)
+ self._assert_settings_headers(settings_header)
+ options_headers = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_OPTIONS)
+ self._assert_options_headers(options_headers)
+
+ # Package mismatch (settings)
+ t.run('info name/version@user/channel --profile=profile -s compiler.version=12.0')
+ settings_header = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_SETTINGS)
+ self._assert_settings_headers(settings_header, compiler_version='12.0')
+
+ # Package mismatch (options)
+ t.run('install name/version@user/channel --profile=profile -o shared=True',
+ assert_error=True)
+ options_headers = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_OPTIONS)
+ self._assert_options_headers(options_headers, shared_value='True')
+
+ def test_install_as_requirement(self):
+ t = self._get_test_client()
+ t.save({'conanfile.py': GenConanfile().with_requires('name/version@user/channel'),
+ 'profile': self.profile})
+
+ # Requirement is found
+ t.run('install . consumer/version@ --profile=profile')
+ settings_header = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_SETTINGS)
+ self._assert_settings_headers(settings_header)
+ options_headers = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_OPTIONS)
+ self._assert_options_headers(options_headers)
+
+ # Requirement is not found (settings)
+ t.run('install . consumer/version@ --profile=profile -s compiler.version=12.0',
+ assert_error=True)
+ settings_header = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_SETTINGS)
+ self._assert_settings_headers(settings_header, compiler_version='12.0')
+
+ # Requirement is not found (options)
+ t.run('install . consumer/version@ --profile=profile -o name:shared=True', assert_error=True)
+ options_headers = self._get_header(t.api.http_requester, CONAN_REQUEST_HEADER_OPTIONS)
+ self._assert_options_headers(options_headers, shared_value='True')
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
}
|
conan-io__conan-7919@b953ac1
|
conan-io/conan
|
Python
| 7,919
|
[toolchain] CMake + Ninja
|
Changelog: Omit
Docs: omit
closes #7814
Testing CMakeToolchain with Ninja generator
CMake + Ninja only uses one Ninja-specific variable: [CMAKE_NINJA_OUTPUT_PATH_PREFIX](https://cmake.org/cmake/help/v3.7/variable/CMAKE_NINJA_OUTPUT_PATH_PREFIX.html), but it doesn't affect our jobs. It sets the directory for the extra Ninja files, which by default are placed in the CMakeFiles/ folder.
More info about CMake and Ninja: https://cmake.org/cmake/help/latest/generator/Ninja.html
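For reference, a minimal recipe sketch in the spirit of the test added in this PR (package name and source files are placeholders); Ninja is opted into either through the `CONAN_CMAKE_GENERATOR=Ninja` environment variable or by passing `generator='Ninja'` to the build helper:

```python
from conans import ConanFile, CMake, CMakeToolchain


class FoobarConan(ConanFile):
    name = "foobar"
    settings = "os", "arch", "compiler", "build_type"
    exports_sources = "CMakeLists.txt", "foobar.hpp", "foobar.cpp"

    def toolchain(self):
        # writes conan_toolchain.cmake, later consumed via -DCMAKE_TOOLCHAIN_FILE
        tc = CMakeToolchain(self)
        tc.write_toolchain_files()

    def build(self):
        # explicit opt-in to Ninja; CONAN_CMAKE_GENERATOR=Ninja works as well
        cmake = CMake(self, generator="Ninja")
        cmake.configure()
        cmake.build()
```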
- [ ] Refer to the issue that supports this Pull Request.
- [ ] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.
- [ ] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [ ] I've followed the PEP8 style guides for Python code.
- [ ] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.
<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>
|
2020-10-21T19:14:54Z
|
[feature] CMakeToolchain + Ninja PoC
Implement a proof of concept of using the new CMakeToolchain feature for building with Ninja:
- From Linux
- From Windows
Notes:
- Ninja can be assumed installed.
- Opt-in to use Ninja instead of the default (MSBuild, Makefiles) build system, or hardcoded in the recipe, investigate alternatives
- Focused on creating/building 1 package, both local flow (conan install + native cmake) and ``conan create`` flow (a condensed sketch of the local flow is included after this list).
- Integration test.
- Test can be skipped for CI, but should work locally, annotate assumptions and installation details
- No env-var configuration at all
- All code must be private and local to the toolchains package
- New cross-build model with contexts must be used if necessary, not the old os_build, arch_build
- If enough time to implement reuse of a package, the ``cmake_find_package_multi`` generator must be used
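A condensed sketch of the local flow mentioned above, assuming Ninja and CMake are on PATH and a `default` profile exists (file contents and names are placeholders, mirroring the test that exercises this):

```python
import textwrap

from conans.test.utils.tools import TestClient

client = TestClient()
client.save({
    "CMakeLists.txt": textwrap.dedent("""\
        cmake_minimum_required(VERSION 2.8.12)
        project(foobar CXX)
        add_library(foobar foobar.cpp)
        """),
    "foobar.cpp": "int bar() { return 0; }\n",
    "conanfile.py": textwrap.dedent("""\
        from conans import ConanFile, CMake, CMakeToolchain

        class Foobar(ConanFile):
            name = "foobar"
            settings = "os", "arch", "compiler", "build_type"
            exports_sources = "CMakeLists.txt", "foobar.cpp"

            def toolchain(self):
                CMakeToolchain(self).write_toolchain_files()

            def build(self):
                cmake = CMake(self, generator="Ninja")
                cmake.configure()
                cmake.build()
        """),
})
# Local flow: conan install, then drive CMake/Ninja natively with the generated toolchain
client.run("install . -pr:h=default -pr:b=default")
client.run_command('cmake . -G "Ninja" -DCMAKE_TOOLCHAIN_FILE=conan_toolchain.cmake')
client.run_command("cmake --build .")
# Cache flow: conan create builds the same package inside the cache
client.run("create . foobar/0.1.0@ -pr:h=default -pr:b=default")
```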
|
I would like to take this one.
|
[
{
"body": "Implement a proof of concept of using the new CMakeToolchain feature for building with Ninja:\r\n\r\n- From Linux\r\n- From Windows\r\n\r\nNotes:\r\n- Ninja can be assumed installed. \r\n- Opt-in to use Ninja instead of the default (MSBuild, Makefiles) build system, or hardcoded in the recipe, investigate alternatives\r\n- Focused on creating/building 1 package, both local flow (conan install + native cmake) and ``conan create`` flow.\r\n- Integration test.\r\n- Test can be skipped for CI, but should work locally, annotate assumptions and installation details\r\n- No env-var configuration at all\r\n- All code must be private and local to the toolchains package\r\n- New cross-build model with contexts must be used if necessary, not the old os_build, arch_build\r\n- If enough time to implement reuse of a package, the ``cmake_find_package_multi`` generator must be used",
"number": 7814,
"title": "[feature] CMakeToolchain + Ninja PoC"
}
] |
8c7bc12106a78ea59ab2d01fe40dbed78aade0e1
|
{
"head_commit": "b953ac169ee17765b25bac40ac5ac9187e05ad4a",
"head_commit_message": "#7814 Add new integration test for cmake-ninja\n\nSigned-off-by: Uilian Ries <[email protected]>",
"patch_to_review": "diff --git a/conans/test/integration/toolchains/cmake/__init__.py b/conans/test/integration/toolchains/cmake/__init__.py\nnew file mode 100644\nindex 00000000000..e69de29bb2d\ndiff --git a/conans/test/integration/toolchains/cmake/test_ninja.py b/conans/test/integration/toolchains/cmake/test_ninja.py\nnew file mode 100644\nindex 00000000000..15ba28580df\n--- /dev/null\n+++ b/conans/test/integration/toolchains/cmake/test_ninja.py\n@@ -0,0 +1,127 @@\n+import shutil\n+import textwrap\n+import unittest\n+import os\n+\n+from conans.test.utils.tools import TestClient\n+from conans.test.utils.test_files import temp_folder\n+from conans.client.tools import environment_append\n+\n+\n+class CppProject(object):\n+\n+ header = textwrap.dedent(\"\"\"\n+ #include <string>\n+ int bar(const std::string& str);\n+ \"\"\")\n+\n+ source = textwrap.dedent(\"\"\"\n+ #include \"foobar.hpp\"\n+ #include <iostream>\n+ int bar(const std::string& str) {\n+ std::cout << \"(BAR): \" << str << std::endl;\n+ return 0;\n+ }\n+ \"\"\")\n+\n+ cmakefile = textwrap.dedent(\"\"\"\n+ cmake_minimum_required(VERSION 2.8.12)\n+ project(foobar CXX)\n+ add_library(${CMAKE_PROJECT_NAME} foobar.hpp foobar.cpp)\n+ set_target_properties(${CMAKE_PROJECT_NAME} PROPERTIES PUBLIC_HEADER foobar.hpp)\n+ install(TARGETS ${CMAKE_PROJECT_NAME}\n+ RUNTIME DESTINATION bin\n+ LIBRARY DESTINATION lib\n+ ARCHIVE DESTINATION lib\n+ PUBLIC_HEADER DESTINATION include\n+ )\n+ \"\"\")\n+\n+ def create_project(self, testclient):\n+ testclient.save({\n+ \"foobar.hpp\": CppProject.header,\n+ \"foobar.cpp\": CppProject.source,\n+ \"CMakeLists.txt\": CppProject.cmakefile\n+ })\n+\n+\n+class CMakeNinjaTestCase(unittest.TestCase):\n+ # This test assumes that 'CMake' and 'Ninja' are available in the system\n+\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile, CMake, CMakeToolchain\n+\n+ class Foobar(ConanFile):\n+ name = \"foobar\"\n+ settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n+ exports_sources = \"CMakeLists.txt\", \"foobar.hpp\", \"foobar.cpp\"\n+ options = {\"shared\": [True, False]}\n+ default_options = {\"shared\": False}\n+\n+ def toolchain(self):\n+ tc = CMakeToolchain(self)\n+ # tc.preprocessor_definitions[\"CMAKE_NINJA_OUTPUT_PATH_PREFIX\"] = \"MyValue\"\n+ tc.write_toolchain_files()\n+\n+ def build(self):\n+ cmake = CMake(self)\n+ cmake.configure()\n+ cmake.build()\n+\n+ def package(self):\n+ cmake = CMake(self)\n+\n+ cmake.configure()\n+ cmake.install()\n+ \"\"\")\n+\n+ @classmethod\n+ def setUpClass(cls):\n+ if not shutil.which(\"ninja\"):\n+ raise unittest.SkipTest(\"Ninja expected in PATH\")\n+\n+ def setUp(self):\n+ folder = temp_folder(False)\n+ cpp_project = CppProject()\n+ self.client = TestClient(current_folder=folder)\n+ cpp_project.create_project(self.client)\n+ self.client.save({\n+ \"conanfile.py\": CMakeNinjaTestCase.conanfile,\n+ })\n+\n+ def test_regular_build(self):\n+ \"\"\" Ninja build must proceed using default profile and conan create\n+ \"\"\"\n+ with environment_append({\"CONAN_CMAKE_GENERATOR\": \"Ninja\"}):\n+ self.client.run(\"create . foobar/0.1.0@\")\n+ self.assertIn('CMake command: cmake -G \"Ninja\" '\n+ '-DCMAKE_TOOLCHAIN_FILE=\"conan_toolchain.cmake\"', self.client.out)\n+\n+ conanfile = CMakeNinjaTestCase.conanfile.replace(\"(self)\", \"(self, generator='Ninja')\")\n+ self.client.save({\n+ \"conanfile.py\": conanfile,\n+ })\n+ self.client.run(\"create . 
foobar/0.1.0@\")\n+ self.assertIn('CMake command: cmake -G \"Ninja\" '\n+ '-DCMAKE_TOOLCHAIN_FILE=\"conan_toolchain.cmake\"', self.client.out)\n+\n+ def test_devflow_build(self):\n+ \"\"\" Ninja build must proceed using default profile and conan development flow\n+ \"\"\"\n+ conanfile = CMakeNinjaTestCase.conanfile.replace(\"(self)\", \"(self, generator='Ninja')\")\n+ self.client.save({\n+ \"conanfile.py\": conanfile,\n+ })\n+\n+ build_folder = os.path.join(self.client.current_folder, \"build\")\n+ package_folder = os.path.join(self.client.current_folder, \"pkg\")\n+ with environment_append({\"CONAN_PRINT_RUN_COMMANDS\": \"1\"}):\n+ self.client.run(\"export . foobar/0.1.0@\")\n+ self.client.run(\"install . --install-folder={}\".format(build_folder))\n+ self.client.run(\"build . --build-folder={}\".format(build_folder))\n+ self.assertIn('CMake command: cmake -G \"Ninja\" '\n+ '-DCMAKE_TOOLCHAIN_FILE=\"conan_toolchain.cmake\"', self.client.out)\n+ # FIXME: conan package tries to install on /usr/local. CMAKE_PREFIX_PATH is empty\n+ self.client.run(\"package . --build-folder={} --package-folder={}\"\n+ .format(build_folder, package_folder), assert_error=True)\n+ self.assertIn('Permission denied.', self.client.out)\n"
}
|
[
{
"diff_hunk": "@@ -0,0 +1,127 @@\n+import shutil\n+import textwrap\n+import unittest\n+import os\n+\n+from conans.test.utils.tools import TestClient\n+from conans.test.utils.test_files import temp_folder\n+from conans.client.tools import environment_append\n+\n+\n+class CppProject(object):\n+\n+ header = textwrap.dedent(\"\"\"\n+ #include <string>\n+ int bar(const std::string& str);\n+ \"\"\")\n+\n+ source = textwrap.dedent(\"\"\"\n+ #include \"foobar.hpp\"\n+ #include <iostream>\n+ int bar(const std::string& str) {\n+ std::cout << \"(BAR): \" << str << std::endl;\n+ return 0;\n+ }\n+ \"\"\")\n+\n+ cmakefile = textwrap.dedent(\"\"\"\n+ cmake_minimum_required(VERSION 2.8.12)\n+ project(foobar CXX)\n+ add_library(${CMAKE_PROJECT_NAME} foobar.hpp foobar.cpp)\n+ set_target_properties(${CMAKE_PROJECT_NAME} PROPERTIES PUBLIC_HEADER foobar.hpp)\n+ install(TARGETS ${CMAKE_PROJECT_NAME}\n+ RUNTIME DESTINATION bin\n+ LIBRARY DESTINATION lib\n+ ARCHIVE DESTINATION lib\n+ PUBLIC_HEADER DESTINATION include\n+ )\n+ \"\"\")\n+\n+ def create_project(self, testclient):\n+ testclient.save({\n+ \"foobar.hpp\": CppProject.header,\n+ \"foobar.cpp\": CppProject.source,\n+ \"CMakeLists.txt\": CppProject.cmakefile\n+ })\n+\n+\n+class CMakeNinjaTestCase(unittest.TestCase):\n+ # This test assumes that 'CMake' and 'Ninja' are available in the system\n+\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile, CMake, CMakeToolchain\n+\n+ class Foobar(ConanFile):\n+ name = \"foobar\"\n+ settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n+ exports_sources = \"CMakeLists.txt\", \"foobar.hpp\", \"foobar.cpp\"\n+ options = {\"shared\": [True, False]}\n+ default_options = {\"shared\": False}\n+\n+ def toolchain(self):\n+ tc = CMakeToolchain(self)\n+ # tc.preprocessor_definitions[\"CMAKE_NINJA_OUTPUT_PATH_PREFIX\"] = \"MyValue\"\n+ tc.write_toolchain_files()\n+\n+ def build(self):\n+ cmake = CMake(self)\n+ cmake.configure()\n+ cmake.build()\n+\n+ def package(self):\n+ cmake = CMake(self)\n+\n+ cmake.configure()\n+ cmake.install()\n+ \"\"\")\n+\n+ @classmethod\n+ def setUpClass(cls):\n+ if not shutil.which(\"ninja\"):\n+ raise unittest.SkipTest(\"Ninja expected in PATH\")\n+\n+ def setUp(self):\n+ folder = temp_folder(False)\n+ cpp_project = CppProject()\n+ self.client = TestClient(current_folder=folder)\n+ cpp_project.create_project(self.client)\n+ self.client.save({\n+ \"conanfile.py\": CMakeNinjaTestCase.conanfile,\n+ })\n+\n+ def test_regular_build(self):\n+ \"\"\" Ninja build must proceed using default profile and conan create\n+ \"\"\"\n+ with environment_append({\"CONAN_CMAKE_GENERATOR\": \"Ninja\"}):\n+ self.client.run(\"create . foobar/0.1.0@\")\n+ self.assertIn('CMake command: cmake -G \"Ninja\" '\n+ '-DCMAKE_TOOLCHAIN_FILE=\"conan_toolchain.cmake\"', self.client.out)\n+\n+ conanfile = CMakeNinjaTestCase.conanfile.replace(\"(self)\", \"(self, generator='Ninja')\")\n+ self.client.save({\n+ \"conanfile.py\": conanfile,\n+ })\n+ self.client.run(\"create . 
foobar/0.1.0@\")\n+ self.assertIn('CMake command: cmake -G \"Ninja\" '\n+ '-DCMAKE_TOOLCHAIN_FILE=\"conan_toolchain.cmake\"', self.client.out)\n+\n+ def test_devflow_build(self):\n+ \"\"\" Ninja build must proceed using default profile and conan development flow\n+ \"\"\"\n+ conanfile = CMakeNinjaTestCase.conanfile.replace(\"(self)\", \"(self, generator='Ninja')\")\n+ self.client.save({\n+ \"conanfile.py\": conanfile,\n+ })\n+\n+ build_folder = os.path.join(self.client.current_folder, \"build\")\n+ package_folder = os.path.join(self.client.current_folder, \"pkg\")\n+ with environment_append({\"CONAN_PRINT_RUN_COMMANDS\": \"1\"}):\n+ self.client.run(\"export . foobar/0.1.0@\")\n+ self.client.run(\"install . --install-folder={}\".format(build_folder))\n+ self.client.run(\"build . --build-folder={}\".format(build_folder))\n+ self.assertIn('CMake command: cmake -G \"Ninja\" '\n+ '-DCMAKE_TOOLCHAIN_FILE=\"conan_toolchain.cmake\"', self.client.out)\n+ # FIXME: conan package tries to install on /usr/local. CMAKE_PREFIX_PATH is empty\n+ self.client.run(\"package . --build-folder={} --package-folder={}\"",
"line": 199,
"original_line": 125,
"original_start_line": null,
"path": "conans/test/integration/toolchains/cmake/test_ninja.py",
"start_line": null,
"text": "@user1:\nDon't worry about local packaging at the moment, this can wait, can be removed."
}
] |
d3b6e78bd9ba0bd1f023089346f1eb7b08eec25e
|
diff --git a/conans/test/integration/toolchains/cmake/__init__.py b/conans/test/integration/toolchains/cmake/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/conans/test/integration/toolchains/cmake/test_ninja.py b/conans/test/integration/toolchains/cmake/test_ninja.py
new file mode 100644
index 00000000000..255bbb9de79
--- /dev/null
+++ b/conans/test/integration/toolchains/cmake/test_ninja.py
@@ -0,0 +1,201 @@
+import shutil
+import textwrap
+import unittest
+import os
+import platform
+
+from conans.test.utils.tools import TestClient
+from conans.test.utils.test_files import temp_folder
+from conans.client.tools import environment_append
+from conans.client.toolchain.cmake.base import CMakeToolchainBase
+
+
+class CppProject(object):
+
+ header = textwrap.dedent("""
+ #include <string>
+ int bar(const std::string& str);
+ """)
+
+ source = textwrap.dedent("""
+ #include "foobar.hpp"
+ #include <iostream>
+ int bar(const std::string& str) {
+ std::cout << "(BAR): " << str << std::endl;
+ return 0;
+ }
+ """)
+
+ cmakefile = textwrap.dedent("""
+ cmake_minimum_required(VERSION 2.8.12)
+ project(foobar CXX)
+ set(CMAKE_VERBOSE_MAKEFILE ON)
+ add_library(${CMAKE_PROJECT_NAME} foobar.hpp foobar.cpp)
+ set_target_properties(${CMAKE_PROJECT_NAME} PROPERTIES
+ PUBLIC_HEADER foobar.hpp
+ DEBUG_POSTFIX "d")
+ install(TARGETS ${CMAKE_PROJECT_NAME}
+ RUNTIME DESTINATION bin
+ LIBRARY DESTINATION lib
+ ARCHIVE DESTINATION lib
+ PUBLIC_HEADER DESTINATION include
+ )
+ """)
+
+ def create_project(self, testclient):
+ testclient.save({
+ "foobar.hpp": CppProject.header,
+ "foobar.cpp": CppProject.source,
+ "CMakeLists.txt": CppProject.cmakefile
+ })
+
+
+class CMakeNinjaTestCase(unittest.TestCase):
+ # This test assumes that 'CMake' and 'Ninja' are available in the system
+
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile, CMake, CMakeToolchain
+
+ class Foobar(ConanFile):
+ name = "foobar"
+ settings = "os", "arch", "compiler", "build_type"
+ exports_sources = "CMakeLists.txt", "foobar.hpp", "foobar.cpp"
+ options = {"shared": [True, False]}
+ default_options = {"shared": False}
+
+ def toolchain(self):
+ tc = CMakeToolchain(self)
+ tc.write_toolchain_files()
+
+ def build(self):
+ cmake = CMake(self)
+ cmake.configure()
+ cmake.build()
+
+ def package(self):
+ cmake = CMake(self)
+
+ cmake.configure()
+ cmake.install()
+ """)
+
+ @classmethod
+ def setUpClass(cls):
+ if not shutil.which("ninja"):
+ raise unittest.SkipTest("Ninja expected in PATH")
+
+ def setUp(self):
+ folder = temp_folder(False)
+ cpp_project = CppProject()
+ self.client = TestClient(current_folder=folder)
+ cpp_project.create_project(self.client)
+ self.client.save({
+ "conanfile.py": CMakeNinjaTestCase.conanfile,
+ })
+
+ def test_local_cache_build(self):
+ """ Ninja build must proceed using default profile and conan create
+ """
+ with environment_append({"CONAN_CMAKE_GENERATOR": "Ninja"}):
+ self.client.run("create . foobar/0.1.0@ --profile:build=default --profile:host=default")
+ self.assertIn('CMake command: cmake -G "Ninja" '
+ '-DCMAKE_TOOLCHAIN_FILE="conan_toolchain.cmake"', self.client.out)
+
+ conanfile = CMakeNinjaTestCase.conanfile.replace("(self)", "(self, generator='Ninja')")
+ self.client.save({
+ "conanfile.py": conanfile,
+ })
+ self.client.run("create . foobar/0.1.0@ --profile:build=default --profile:host=default")
+ self.assertIn('CMake command: cmake -G "Ninja" '
+ '-DCMAKE_TOOLCHAIN_FILE="conan_toolchain.cmake"', self.client.out)
+
+ def _build_locally(self, profile="default", build_type="Release", shared=False):
+ self.client.run("export . foobar/0.1.0@")
+ self.client.run("install . -o foobar:shared={} -s build_type={} -pr:h={} -pr:b=default"
+ .format(shared, build_type, profile))
+ self.client.run_command('cmake . -G "Ninja" -DCMAKE_TOOLCHAIN_FILE={}'
+ .format(CMakeToolchainBase.filename))
+ self.client.run_command("cmake --build . --config {}".format(build_type))
+
+ @unittest.skipIf(platform.system() != "Linux", "Only linux")
+ def test_locally_build_linux(self):
+ """ Ninja build must proceed using default profile and cmake build (Linux)
+ """
+ self.client.save({"linux_host": textwrap.dedent("""
+ [settings]
+ os=Linux
+ arch=x86_64
+ compiler=gcc
+ compiler.version=10
+ compiler.libcxx=libstdc++11
+ build_type=Release
+ [env]
+ CONAN_CMAKE_GENERATOR=Ninja""")})
+ self._build_locally("linux_host")
+ self.client.run_command("objdump -f libfoobar.a")
+ self.assertIn("architecture: i386:x86-64", self.client.out)
+
+ self._build_locally("linux_host", "Debug", True)
+ self.client.run_command("objdump -f libfoobard.so")
+ self.assertIn("architecture: i386:x86-64", self.client.out)
+ self.assertIn("DYNAMIC", self.client.out)
+ self.client.run_command("file libfoobard.so")
+ self.assertIn("with debug_info", self.client.out)
+
+ @unittest.skipIf(platform.system() != "Windows", "Only Windows")
+ def test_locally_build_Windows(self):
+ """ Ninja build must proceed using default profile and cmake build (Windows)
+ """
+ win_host = textwrap.dedent("""[settings]
+ os=Windows
+ arch=x86_64
+ compiler=Visual Studio
+ compiler.version=16
+ compiler.runtime=MD
+ build_type=Release
+ [env]
+ CONAN_CMAKE_GENERATOR=Ninja""")
+ self.client.save({"win_host": win_host})
+ self._build_locally("win_host")
+ self.client.run_command("DUMPBIN /NOLOGO /DIRECTIVES foobar.lib")
+ self.assertIn("RuntimeLibrary=MD_Dynamic", self.client.out)
+ self.client.run_command("DUMPBIN /NOLOGO /HEADERS foobar.lib")
+ self.assertIn("machine (x64)", self.client.out)
+
+ win_host.replace("MD", "MDd")
+ self.client.save({"win_host": win_host})
+ self._build_locally("win_host", "Debug", False)
+ self.client.run_command("DUMPBIN /NOLOGO /DIRECTIVES foobard.lib")
+ self.assertIn("RuntimeLibrary=MDd_DynamicDebug", self.client.out)
+ self.client.run_command("DUMPBIN /NOLOGO /HEADERS foobard.lib")
+ self.assertIn("machine (x64)", self.client.out)
+
+ win_host.replace("MD", "MDd")
+ self.client.save({"win_host": win_host})
+ self._build_locally("win_host", "Debug", True)
+ self.client.run_command("DUMPBIN /NOLOGO /HEADERS foobard.dll")
+ self.assertIn("machine (x64)", self.client.out)
+ # TODO - How to detect Runtime library from a DLL (command line)?
+ # self.client.run_command("DUMPBIN /NOLOGO /DIRECTIVES foobard.dll")
+ # self.assertIn("RuntimeLibrary=MDd_DynamicDebug", self.client.out)
+
+ def test_devflow_build(self):
+ """ Ninja build must proceed using default profile and conan development flow
+ """
+ conanfile = CMakeNinjaTestCase.conanfile.replace("(self)", "(self, generator='Ninja')")
+ self.client.save({
+ "conanfile.py": conanfile,
+ })
+
+ build_folder = os.path.join(self.client.current_folder, "build")
+ package_folder = os.path.join(self.client.current_folder, "pkg")
+ with environment_append({"CONAN_PRINT_RUN_COMMANDS": "1"}):
+ self.client.run("export . foobar/0.1.0@")
+ self.client.run("install . --install-folder={}".format(build_folder))
+ self.client.run("build . --build-folder={}".format(build_folder))
+ self.assertIn('CMake command: cmake -G "Ninja" '
+ '-DCMAKE_TOOLCHAIN_FILE="conan_toolchain.cmake"', self.client.out)
+ # FIXME: conan package tries to install on /usr/local. CMAKE_PREFIX_PATH is empty
+ self.client.run("package . --build-folder={} --package-folder={}"
+ .format(build_folder, package_folder), assert_error=True)
+ self.assertIn('Permission denied.', self.client.out)
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
}
|
conan-io__conan-7941@10e7e82
|
conan-io/conan
|
Python
| 7,941
|
Feature/toolchain msbuild cmd
|
Changelog: Feature: Provide a ``MSBuildCmd`` helper class that encapsulates calling MSBuild.
Docs: https://github.com/conan-io/docs/pull/1907
Close https://github.com/conan-io/conan/issues/7824
Close https://github.com/conan-io/conan/issues/7606
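As a rough usage sketch (the import path is the one introduced in this PR and may change; the solution name is a placeholder), the new wrapper maps settings to the `msbuild` command line instead of touching the project files:

```python
from conans import ConanFile
# Import location as added by this PR; subject to change in later releases
from conans.client.toolchain.msbuild import MSBuildCmd


class HelloConan(ConanFile):
    name = "hello"
    version = "0.1"
    settings = "os", "arch", "compiler", "build_type"
    exports_sources = "*.sln", "*.vcxproj", "*.cpp", "*.h"

    def build(self):
        msbuild = MSBuildCmd(self)
        # command() prepends the vcvars call and maps settings to
        # /p:Configuration=<build_type> and /p:Platform=<mapped arch>
        self.output.info(msbuild.command("hello.sln"))
        msbuild.build("hello.sln")
```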
|
2020-10-26T13:12:11Z
|
[bug] MSBuild calls devenv by default, even if not available
### Environment Details
* Operating System+version: Windows Server 2019
* Compiler+version: Visual Studio 2017 Build Tools 15.9.2.0
* Conan version:
* Python version: 3.7.0
### Steps to reproduce
On a system with only the Visual Studio Build Tools installed, run a Conan build using the MSBuild build helper with the default value of upgrade_project (defaults to True).
Because Visual Studio build tools does not install devenv, this setting will fail the build. Disabling the setting (explicitly setting it to false) allows it to succeed.
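The workaround described above, as a minimal recipe sketch (the solution name is a placeholder):

```python
from conans import ConanFile, MSBuild


class PkgConan(ConanFile):
    settings = "os", "arch", "compiler", "build_type"
    exports_sources = "*.sln", "*.vcxproj", "*.cpp", "*.h"

    def build(self):
        msbuild = MSBuild(self)
        # Opting out of the project upgrade avoids the devenv call,
        # which is not available in a Build Tools-only installation
        msbuild.build("MyProject.sln", upgrade_project=False)
```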
[feature] MSBuild command line helper
So far the toolchain PoC is there, but there is no command line helper.
|
Hi @UnderSampled
I am not sure this is a bug; it is rather by design. Conan tries to provide consistent behavior. Running one thing or a different one based on whether a tool exists on the system is fragile, and this is the reason the default behavior assumes that it exists and errors otherwise, with an opt-out to not use it.
We are now starting a new approach of integrating with the build systems, using toolchain files and trying to simplify what the build helpers do. I agree that maybe it makes sense to change the default, and only use ``/devenv upgrade`` when users explicitly opt in to use it. But we cannot change this default now in the current build helper; it will change in the new toolchain one.
This is the ongoing work around the toolchain for MSBuild: https://github.com/conan-io/conan/pull/7674
The build helper is not addressed yet, but it will be in following iterations. It will not run /devenv by default, unless explicitly requested. If it is explicitly requested, then, /devenv must be installed in the system.
Hi @UnderSampled
The new helper that is going to be released together with the toolchain will not call devenv by default: https://github.com/conan-io/conan/pull/7941
In fact it is not even implemented yet; it might be added later (opt-in? opt-out?) depending on user feedback.
**Discussion:**
The mapping of Conan settings to MSBuild is more or less:
- ``settings.arch`` => ``Platform``
- ``settings.build_type`` => ``Configuration``
Where ``Configuration|Platform`` is what can be selected in the IDE to switch config, and typically is ``Release|x64`` or ``Debug|x86`` or similar.
There are other things that cannot be switched from the IDE:
- ``compiler.runtime`` => MT/MD/MTd...
- ``compiler.toolset`` => v140, v141,....
But the previous ``MSBuild`` helper can use some command line switches to inject and change for example the toolset on the fly, with ``/p:PlatformToolset=xxx``.
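To make the split concrete, a small illustration (the values are examples only) of what is IDE-switchable versus what would be pinned at `conan install` time, plus the kind of command line the legacy helper emits when it forces the toolset:

```python
# Selectable in the IDE as "Configuration|Platform"
ide_switchable = {
    "build_type": "Configuration",  # Release, Debug, ...
    "arch": "Platform",             # x64, Win32, ARM, ...
}

# Not switchable from the IDE: candidates to be hardcoded by the
# toolchain (.props file) at `conan install` time
pinned_at_install = {
    "compiler.runtime": "MD",       # MultiThreadedDLL
    "compiler.toolset": "v141",
}

# What the legacy MSBuild helper does instead: inject it on the fly
legacy_cmd = ('msbuild "MyProject.sln" /p:Configuration=Release '
              '/p:Platform=x64 /p:PlatformToolset=v141')
```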
This introduces an asymmetry: the Conan ``build()`` is able to build a configuration that the user is not able to build as a developer from their IDE. Or is it expected that developers will call Visual Studio from the command line with ``msbuild...... /p:PlatformToolset``? I don't think so.
I would suggest:
- What a developer cannot change from the IDE, is hardcoded at ``conan install`` time. Cannot be switched by the build-helper.
- If a project wants to make variability available to developers, they could introduce custom configurations. Conan toolchains can allow custom logic to map from Conan settings to custom configurations (PoC already in: https://github.com/conan-io/conan/pull/7754)
- Check how this applies to other toolchains, especially the CMake one.
Feedback appreciated team!
Any command-line-wrapper for a build system should allow the passing of arbitrary build system variables (`/p` or `-D` or whatever). Otherwise, it's just a really bad helper and users would very frequently resort to constructing their own command-line strings and sending them to `self.run()`. If they do that, then the build helper has lost any protection and usefulness it intended to give.
In response to one of your bullets, no, not all build system variables are candidates for creating additional configurations in the project. While thinking about one priority and design challenge here (trying to deliver some safety and guarantees around Visual Studio's unique "configuration" model), it seems we lost sight of some fundamentals here.
If you need something concrete, "cosmetic string substitution" is a ubiquitous category of usage for `msbuild` properties which is perfectly reasonable and necessary and doesn't conflict with the other goals. People inject strings into builds and applications for all kinds of cosmetic reasons, such as putting the GIT commit hash or SVN revision in the "about" box of a program, or the build date, etc. It's very common to pass this in as an `msbuild` property and then have the `vcxproj` convert it to a preprocessor definition which is rendered in the code. Very often this is only passed in the CI builds, and the `vcxproj` has some default string value like `local` or `dev` or just an empty string.
So, this just leaves the question of:
- Should users be forced to put all such cases of build system variable definitions in the toolchain() method?
I think the answer is an obvious no. A command-line wrapper should stand on its own and be able to do fundamental translations like this. Also, not all recipes will want to use the toolchain.
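For example, the existing helper already allows this through its `properties` argument; the property name and value below are placeholders that the `.vcxproj` would map to a preprocessor definition:

```python
from conans import ConanFile, MSBuild


class AppConan(ConanFile):
    settings = "os", "arch", "compiler", "build_type"
    exports_sources = "*.sln", "*.vcxproj", "*.cpp", "*.h"

    def build(self):
        msbuild = MSBuild(self)
        # The .vcxproj can forward $(BuildRevision) as a preprocessor
        # definition, defaulting to something like "local" or "dev"
        msbuild.build("App.sln", properties={"BuildRevision": "abc1234"},
                      upgrade_project=False)
```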
> Any command-line-wrapper for a build system should allow the passing of arbitrary build system variables (/p or -D or whatever). Otherwise, it's just a really bad helper and users would very frequently resort to constructing their own command-line strings and sending them to self.run(). If they do that, then the build helper has lost any protection and usefulness it intended to give.
I agree it is a good thing that command line wrappers allow passing all kinds of things that are possible in the command line. And this should be doable with our new build helpers.
My objection is that this shouldn't be the default (and possibly only) mechanism for injecting the Conan configuration and variability that is mandatory in the developer flow. It is a must that the developer can build, run and debug with the defined toolset and runtime. And if something fails for a package for a given configuration, they must be able to develop and test that configuration as easily as possible. That part should be in the toolchain, so the developer's native experience matches what will be created when they do a local ``conan create``. Then of course, there might be some extra information and some things that are only injected in CI, but the basic build configuration shouldn't be part of it.
It seems to me that build system variables should be supported in both the build helper and the toolchain, and I can see different uses for each.
However, the above statement is only true assuming that the `toolchain.props` files will one day be produced in the user's working directory when calling `conan install`, just like generator files are today with `-g`. Otherwise (if `toolchain` files are never available outside the `build` directory in the conan cache) then there doesn't seem to be any difference between passing them using the `MSBuild` helper and the `toolchain.props` file.
After speaking with @memsharded it seems that the `toolchain.props` is generated in the user's working directory when using `conan install`. So that's the important part. The ad-hoc addition of toolchains via some command-line flag, like `-g` does for generators, is a separate feature and not critical (although it may still make sense).
|
[
{
"body": "### Environment Details\r\n * Operating System+version: Windows Server 2019\r\n * Compiler+version: Visual Studio 2017 Build Tools 15.9.2.0\r\n * Conan version: \r\n * Python version: 3.7.0\r\n\r\n### Steps to reproduce\r\nOn a system with only Visual Studio build tools installed, run a Conan build using the MSBuild build helper using the default setting for upgrade_project (Defaults to true)\r\n\r\nBecause Visual Studio build tools does not install devenv, this setting will fail the build. Disabling the setting (explicitly setting it to false) allows it to succeed.",
"number": 7606,
"title": "[bug] MSBuild calls devenv by default, even if not available"
},
{
"body": "So far the toolchain PoC is there, but there is no command line helper.\r\n",
"number": 7824,
"title": "[feature] MSBuild command line helper"
}
] |
c7f674942e1fd3c67c316444558d1d9a475f61e2
|
{
"head_commit": "10e7e8253ed97b845ec571857091e2817e69589b",
"head_commit_message": "Update conans/client/build/msbuild.py\n\nCo-authored-by: Javier G. Sogo <[email protected]>",
"patch_to_review": "diff --git a/conans/client/build/msbuild.py b/conans/client/build/msbuild.py\nindex 60ca59c0dc1..486ee5322e7 100644\n--- a/conans/client/build/msbuild.py\n+++ b/conans/client/build/msbuild.py\n@@ -5,6 +5,7 @@\n from conans.client import tools\n from conans.client.build.visual_environment import (VisualStudioBuildEnvironment,\n vs_build_type_flags, vs_std_cpp)\n+from conans.client.toolchain.msbuild import MSBuildCmd\n from conans.client.tools.env import environment_append, no_op\n from conans.client.tools.intel import intel_compilervars\n from conans.client.tools.oss import cpu_count\n@@ -19,6 +20,27 @@\n \n \n class MSBuild(object):\n+ def __new__(cls, conanfile, *args, **kwargs):\n+ \"\"\" Inject the proper MSBuild base class in the hierarchy \"\"\"\n+\n+ # If already injected, create and return\n+ if MSBuildHelper in cls.__bases__ or MSBuildCmd in cls.__bases__:\n+ return super(MSBuild, cls).__new__(cls)\n+\n+ # If not, add the proper CMake implementation\n+ if hasattr(conanfile, \"toolchain\"):\n+ msbuild_class = type(\"CustomMSBuildClass\", (cls, MSBuildCmd), {})\n+ else:\n+ msbuild_class = type(\"CustomMSBuildClass\", (cls, MSBuildHelper), {})\n+\n+ return msbuild_class.__new__(msbuild_class, conanfile, *args, **kwargs)\n+\n+ @staticmethod\n+ def get_version(settings):\n+ return MSBuildHelper.get_version(settings)\n+\n+\n+class MSBuildHelper(object):\n \n def __init__(self, conanfile):\n if isinstance(conanfile, ConanFile):\n@@ -156,7 +178,7 @@ def get_command(self, project_file, props_file_path=None, targets=None, upgrade_\n self._output.warn(\"Use 'platforms' argument to define your architectures\")\n \n if output_binary_log:\n- msbuild_version = MSBuild.get_version(self._settings)\n+ msbuild_version = MSBuildHelper.get_version(self._settings)\n if msbuild_version >= \"15.3\": # http://msbuildlog.com/\n command.append('/bl' if isinstance(output_binary_log, bool)\n else '/bl:\"%s\"' % output_binary_log)\ndiff --git a/conans/client/toolchain/msbuild.py b/conans/client/toolchain/msbuild.py\nindex 5b5784fb236..91a030fcdb9 100644\n--- a/conans/client/toolchain/msbuild.py\n+++ b/conans/client/toolchain/msbuild.py\n@@ -2,10 +2,48 @@\n import textwrap\n from xml.dom import minidom\n \n+from conans.client.toolchain.visual import vcvars_arch, vcvars_command\n+from conans.client.tools import msvs_toolset\n from conans.errors import ConanException\n from conans.util.files import save, load\n \n \n+class MSBuildCmd(object):\n+ def __init__(self, conanfile):\n+ self._conanfile = conanfile\n+ self.version = conanfile.settings.get_safe(\"compiler.version\")\n+ self.vcvars_arch = vcvars_arch(conanfile)\n+ self.build_type = conanfile.settings.get_safe(\"build_type\")\n+ msvc_arch = {'x86': 'x86',\n+ 'x86_64': 'x64',\n+ 'armv7': 'ARM',\n+ 'armv8': 'ARM64'}\n+ # if platforms:\n+ # msvc_arch.update(platforms)\n+ arch = conanfile.settings.get_safe(\"arch\")\n+ msvc_arch = msvc_arch.get(str(arch))\n+ if conanfile.settings.get_safe(\"os\") == \"WindowsCE\":\n+ msvc_arch = conanfile.settings.get_safe(\"os.platform\")\n+ self.platform = msvc_arch\n+\n+ def command(self, sln):\n+ vcvars = vcvars_command(self.version, architecture=self.vcvars_arch,\n+ platform_type=None, winsdk_version=None,\n+ vcvars_ver=None)\n+ cmd = ('%s && msbuild \"%s\" /p:Configuration=%s /p:Platform=%s '\n+ % (vcvars, sln, self.build_type, self.platform))\n+ return cmd\n+\n+ def build(self, sln):\n+ cmd = self.command(sln)\n+ self._conanfile.run(cmd)\n+\n+ @staticmethod\n+ def get_version(settings):\n+ return 
NotImplementedError(\"get_version() method is not supported in MSBuild \"\n+ \"toolchain helper\")\n+\n+\n class MSBuildToolchain(object):\n \n def __init__(self, conanfile):\n@@ -36,6 +74,7 @@ def format_macro(k, value):\n \n runtime = self._conanfile.settings.get_safe(\"compiler.runtime\")\n cppstd = self._conanfile.settings.get_safe(\"compiler.cppstd\")\n+ toolset = msvs_toolset(self._conanfile.settings)\n runtime_library = {\"MT\": \"MultiThreaded\",\n \"MTd\": \"MultiThreadedDebug\",\n \"MD\": \"MultiThreadedDLL\",\n@@ -53,13 +92,17 @@ def format_macro(k, value):\n <LanguageStandard>{}</LanguageStandard>\n </ClCompile>\n </ItemDefinitionGroup>\n+ <PropertyGroup Label=\"Configuration\">\n+ <PlatformToolset>{}</PlatformToolset>\n+ </PropertyGroup>\n </Project>\n \"\"\")\n preprocessor_definitions = \";\".join([format_macro(k, v)\n for k, v in self.preprocessor_definitions.items()])\n # It is useless to set PlatformToolset in the config file, because the conditional checks it\n cppstd = \"stdcpp%s\" % cppstd if cppstd else \"\"\n- config_props = content.format(preprocessor_definitions, runtime_library, cppstd)\n+ toolset = toolset or \"\"\n+ config_props = content.format(preprocessor_definitions, runtime_library, cppstd, toolset)\n config_filepath = os.path.abspath(config_filename)\n self._conanfile.output.info(\"MSBuildToolchain created %s\" % config_filename)\n save(config_filepath, config_props)\ndiff --git a/conans/client/toolchain/visual.py b/conans/client/toolchain/visual.py\nnew file mode 100644\nindex 00000000000..4f126ad8372\n--- /dev/null\n+++ b/conans/client/toolchain/visual.py\n@@ -0,0 +1,75 @@\n+import os\n+\n+from conans.client.tools.win import vs_installation_path\n+from conans.errors import ConanException\n+\n+\n+def vcvars_command(version, architecture=None, platform_type=None, winsdk_version=None,\n+ vcvars_ver=None, start_dir_cd=True):\n+ \"\"\" conan-agnostic construction of vcvars command\n+ https://docs.microsoft.com/en-us/cpp/build/building-on-the-command-line\n+ \"\"\"\n+ # TODO: This comes from conans/client/tools/win.py vcvars_command()\n+ cmd = []\n+ if start_dir_cd:\n+ cmd.append('set \"VSCMD_START_DIR=%%CD%%\" &&')\n+\n+ # The \"call\" is useful in case it is called from another .bat script\n+ cmd.append('call \"%s\" ' % vcvars_path(version))\n+ if architecture:\n+ cmd.append(architecture)\n+ if platform_type:\n+ cmd.append(platform_type)\n+ if winsdk_version:\n+ cmd.append(winsdk_version)\n+ if vcvars_ver:\n+ cmd.append(\"-vcvars_ver=%s\" % vcvars_ver)\n+ return \" \".join(cmd)\n+\n+\n+def vcvars_path(version):\n+ # TODO: This comes from conans/client/tools/win.py vcvars_command()\n+ vs_path = vs_installation_path(version)\n+ if not vs_path or not os.path.isdir(vs_path):\n+ raise ConanException(\"VS non-existing installation: Visual Studio %s\" % version)\n+\n+ if int(version) > 14:\n+ vcpath = os.path.join(vs_path, \"VC/Auxiliary/Build/vcvarsall.bat\")\n+ else:\n+ vcpath = os.path.join(vs_path, \"VC/vcvarsall.bat\")\n+ return vcpath\n+\n+\n+def vcvars_arch(conanfile):\n+ \"\"\"\n+ computes the vcvars command line architecture based on conanfile settings (host) and\n+ settings_build\n+ :param conanfile:\n+ :return:\n+ \"\"\"\n+ # TODO: This comes from conans/client/tools/win.py vcvars_command()\n+ settings_host = conanfile.settings\n+ try:\n+ settings_build = conanfile.settings_build\n+ except AttributeError:\n+ settings_build = settings_host\n+\n+ arch_host = str(settings_host.arch)\n+ arch_build = str(settings_build.arch)\n+\n+ arch = None\n+ if arch_build 
== 'x86_64':\n+ arch = {'x86': \"amd64_x86\",\n+ 'x86_64': 'amd64',\n+ 'armv7': 'amd64_arm',\n+ 'armv8': 'amd64_arm64'}.get(arch_host)\n+ elif arch_build == 'x86':\n+ arch = {'x86': 'x86',\n+ 'x86_64': 'x86_amd64',\n+ 'armv7': 'x86_arm',\n+ 'armv8': 'x86_arm64'}.get(arch_host)\n+\n+ if not arch:\n+ raise ConanException('vcvars unsupported architectures %s-%s' % (arch_build, arch_host))\n+\n+ return arch\ndiff --git a/conans/client/tools/win.py b/conans/client/tools/win.py\nindex 93328e37159..ce9b056332b 100644\n--- a/conans/client/tools/win.py\n+++ b/conans/client/tools/win.py\n@@ -180,8 +180,8 @@ def build_sln_command(settings, sln_path, targets=None, upgrade_project=True, bu\n self.run(command)\n \"\"\"\n conan_v2_behavior(\"'tools.build_sln_command' is deprecated, use 'MSBuild()' helper instead\")\n- from conans.client.build.msbuild import MSBuild\n- tmp = MSBuild(settings)\n+ from conans.client.build.msbuild import MSBuildHelper\n+ tmp = MSBuildHelper(settings)\n output = default_output(output, fn_name='conans.client.tools.win.build_sln_command')\n tmp._output = output\n \ndiff --git a/conans/test/functional/toolchain/test_msbuild.py b/conans/test/functional/toolchain/test_msbuild.py\nindex 224ae1a0bec..6b2716d3f8a 100644\n--- a/conans/test/functional/toolchain/test_msbuild.py\n+++ b/conans/test/functional/toolchain/test_msbuild.py\n@@ -3,7 +3,7 @@\n import textwrap\n import unittest\n \n-\n+from conans.client.toolchain.visual import vcvars_command\n from conans.client.tools import vs_installation_path\n from conans.test.utils.tools import TestClient\n \n@@ -96,15 +96,16 @@\n <WholeProgramOptimization>true</WholeProgramOptimization>\n <CharacterSet>Unicode</CharacterSet>\n </PropertyGroup>\n+ <!-- Very IMPORTANT this should go BEFORE the Microsoft.Cpp.props -->\n+ <ImportGroup Label=\"PropertySheets\">\n+ <Import Project=\"..\\conan\\conan_Hello.props\" />\n+ <Import Project=\"..\\conan\\conan_toolchain.props\" />\n+ </ImportGroup>\n <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.props\" />\n <ImportGroup Label=\"ExtensionSettings\">\n </ImportGroup>\n <ImportGroup Label=\"Shared\">\n </ImportGroup>\n- <ImportGroup Label=\"PropertySheets\">\n- <Import Project=\"..\\conan\\conan_Hello.props\" />\n- <Import Project=\"..\\conan\\conan_toolchain.props\" />\n- </ImportGroup>\n <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\"\n Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\"\n@@ -218,7 +219,7 @@\n class WinTest(unittest.TestCase):\n \n conanfile = textwrap.dedent(\"\"\"\n- from conans import ConanFile, MSBuildToolchain\n+ from conans import ConanFile, MSBuildToolchain, MSBuild\n class App(ConanFile):\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n requires = \"hello/0.1\"\n@@ -233,6 +234,10 @@ def toolchain(self):\n else:\n tc.preprocessor_definitions[\"DEFINITIONS_CONFIG\"] = \"Release\"\n tc.write_toolchain_files()\n+\n+ def build(self):\n+ msbuild = MSBuild(self)\n+ msbuild.build(\"MyProject.sln\")\n \"\"\")\n \n app = textwrap.dedent(\"\"\"\n@@ -316,22 +321,18 @@ def test_toolchain_win(self):\n \n # Run the configure corresponding to this test case\n client.run(\"install . 
%s -if=conan\" % (settings, ))\n- self.assertIn(\"conanfile.py: MSBuildToolchain created \"\n- \"conan_toolchain_release_win32.props\", client.out)\n- vs_path = vs_installation_path(\"15\")\n- vcvars_path = os.path.join(vs_path, \"VC/Auxiliary/Build/vcvarsall.bat\")\n+ self.assertIn(\"conanfile.py: MSBuildToolchain created conan_toolchain_release_win32.props\",\n+ client.out)\n+ client.run(\"build . -if=conan\")\n \n- cmd = ('set \"VSCMD_START_DIR=%%CD%%\" && '\n- '\"%s\" x86 && msbuild \"MyProject.sln\" /p:Configuration=Release' % vcvars_path)\n- client.run_command(cmd)\n self.assertIn(\"Visual Studio 2017\", client.out)\n self.assertIn(\"[vcvarsall.bat] Environment initialized for: 'x86'\", client.out)\n self._run_app(client, \"x86\", \"Release\")\n self.assertIn(\"AppMSCVER 17!!\", client.out)\n self.assertIn(\"AppCppStd 17!!!\", client.out)\n \n- cmd = ('set \"VSCMD_START_DIR=%%CD%%\" && '\n- '\"%s\" x86 && dumpbin /dependents \"Release\\\\MyApp.exe\"' % vcvars_path)\n+ vcvars = vcvars_command(version=\"15\", architecture=\"x86\")\n+ cmd = ('%s && dumpbin /dependents \"Release\\\\MyApp.exe\"' % vcvars)\n client.run_command(cmd)\n # No other DLLs dependencies rather than kernel, it was MT, statically linked\n self.assertIn(\"KERNEL32.dll\", client.out)\n@@ -363,23 +364,15 @@ def test_toolchain_win_debug(self):\n client.run(\"install . %s -if=conan\" % (settings, ))\n self.assertIn(\"conanfile.py: MSBuildToolchain created conan_toolchain_debug_x64.props\",\n client.out)\n- vs_path = vs_installation_path(\"15\")\n- vcvars_path = os.path.join(vs_path, \"VC/Auxiliary/Build/vcvarsall.bat\")\n-\n- # FIXME: This is cheating, pass the toolset on the command line, nothing that devs would do\n- cmd = ('set \"VSCMD_START_DIR=%%CD%%\" && '\n- '\"%s\" x64 && '\n- 'msbuild \"MyProject.sln\" /p:Configuration=Debug /p:PlatformToolset=\"v140\"'\n- % vcvars_path)\n- client.run_command(cmd)\n+ client.run(\"build . -if=conan\")\n self.assertIn(\"Visual Studio 2017\", client.out)\n self.assertIn(\"[vcvarsall.bat] Environment initialized for: 'x64'\", client.out)\n self._run_app(client, \"x64\", \"Debug\")\n self.assertIn(\"AppMSCVER 15!!\", client.out)\n self.assertIn(\"AppCppStd 14!!!\", client.out)\n \n- cmd = ('set \"VSCMD_START_DIR=%%CD%%\" && '\n- '\"%s\" x64 && dumpbin /dependents \"x64\\\\Debug\\\\MyApp.exe\"' % vcvars_path)\n+ vcvars = vcvars_command(version=\"15\", architecture=\"amd64\")\n+ cmd = ('%s && dumpbin /dependents \"x64\\\\Debug\\\\MyApp.exe\"' % vcvars)\n client.run_command(cmd)\n self.assertIn(\"MSVCP140D.dll\", client.out)\n self.assertIn(\"VCRUNTIME140D.dll\", client.out)\n"
}
|
[
{
"diff_hunk": "@@ -96,15 +96,16 @@\n <WholeProgramOptimization>true</WholeProgramOptimization>\n <CharacterSet>Unicode</CharacterSet>\n </PropertyGroup>\n+ <!-- Very IMPORTANT this should go BEFORE the Microsoft.Cpp.props -->",
"line": null,
"original_line": 99,
"original_start_line": null,
"path": "conans/test/functional/toolchain/test_msbuild.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n <!-- Very IMPORTANT this should go BEFORE the Microsoft.Cpp.props. If it goes after, the Toolset definition is ignored -->\r\n```"
}
] |
48a954b389a6f2a33e739d7d933b22f065b319cd
|
diff --git a/conans/client/build/msbuild.py b/conans/client/build/msbuild.py
index 60ca59c0dc1..486ee5322e7 100644
--- a/conans/client/build/msbuild.py
+++ b/conans/client/build/msbuild.py
@@ -5,6 +5,7 @@
from conans.client import tools
from conans.client.build.visual_environment import (VisualStudioBuildEnvironment,
vs_build_type_flags, vs_std_cpp)
+from conans.client.toolchain.msbuild import MSBuildCmd
from conans.client.tools.env import environment_append, no_op
from conans.client.tools.intel import intel_compilervars
from conans.client.tools.oss import cpu_count
@@ -19,6 +20,27 @@
class MSBuild(object):
+ def __new__(cls, conanfile, *args, **kwargs):
+ """ Inject the proper MSBuild base class in the hierarchy """
+
+ # If already injected, create and return
+ if MSBuildHelper in cls.__bases__ or MSBuildCmd in cls.__bases__:
+ return super(MSBuild, cls).__new__(cls)
+
+ # If not, add the proper CMake implementation
+ if hasattr(conanfile, "toolchain"):
+ msbuild_class = type("CustomMSBuildClass", (cls, MSBuildCmd), {})
+ else:
+ msbuild_class = type("CustomMSBuildClass", (cls, MSBuildHelper), {})
+
+ return msbuild_class.__new__(msbuild_class, conanfile, *args, **kwargs)
+
+ @staticmethod
+ def get_version(settings):
+ return MSBuildHelper.get_version(settings)
+
+
+class MSBuildHelper(object):
def __init__(self, conanfile):
if isinstance(conanfile, ConanFile):
@@ -156,7 +178,7 @@ def get_command(self, project_file, props_file_path=None, targets=None, upgrade_
self._output.warn("Use 'platforms' argument to define your architectures")
if output_binary_log:
- msbuild_version = MSBuild.get_version(self._settings)
+ msbuild_version = MSBuildHelper.get_version(self._settings)
if msbuild_version >= "15.3": # http://msbuildlog.com/
command.append('/bl' if isinstance(output_binary_log, bool)
else '/bl:"%s"' % output_binary_log)
diff --git a/conans/client/toolchain/msbuild.py b/conans/client/toolchain/msbuild.py
index 5b5784fb236..91a030fcdb9 100644
--- a/conans/client/toolchain/msbuild.py
+++ b/conans/client/toolchain/msbuild.py
@@ -2,10 +2,48 @@
import textwrap
from xml.dom import minidom
+from conans.client.toolchain.visual import vcvars_arch, vcvars_command
+from conans.client.tools import msvs_toolset
from conans.errors import ConanException
from conans.util.files import save, load
+class MSBuildCmd(object):
+ def __init__(self, conanfile):
+ self._conanfile = conanfile
+ self.version = conanfile.settings.get_safe("compiler.version")
+ self.vcvars_arch = vcvars_arch(conanfile)
+ self.build_type = conanfile.settings.get_safe("build_type")
+ msvc_arch = {'x86': 'x86',
+ 'x86_64': 'x64',
+ 'armv7': 'ARM',
+ 'armv8': 'ARM64'}
+ # if platforms:
+ # msvc_arch.update(platforms)
+ arch = conanfile.settings.get_safe("arch")
+ msvc_arch = msvc_arch.get(str(arch))
+ if conanfile.settings.get_safe("os") == "WindowsCE":
+ msvc_arch = conanfile.settings.get_safe("os.platform")
+ self.platform = msvc_arch
+
+ def command(self, sln):
+ vcvars = vcvars_command(self.version, architecture=self.vcvars_arch,
+ platform_type=None, winsdk_version=None,
+ vcvars_ver=None)
+ cmd = ('%s && msbuild "%s" /p:Configuration=%s /p:Platform=%s '
+ % (vcvars, sln, self.build_type, self.platform))
+ return cmd
+
+ def build(self, sln):
+ cmd = self.command(sln)
+ self._conanfile.run(cmd)
+
+ @staticmethod
+ def get_version(settings):
+ return NotImplementedError("get_version() method is not supported in MSBuild "
+ "toolchain helper")
+
+
class MSBuildToolchain(object):
def __init__(self, conanfile):
@@ -36,6 +74,7 @@ def format_macro(k, value):
runtime = self._conanfile.settings.get_safe("compiler.runtime")
cppstd = self._conanfile.settings.get_safe("compiler.cppstd")
+ toolset = msvs_toolset(self._conanfile.settings)
runtime_library = {"MT": "MultiThreaded",
"MTd": "MultiThreadedDebug",
"MD": "MultiThreadedDLL",
@@ -53,13 +92,17 @@ def format_macro(k, value):
<LanguageStandard>{}</LanguageStandard>
</ClCompile>
</ItemDefinitionGroup>
+ <PropertyGroup Label="Configuration">
+ <PlatformToolset>{}</PlatformToolset>
+ </PropertyGroup>
</Project>
""")
preprocessor_definitions = ";".join([format_macro(k, v)
for k, v in self.preprocessor_definitions.items()])
# It is useless to set PlatformToolset in the config file, because the conditional checks it
cppstd = "stdcpp%s" % cppstd if cppstd else ""
- config_props = content.format(preprocessor_definitions, runtime_library, cppstd)
+ toolset = toolset or ""
+ config_props = content.format(preprocessor_definitions, runtime_library, cppstd, toolset)
config_filepath = os.path.abspath(config_filename)
self._conanfile.output.info("MSBuildToolchain created %s" % config_filename)
save(config_filepath, config_props)
diff --git a/conans/client/toolchain/visual.py b/conans/client/toolchain/visual.py
new file mode 100644
index 00000000000..4f126ad8372
--- /dev/null
+++ b/conans/client/toolchain/visual.py
@@ -0,0 +1,75 @@
+import os
+
+from conans.client.tools.win import vs_installation_path
+from conans.errors import ConanException
+
+
+def vcvars_command(version, architecture=None, platform_type=None, winsdk_version=None,
+ vcvars_ver=None, start_dir_cd=True):
+ """ conan-agnostic construction of vcvars command
+ https://docs.microsoft.com/en-us/cpp/build/building-on-the-command-line
+ """
+ # TODO: This comes from conans/client/tools/win.py vcvars_command()
+ cmd = []
+ if start_dir_cd:
+ cmd.append('set "VSCMD_START_DIR=%%CD%%" &&')
+
+ # The "call" is useful in case it is called from another .bat script
+ cmd.append('call "%s" ' % vcvars_path(version))
+ if architecture:
+ cmd.append(architecture)
+ if platform_type:
+ cmd.append(platform_type)
+ if winsdk_version:
+ cmd.append(winsdk_version)
+ if vcvars_ver:
+ cmd.append("-vcvars_ver=%s" % vcvars_ver)
+ return " ".join(cmd)
+
+
+def vcvars_path(version):
+ # TODO: This comes from conans/client/tools/win.py vcvars_command()
+ vs_path = vs_installation_path(version)
+ if not vs_path or not os.path.isdir(vs_path):
+ raise ConanException("VS non-existing installation: Visual Studio %s" % version)
+
+ if int(version) > 14:
+ vcpath = os.path.join(vs_path, "VC/Auxiliary/Build/vcvarsall.bat")
+ else:
+ vcpath = os.path.join(vs_path, "VC/vcvarsall.bat")
+ return vcpath
+
+
+def vcvars_arch(conanfile):
+ """
+ computes the vcvars command line architecture based on conanfile settings (host) and
+ settings_build
+ :param conanfile:
+ :return:
+ """
+ # TODO: This comes from conans/client/tools/win.py vcvars_command()
+ settings_host = conanfile.settings
+ try:
+ settings_build = conanfile.settings_build
+ except AttributeError:
+ settings_build = settings_host
+
+ arch_host = str(settings_host.arch)
+ arch_build = str(settings_build.arch)
+
+ arch = None
+ if arch_build == 'x86_64':
+ arch = {'x86': "amd64_x86",
+ 'x86_64': 'amd64',
+ 'armv7': 'amd64_arm',
+ 'armv8': 'amd64_arm64'}.get(arch_host)
+ elif arch_build == 'x86':
+ arch = {'x86': 'x86',
+ 'x86_64': 'x86_amd64',
+ 'armv7': 'x86_arm',
+ 'armv8': 'x86_arm64'}.get(arch_host)
+
+ if not arch:
+ raise ConanException('vcvars unsupported architectures %s-%s' % (arch_build, arch_host))
+
+ return arch
diff --git a/conans/client/tools/win.py b/conans/client/tools/win.py
index 93328e37159..ce9b056332b 100644
--- a/conans/client/tools/win.py
+++ b/conans/client/tools/win.py
@@ -180,8 +180,8 @@ def build_sln_command(settings, sln_path, targets=None, upgrade_project=True, bu
self.run(command)
"""
conan_v2_behavior("'tools.build_sln_command' is deprecated, use 'MSBuild()' helper instead")
- from conans.client.build.msbuild import MSBuild
- tmp = MSBuild(settings)
+ from conans.client.build.msbuild import MSBuildHelper
+ tmp = MSBuildHelper(settings)
output = default_output(output, fn_name='conans.client.tools.win.build_sln_command')
tmp._output = output
diff --git a/conans/test/integration/toolchains/test_msbuild.py b/conans/test/integration/toolchains/test_msbuild.py
index 224ae1a0bec..eda0e1ea27f 100644
--- a/conans/test/integration/toolchains/test_msbuild.py
+++ b/conans/test/integration/toolchains/test_msbuild.py
@@ -3,7 +3,7 @@
import textwrap
import unittest
-
+from conans.client.toolchain.visual import vcvars_command
from conans.client.tools import vs_installation_path
from conans.test.utils.tools import TestClient
@@ -96,15 +96,16 @@
<WholeProgramOptimization>true</WholeProgramOptimization>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
+ <!-- Very IMPORTANT this should go BEFORE the Microsoft.Cpp.props. If it goes after, the Toolset definition is ignored -->
+ <ImportGroup Label="PropertySheets">
+ <Import Project="..\conan\conan_Hello.props" />
+ <Import Project="..\conan\conan_toolchain.props" />
+ </ImportGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" />
<ImportGroup Label="ExtensionSettings">
</ImportGroup>
<ImportGroup Label="Shared">
</ImportGroup>
- <ImportGroup Label="PropertySheets">
- <Import Project="..\conan\conan_Hello.props" />
- <Import Project="..\conan\conan_toolchain.props" />
- </ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props"
Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')"
@@ -218,7 +219,7 @@
class WinTest(unittest.TestCase):
conanfile = textwrap.dedent("""
- from conans import ConanFile, MSBuildToolchain
+ from conans import ConanFile, MSBuildToolchain, MSBuild
class App(ConanFile):
settings = "os", "arch", "compiler", "build_type"
requires = "hello/0.1"
@@ -233,6 +234,10 @@ def toolchain(self):
else:
tc.preprocessor_definitions["DEFINITIONS_CONFIG"] = "Release"
tc.write_toolchain_files()
+
+ def build(self):
+ msbuild = MSBuild(self)
+ msbuild.build("MyProject.sln")
""")
app = textwrap.dedent("""
@@ -316,22 +321,18 @@ def test_toolchain_win(self):
# Run the configure corresponding to this test case
client.run("install . %s -if=conan" % (settings, ))
- self.assertIn("conanfile.py: MSBuildToolchain created "
- "conan_toolchain_release_win32.props", client.out)
- vs_path = vs_installation_path("15")
- vcvars_path = os.path.join(vs_path, "VC/Auxiliary/Build/vcvarsall.bat")
+ self.assertIn("conanfile.py: MSBuildToolchain created conan_toolchain_release_win32.props",
+ client.out)
+ client.run("build . -if=conan")
- cmd = ('set "VSCMD_START_DIR=%%CD%%" && '
- '"%s" x86 && msbuild "MyProject.sln" /p:Configuration=Release' % vcvars_path)
- client.run_command(cmd)
self.assertIn("Visual Studio 2017", client.out)
self.assertIn("[vcvarsall.bat] Environment initialized for: 'x86'", client.out)
self._run_app(client, "x86", "Release")
self.assertIn("AppMSCVER 17!!", client.out)
self.assertIn("AppCppStd 17!!!", client.out)
- cmd = ('set "VSCMD_START_DIR=%%CD%%" && '
- '"%s" x86 && dumpbin /dependents "Release\\MyApp.exe"' % vcvars_path)
+ vcvars = vcvars_command(version="15", architecture="x86")
+ cmd = ('%s && dumpbin /dependents "Release\\MyApp.exe"' % vcvars)
client.run_command(cmd)
# No other DLLs dependencies rather than kernel, it was MT, statically linked
self.assertIn("KERNEL32.dll", client.out)
@@ -363,23 +364,15 @@ def test_toolchain_win_debug(self):
client.run("install . %s -if=conan" % (settings, ))
self.assertIn("conanfile.py: MSBuildToolchain created conan_toolchain_debug_x64.props",
client.out)
- vs_path = vs_installation_path("15")
- vcvars_path = os.path.join(vs_path, "VC/Auxiliary/Build/vcvarsall.bat")
-
- # FIXME: This is cheating, pass the toolset on the command line, nothing that devs would do
- cmd = ('set "VSCMD_START_DIR=%%CD%%" && '
- '"%s" x64 && '
- 'msbuild "MyProject.sln" /p:Configuration=Debug /p:PlatformToolset="v140"'
- % vcvars_path)
- client.run_command(cmd)
+ client.run("build . -if=conan")
self.assertIn("Visual Studio 2017", client.out)
self.assertIn("[vcvarsall.bat] Environment initialized for: 'x64'", client.out)
self._run_app(client, "x64", "Debug")
self.assertIn("AppMSCVER 15!!", client.out)
self.assertIn("AppCppStd 14!!!", client.out)
- cmd = ('set "VSCMD_START_DIR=%%CD%%" && '
- '"%s" x64 && dumpbin /dependents "x64\\Debug\\MyApp.exe"' % vcvars_path)
+ vcvars = vcvars_command(version="15", architecture="amd64")
+ cmd = ('%s && dumpbin /dependents "x64\\Debug\\MyApp.exe"' % vcvars)
client.run_command(cmd)
self.assertIn("MSVCP140D.dll", client.out)
self.assertIn("VCRUNTIME140D.dll", client.out)
diff --git a/conans/test/unittests/client/build/msbuild_test.py b/conans/test/unittests/client/build/msbuild_test.py
index 7ff8c1109d1..0370f74d107 100644
--- a/conans/test/unittests/client/build/msbuild_test.py
+++ b/conans/test/unittests/client/build/msbuild_test.py
@@ -120,7 +120,7 @@ def test_binary_logging_off_implicit(self):
self.assertNotIn("/bl", command)
@unittest.skipUnless(platform.system() == "Windows", "Requires MSBuild")
- @mock.patch("conans.client.build.msbuild.MSBuild.get_version")
+ @mock.patch("conans.client.build.msbuild.MSBuildHelper.get_version")
def test_binary_logging_not_supported(self, mock_get_version):
mock_get_version.return_value = Version("14")
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-7867@00ee154
|
conan-io/conan
|
Python
| 7,867
|
fix cpp_info.names and cpp_info.filenames in the local flow
|
Changelog: Bugfix: Fix local flow (conan install + build) support for ``cpp_info.names`` and ``cpp_info.filenames``.
Docs: Omit
Close https://github.com/conan-io/conan/issues/7854
|
2020-10-12T22:03:42Z
|
[bug] deps_cpp_info.get_name returns different results with `create` and `build`
### Environment Details (include every applicable attribute)
* Operating System+version: CentOS 7
* Compiler+version: gcc 4.8.5
* Conan version: 1.30.0
* Python version: 3.6.8
### Steps to reproduce (Include if Applicable)
The following conanfile.py
```
import conans
class Package(conans.ConanFile):
name = 'package-name'
version = '0.0.1'
requires = 'gtest/1.10.0'
generators = 'cmake_find_package'
def build(self):
print(self.deps_cpp_info['gtest'].get_name('cmake_find_package'))
```
prints a different gtest package name depending on which conan commands I use:
`conan create .` prints `GTest`
`conan install . && conan build .` prints `gtest`
I expect the second command to print `GTest` like the first one.
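(For context, the `GTest` name comes from the upstream recipe declaring a generator-specific name in `package_info()`; a minimal sketch of such a recipe, not the actual gtest one:)

```python
import conans


class GTestRecipe(conans.ConanFile):
    name = 'gtest'
    version = '1.10.0'

    def package_info(self):
        # consumers of the cmake_find_package generator should see "GTest" instead of "gtest"
        self.cpp_info.names['cmake_find_package'] = 'GTest'
```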
|
Thanks @mvoelkle-cern for reporting! Indeed a bug (a missing piece of this new experimental feature).
I have submitted a fix in https://github.com/conan-io/conan/pull/7867 for the next Conan 1.31 release.
|
[
{
"body": "### Environment Details (include every applicable attribute)\r\n * Operating System+version: CentOS 7\r\n * Compiler+version: gcc 4.8.5\r\n * Conan version: 1.30.0\r\n * Python version: 3.6.8\r\n\r\n### Steps to reproduce (Include if Applicable)\r\n\r\nThe following conanfile.py\r\n\r\n```\r\nimport conans\r\n\r\nclass Package(conans.ConanFile):\r\n\tname = 'package-name'\r\n\tversion = '0.0.1'\r\n\trequires = 'gtest/1.10.0'\r\n\tgenerators = 'cmake_find_package'\r\n\r\n\tdef build(self):\r\n\t\tprint(self.deps_cpp_info['gtest'].get_name('cmake_find_package'))\r\n```\r\n\r\nprints different gtest package name depending on which conan commands I use:\r\n\r\n`conan create .` prints `GTest`\r\n`conan install . && conan build .` prints `gtest`\r\n\r\nI expect the second command to print `GTest` like the first one.",
"number": 7854,
"title": "[bug] deps_cpp_info.get_name returns different results with `create` and `build`"
}
] |
3609faaea4b56a1a3945978214eb571418762600
|
{
"head_commit": "00ee154e742b4786bc4ad5096e0a6564701965ca",
"head_commit_message": "fix tests",
"patch_to_review": "diff --git a/conans/client/generators/text.py b/conans/client/generators/text.py\nindex 0e333a35114..f3bbeed4d7c 100644\n--- a/conans/client/generators/text.py\n+++ b/conans/client/generators/text.py\n@@ -43,6 +43,8 @@ def __init__(self, cpp_info):\n self.version = cpp_info.version\n self.name = cpp_info.get_name(TXTGenerator.name)\n self.rootpath = \"%s\" % cpp_info.rootpath.replace(\"\\\\\", \"/\")\n+ self.generatornames = \"\\n\".join(\"%s:%s\" % (k, v) for k, v in cpp_info.names.items())\n+ self.generatorfilenames = \"\\n\".join(\"%s:%s\" % (k, v) for k, v in cpp_info.filenames.items())\n \n \n class TXTGenerator(Generator):\n@@ -148,11 +150,11 @@ def _relativize_path(p, _rootpath):\n return p\n \n def _populate_cpp_info(_cpp_info, _data, _rootpath):\n- for key, value in _data.items():\n+ for key, v in _data.items():\n if key.endswith('dirs'):\n- value = [_relativize_path(it, _rootpath) for it in value]\n- value = ['' if it == '.' else it for it in value]\n- setattr(_cpp_info, key, value)\n+ v = [_relativize_path(it, _rootpath) for it in v]\n+ v = ['' if it == '.' else it for it in v]\n+ setattr(_cpp_info, key, v)\n \n if None in data:\n del data[None]\n@@ -164,7 +166,17 @@ def _populate_cpp_info(_cpp_info, _data, _rootpath):\n rootpath = no_config_data.pop('rootpath')[0]\n dep_cpp_info = CppInfo(dep, rootpath)\n dep_cpp_info.filter_empty = filter_empty\n- dep_cpp_info.names[TXTGenerator.name] = no_config_data.pop('name')[0]\n+ _ = no_config_data.pop('name')[0]\n+ version = no_config_data.pop('version', [\"\"])[0]\n+ dep_cpp_info.version = version\n+ generatornames = no_config_data.pop(\"generatornames\", []) # can be empty\n+ for n in generatornames:\n+ gen, value = n.split(\":\")\n+ dep_cpp_info.names[gen] = value\n+ generatorfilenames = no_config_data.pop(\"generatorfilenames\", []) # can be empty\n+ for n in generatorfilenames:\n+ gen, value = n.split(\":\")\n+ dep_cpp_info.filenames[gen] = value\n dep_cpp_info.sysroot = no_config_data.pop('sysroot', [\"\"])[0]\n _populate_cpp_info(dep_cpp_info, no_config_data, rootpath)\n \n@@ -174,8 +186,6 @@ def _populate_cpp_info(_cpp_info, _data, _rootpath):\n _populate_cpp_info(cpp_info_config, config_data, rootpath)\n \n # Add to the dependecy list\n- version = no_config_data.pop('version', [\"\"])[0]\n- dep_cpp_info.version = version\n deps_cpp_info.add(dep, DepCppInfo(dep_cpp_info))\n \n return deps_cpp_info\n@@ -216,7 +226,9 @@ def content(self):\n # Makes no sense to have an accumulated rootpath\n template_deps = (template + '[rootpath{dep}]\\n{deps.rootpath}\\n\\n' +\n '[name{dep}]\\n{deps.name}\\n\\n' +\n- '[version{dep}]\\n{deps.version}\\n\\n')\n+ '[version{dep}]\\n{deps.version}\\n\\n' +\n+ '[generatornames{dep}]\\n{deps.generatornames}\\n\\n' +\n+ '[generatorfilenames{dep}]\\n{deps.generatorfilenames}\\n\\n')\n \n for dep_name, dep_cpp_info in self.deps_build_info.dependencies:\n dep = \"_\" + dep_name\ndiff --git a/conans/test/functional/generators/package_info/package_info_test.py b/conans/test/functional/generators/package_info/package_info_test.py\nindex e019bc1a428..c6577a4b462 100644\n--- a/conans/test/functional/generators/package_info/package_info_test.py\n+++ b/conans/test/functional/generators/package_info/package_info_test.py\n@@ -470,3 +470,39 @@ def package_requires_in_components_requires_test(self):\n env_info={})\n client.save({\"conanfile.py\": conanfile})\n client.run(\"create conanfile.py\") # Correct usage\n+\n+ def test_get_name_local_flow(self):\n+ # 
https://github.com/conan-io/conan/issues/7854\n+ client = TestClient()\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ class Package(ConanFile):\n+ def package_info(self):\n+ self.cpp_info.names[\"cmake_find_package\"] = \"GTest\"\n+ self.cpp_info.filenames[\"cmake_find_package\"] = \"GtesT\"\n+ \"\"\")\n+ client.save({\"conanfile.py\": conanfile})\n+ client.run(\"create . gtest/1.0@\")\n+\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ class Package(ConanFile):\n+ requires = 'gtest/1.0'\n+ generators = 'cmake_find_package'\n+\n+ def build(self):\n+ info = self.deps_cpp_info['gtest'].get_name('cmake_find_package')\n+ self.output.info(\"GTEST_INFO: %s\" % info)\n+ fileinfo = self.deps_cpp_info['gtest'].get_filename('cmake_find_package')\n+ self.output.info(\"GTEST_FILEINFO: %s\" % fileinfo)\n+ \"\"\")\n+ client.save({\"conanfile.py\": conanfile})\n+ client.run(\"create . pkg/1.0@\")\n+ self.assertIn(\"pkg/1.0: GTEST_INFO: GTest\", client.out)\n+ self.assertIn(\"pkg/1.0: GTEST_FILEINFO: GtesT\", client.out)\n+ client.run(\"install . pkg/1.0@\")\n+ self.assertIn(\"Generator cmake_find_package created FindGtesT.cmake\", client.out)\n+ client.run(\"build .\")\n+ self.assertIn(\"conanfile.py (pkg/1.0): GTEST_INFO: GTest\", client.out)\n+ self.assertIn(\"conanfile.py (pkg/1.0): GTEST_FILEINFO: GtesT\", client.out)\n+\ndiff --git a/conans/test/unittests/client/generators/txt/test_dump_load.py b/conans/test/unittests/client/generators/txt/test_dump_load.py\nindex 7b5b17c7e06..f73fed90c93 100644\n--- a/conans/test/unittests/client/generators/txt/test_dump_load.py\n+++ b/conans/test/unittests/client/generators/txt/test_dump_load.py\n@@ -17,7 +17,7 @@ def test_names_per_generator(self):\n cpp_info = CppInfo(\"pkg_name\", \"root\")\n cpp_info.name = \"name\"\n cpp_info.names[\"txt\"] = \"txt_name\"\n- cpp_info.names[\"cmake_find_package\"] = \"cmake_find_package\"\n+ cpp_info.names[\"cmake_find_package\"] = \"SpecialName\"\n conanfile = ConanFile(TestBufferConanOutput(), None)\n conanfile.initialize(Settings({}), EnvValues())\n conanfile.deps_cpp_info.add(\"pkg_name\", DepCppInfo(cpp_info))\n@@ -25,9 +25,8 @@ def test_names_per_generator(self):\n parsed_deps_cpp_info, _, _, _ = TXTGenerator.loads(content, filter_empty=False)\n \n parsed_cpp_info = parsed_deps_cpp_info[\"pkg_name\"]\n- # FIXME: Conan v2: Remove 'txt' generator or serialize all the names\n self.assertEqual(parsed_cpp_info.get_name(\"txt\"), \"txt_name\")\n- self.assertEqual(parsed_cpp_info.get_name(\"cmake_find_package\"), \"pkg_name\")\n+ self.assertEqual(parsed_cpp_info.get_name(\"cmake_find_package\"), \"SpecialName\")\n self.assertEqual(parsed_cpp_info.get_name(\"pkg_config\"), \"pkg_name\")\n \n def test_idempotent(self):\ndiff --git a/conans/test/unittests/model/build_info_test.py b/conans/test/unittests/model/build_info_test.py\nindex 7f2d4775c40..6c3931141f5 100644\n--- a/conans/test/unittests/model/build_info_test.py\n+++ b/conans/test/unittests/model/build_info_test.py\n@@ -3,7 +3,7 @@\n from collections import defaultdict, namedtuple\n \n from conans.client.generators import TXTGenerator\n-from conans.model.build_info import CppInfo, DepsCppInfo\n+from conans.model.build_info import DepsCppInfo\n from conans.model.env_info import DepsEnvInfo, EnvInfo\n from conans.model.user_info import DepsUserInfo\n from conans.test.utils.test_files import temp_folder\n"
}
|
[
{
"diff_hunk": "@@ -164,7 +166,17 @@ def _populate_cpp_info(_cpp_info, _data, _rootpath):\n rootpath = no_config_data.pop('rootpath')[0]\n dep_cpp_info = CppInfo(dep, rootpath)\n dep_cpp_info.filter_empty = filter_empty\n- dep_cpp_info.names[TXTGenerator.name] = no_config_data.pop('name')[0]\n+ _ = no_config_data.pop('name')[0]\n+ version = no_config_data.pop('version', [\"\"])[0]\n+ dep_cpp_info.version = version\n+ generatornames = no_config_data.pop(\"generatornames\", []) # can be empty\n+ for n in generatornames:\n+ gen, value = n.split(\":\")\n+ dep_cpp_info.names[gen] = value\n+ generatorfilenames = no_config_data.pop(\"generatorfilenames\", []) # can be empty\n+ for n in generatorfilenames:\n+ gen, value = n.split(\":\")",
"line": null,
"original_line": 178,
"original_start_line": null,
"path": "conans/client/generators/text.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n gen, value = n.split(\":\", 1)\r\n```"
},
{
"diff_hunk": "@@ -164,7 +166,17 @@ def _populate_cpp_info(_cpp_info, _data, _rootpath):\n rootpath = no_config_data.pop('rootpath')[0]\n dep_cpp_info = CppInfo(dep, rootpath)\n dep_cpp_info.filter_empty = filter_empty\n- dep_cpp_info.names[TXTGenerator.name] = no_config_data.pop('name')[0]\n+ _ = no_config_data.pop('name')[0]\n+ version = no_config_data.pop('version', [\"\"])[0]\n+ dep_cpp_info.version = version\n+ generatornames = no_config_data.pop(\"generatornames\", []) # can be empty\n+ for n in generatornames:\n+ gen, value = n.split(\":\")",
"line": null,
"original_line": 174,
"original_start_line": null,
"path": "conans/client/generators/text.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n gen, value = n.split(\":\", 1)\r\n```"
},
{
"diff_hunk": "@@ -43,6 +43,8 @@ def __init__(self, cpp_info):\n self.version = cpp_info.version\n self.name = cpp_info.get_name(TXTGenerator.name)\n self.rootpath = \"%s\" % cpp_info.rootpath.replace(\"\\\\\", \"/\")\n+ self.generatornames = \"\\n\".join(\"%s:%s\" % (k, v) for k, v in cpp_info.names.items())\n+ self.generatorfilenames = \"\\n\".join(\"%s:%s\" % (k, v) for k, v in cpp_info.filenames.items())",
"line": null,
"original_line": 47,
"original_start_line": null,
"path": "conans/client/generators/text.py",
"start_line": null,
"text": "@user1:\nBetter to use `=` as separator (more ini-like format)? Or is there any reason to use `:`?"
}
] |
fb0b769c00659257654d59a231b0586574529413
|
diff --git a/conans/client/generators/text.py b/conans/client/generators/text.py
index 0e333a35114..0fdcb07fb6f 100644
--- a/conans/client/generators/text.py
+++ b/conans/client/generators/text.py
@@ -43,6 +43,8 @@ def __init__(self, cpp_info):
self.version = cpp_info.version
self.name = cpp_info.get_name(TXTGenerator.name)
self.rootpath = "%s" % cpp_info.rootpath.replace("\\", "/")
+ self.generatornames = "\n".join("%s=%s" % (k, v) for k, v in cpp_info.names.items())
+ self.generatorfilenames = "\n".join("%s=%s" % (k, v) for k, v in cpp_info.filenames.items())
class TXTGenerator(Generator):
@@ -148,11 +150,11 @@ def _relativize_path(p, _rootpath):
return p
def _populate_cpp_info(_cpp_info, _data, _rootpath):
- for key, value in _data.items():
+ for key, v in _data.items():
if key.endswith('dirs'):
- value = [_relativize_path(it, _rootpath) for it in value]
- value = ['' if it == '.' else it for it in value]
- setattr(_cpp_info, key, value)
+ v = [_relativize_path(it, _rootpath) for it in v]
+ v = ['' if it == '.' else it for it in v]
+ setattr(_cpp_info, key, v)
if None in data:
del data[None]
@@ -164,7 +166,17 @@ def _populate_cpp_info(_cpp_info, _data, _rootpath):
rootpath = no_config_data.pop('rootpath')[0]
dep_cpp_info = CppInfo(dep, rootpath)
dep_cpp_info.filter_empty = filter_empty
- dep_cpp_info.names[TXTGenerator.name] = no_config_data.pop('name')[0]
+ _ = no_config_data.pop('name')[0]
+ version = no_config_data.pop('version', [""])[0]
+ dep_cpp_info.version = version
+ generatornames = no_config_data.pop("generatornames", []) # can be empty
+ for n in generatornames:
+ gen, value = n.split("=", 1)
+ dep_cpp_info.names[gen] = value
+ generatorfilenames = no_config_data.pop("generatorfilenames", []) # can be empty
+ for n in generatorfilenames:
+ gen, value = n.split("=", 1)
+ dep_cpp_info.filenames[gen] = value
dep_cpp_info.sysroot = no_config_data.pop('sysroot', [""])[0]
_populate_cpp_info(dep_cpp_info, no_config_data, rootpath)
@@ -174,8 +186,6 @@ def _populate_cpp_info(_cpp_info, _data, _rootpath):
_populate_cpp_info(cpp_info_config, config_data, rootpath)
# Add to the dependecy list
- version = no_config_data.pop('version', [""])[0]
- dep_cpp_info.version = version
deps_cpp_info.add(dep, DepCppInfo(dep_cpp_info))
return deps_cpp_info
@@ -216,7 +226,9 @@ def content(self):
# Makes no sense to have an accumulated rootpath
template_deps = (template + '[rootpath{dep}]\n{deps.rootpath}\n\n' +
'[name{dep}]\n{deps.name}\n\n' +
- '[version{dep}]\n{deps.version}\n\n')
+ '[version{dep}]\n{deps.version}\n\n' +
+ '[generatornames{dep}]\n{deps.generatornames}\n\n' +
+ '[generatorfilenames{dep}]\n{deps.generatorfilenames}\n\n')
for dep_name, dep_cpp_info in self.deps_build_info.dependencies:
dep = "_" + dep_name
diff --git a/conans/test/functional/generators/package_info/package_info_test.py b/conans/test/functional/generators/package_info/package_info_test.py
index e019bc1a428..c6577a4b462 100644
--- a/conans/test/functional/generators/package_info/package_info_test.py
+++ b/conans/test/functional/generators/package_info/package_info_test.py
@@ -470,3 +470,39 @@ def package_requires_in_components_requires_test(self):
env_info={})
client.save({"conanfile.py": conanfile})
client.run("create conanfile.py") # Correct usage
+
+ def test_get_name_local_flow(self):
+ # https://github.com/conan-io/conan/issues/7854
+ client = TestClient()
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+ class Package(ConanFile):
+ def package_info(self):
+ self.cpp_info.names["cmake_find_package"] = "GTest"
+ self.cpp_info.filenames["cmake_find_package"] = "GtesT"
+ """)
+ client.save({"conanfile.py": conanfile})
+ client.run("create . gtest/1.0@")
+
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+ class Package(ConanFile):
+ requires = 'gtest/1.0'
+ generators = 'cmake_find_package'
+
+ def build(self):
+ info = self.deps_cpp_info['gtest'].get_name('cmake_find_package')
+ self.output.info("GTEST_INFO: %s" % info)
+ fileinfo = self.deps_cpp_info['gtest'].get_filename('cmake_find_package')
+ self.output.info("GTEST_FILEINFO: %s" % fileinfo)
+ """)
+ client.save({"conanfile.py": conanfile})
+ client.run("create . pkg/1.0@")
+ self.assertIn("pkg/1.0: GTEST_INFO: GTest", client.out)
+ self.assertIn("pkg/1.0: GTEST_FILEINFO: GtesT", client.out)
+ client.run("install . pkg/1.0@")
+ self.assertIn("Generator cmake_find_package created FindGtesT.cmake", client.out)
+ client.run("build .")
+ self.assertIn("conanfile.py (pkg/1.0): GTEST_INFO: GTest", client.out)
+ self.assertIn("conanfile.py (pkg/1.0): GTEST_FILEINFO: GtesT", client.out)
+
diff --git a/conans/test/unittests/client/generators/txt/test_dump_load.py b/conans/test/unittests/client/generators/txt/test_dump_load.py
index 7b5b17c7e06..2f472c6700e 100644
--- a/conans/test/unittests/client/generators/txt/test_dump_load.py
+++ b/conans/test/unittests/client/generators/txt/test_dump_load.py
@@ -17,7 +17,8 @@ def test_names_per_generator(self):
cpp_info = CppInfo("pkg_name", "root")
cpp_info.name = "name"
cpp_info.names["txt"] = "txt_name"
- cpp_info.names["cmake_find_package"] = "cmake_find_package"
+ cpp_info.names["cmake_find_package"] = "SpecialName"
+ cpp_info.filenames["cmake_find_package"] = "SpecialFileName"
conanfile = ConanFile(TestBufferConanOutput(), None)
conanfile.initialize(Settings({}), EnvValues())
conanfile.deps_cpp_info.add("pkg_name", DepCppInfo(cpp_info))
@@ -25,9 +26,9 @@ def test_names_per_generator(self):
parsed_deps_cpp_info, _, _, _ = TXTGenerator.loads(content, filter_empty=False)
parsed_cpp_info = parsed_deps_cpp_info["pkg_name"]
- # FIXME: Conan v2: Remove 'txt' generator or serialize all the names
self.assertEqual(parsed_cpp_info.get_name("txt"), "txt_name")
- self.assertEqual(parsed_cpp_info.get_name("cmake_find_package"), "pkg_name")
+ self.assertEqual(parsed_cpp_info.get_name("cmake_find_package"), "SpecialName")
+ self.assertEqual(parsed_cpp_info.get_filename("cmake_find_package"), "SpecialFileName")
self.assertEqual(parsed_cpp_info.get_name("pkg_config"), "pkg_name")
def test_idempotent(self):
diff --git a/conans/test/unittests/model/build_info_test.py b/conans/test/unittests/model/build_info_test.py
index 7f2d4775c40..6c3931141f5 100644
--- a/conans/test/unittests/model/build_info_test.py
+++ b/conans/test/unittests/model/build_info_test.py
@@ -3,7 +3,7 @@
from collections import defaultdict, namedtuple
from conans.client.generators import TXTGenerator
-from conans.model.build_info import CppInfo, DepsCppInfo
+from conans.model.build_info import DepsCppInfo
from conans.model.env_info import DepsEnvInfo, EnvInfo
from conans.model.user_info import DepsUserInfo
from conans.test.utils.test_files import temp_folder
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-8353@b442e5a
|
conan-io/conan
|
Python
| 8,353
|
Fix: add preprocessor_definitions to Meson + CC/CXX from build requirements
|
Changelog: Fix: The new `MesonToolchain` now takes the declared environment variables (`CC`, `CXX`...) from build-requires and profiles to set the variables `c`, `cpp`, `c_ld`, `cpp_ld`, etc., in the `conan_meson_native.ini`
Changelog: Fix: Added new `preprocessor_definitions` to new Meson build helper.
Changelog: Fix: The new `MesonToolchain` now allows adjusting any variable before generating the `conan_meson_native.ini` file.
Docs: https://github.com/conan-io/docs/pull/2139
Closes https://github.com/conan-io/conan/issues/8311
this is mostly to align the interface with `CMake`, `Make` and `MSBuild` (feature parity)
important side change: `c_args`, `c_link_args`, `cpp_args` and `cpp_link_args` are now arrays, as documented at https://mesonbuild.com/Builtin-options.html#compiler-options:
```
option | default | possible values | description
-- | -- | -- | --
c_args | | free-form comma-separated list | C compile arguments to use
c_link_args | | free-form comma-separated list | C link arguments to use
cpp_args | | free-form comma-separated list | C++ compile arguments to use
cpp_link_args | | free-form comma-separated list | C++ link arguments to use
```
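A minimal usage sketch of the new `preprocessor_definitions` attribute (mirroring the functional test added in this PR; the `MY_DEFINE` name is only illustrative), showing how a recipe feeds definitions into the generated `conan_meson_native.ini`:

```python
from conans import ConanFile
from conan.tools.meson import Meson, MesonToolchain


class App(ConanFile):
    settings = "os", "arch", "compiler", "build_type"

    def generate(self):
        tc = MesonToolchain(self)
        # values are rendered as -DNAME="value" entries appended to c_args/cpp_args
        tc.preprocessor_definitions["MY_DEFINE"] = "MyValue"
        tc.generate()

    def build(self):
        meson = Meson(self)
        meson.configure()
        meson.build()
```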
|
2021-01-18T12:18:42Z
|
[bug] MesonToolchain does not pick up CC/CXX from build requirements (in build profile)
The compiler paths of a build requirement with a cross compiler that sets the CC/CXX environment variables
are not available in `conan_meson_cross.ini`.
I have a build requirement that sets some compiler variables:
```
def package_info(self):
self.env_info.CC = os.path.join(self.package_folder, "bin", "aarch64-none-elf-gcc")
self.env_info.CC_LD = os.path.join(self.package_folder, "bin", "aarch64-none-elf-gcc")
self.env_info.CXX = os.path.join(self.package_folder, "bin", "aarch64-none-elf-g++")
self.env_info.CXX_LD = os.path.join(self.package_folder, "bin", "aarch64-none-elf-g++")
self.env_info.AR = os.path.join(self.package_folder, "bin", "aarch64-none-elf-ar")
```
But the `[binaries]` section of the cross file remains empty.
When I run conan with `CC=abc conan create .... -pr:h ... -pr:b ...`, then `c = 'abc'` is present in the `binaries` section.
`CFLAGS/CXXFLAGS` from a build requirement are also not propagated.
### Environment Details (include every applicable attribute)
* Operating System+version: Fedora Linux 30
* Compiler+version: native: gcc 9.3, cross: gcc 10.2
* Conan version: 1.32.1
* Python version: 3.7.7
### Steps to reproduce (Include if Applicable)
1. Create a conan recipe that packages a compiler (sets `CC`/`CXX`/...). Let's name it `crosscompiler/0.1`.
2. Create a host profile that adds the package created in step 1 as a build requirement, let's name the profile: `cross`.
3. Create a build profile for your native compiler. Let's name it `native`.
4. Cross build a meson project (using `MesonToolchain`), like: `conan create . meson-project/0.1@ -pr:h cross -pr:b native`
The generated `conan_meson_cross.ini` won't contain the `properties.c` values, set by `crosscompiler/0.1`.
If you'd run `CC=abc conan create . meson-project/0.1@ -pr:h cross -pr:b native`, then it would.
CC'ing @SSE4 for being the author of `MesonToolchain`.
|
@jgsogo do you have an idea why it could happen? e.g. meson tests use CC from the conan profile, and it works flawlessly
Hi!
The environment from the build requires is populated before entering the `build()` method. The `MesonToolchain` takes all these values from the environment, but they are not available in the `generate()` function.
I'm not sure how we want to pass these variables from now on (ping @memsharded); we can populate the environment during `generate()` the same way we do during `build()`, or we can leverage https://github.com/conan-io/conan/pull/8266
Not a bug because it's all about experimental features, but something to define and implement for sure.
🤔 With the toolchains and generators, if everything is written down during the `generate()` call, then we no longer need the environment in the `build()` call.
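An illustrative sketch of the gap being discussed (hypothetical consumer recipe; at this point the environment from build_requires is applied around `build()` by `run_build_method`, but not around `generate()`):

```python
import os
from conans import ConanFile


class Consumer(ConanFile):
    settings = "os", "arch", "compiler", "build_type"
    build_requires = "crosscompiler/0.1"

    def generate(self):
        # env_info from build_requires is not injected here, so CC is typically missing
        self.output.info("CC in generate(): %s" % os.environ.get("CC"))

    def build(self):
        # the environment is applied around build(), so the variable is visible here
        self.output.info("CC in build(): %s" % os.environ.get("CC"))
```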
I would say that if we want to be very transparent and have a better developer experience, the way to go might be to generate virtualenv files for everything in the environment. After a ``conan install`` a developer should be able to build and run, with their native tools (cmake, make, meson), without relying on ``conan build``. The only way is for Conan to translate whatever is in its environment, its build_requires, etc. into a virtualenv file that can be used. I would say that the developer experience should be something similar to:
```bash
$ git clone .... & cd ...
$ conan install ...
$ (if necessary) source conantoolchainenv.sh
$ meson ... (using Conan generated files with MesonToolchain)
# or
$ cmake .... -DCMAKE_TOOLCHAIN_FILE=conantoolchain.cmake
```
And yes, if the environment becomes something explicit to the build, then the Conan ``build()`` method do not need to change the environment before launching, but instead it should be explicit in some way:
```python
def build(self):
with activate_env("conantoolchainenv.sh"):
meson = Meson(self)
meson.build()
```
This is a very preliminary vision, sure we need to work on it, but the main principles would be:
- Developers should be able to have a native build experience without invoking ``conan build``, achieving the same build as with ``conan create`` without much pain
- Making the process and the environment more explicit and decoupled, so it is easier to debug things, and probably easier to achieve greater configurability.
Right now `MesonToolchain` writes a file with values taken from the environment. I don't know if that `conan_meson_cross.ini` file can read the environment at runtime (so it could leverage the user/Conan running `source conantoolchainenv.sh` to populate the environment), but it would be a completely different implementation of that toolchain class.
Before thinking about the local workflow, we need `conan create` to work. As it is implemented now, we need to pass configuration to the `generate()` function, which is where `MesonToolchain` is used:
> I'm not sure how we want to pass these variables from now on (ping @memsharded ), we can populate the environment during `generate()` the same way we do during `build()` or we can leverage on #8266
...or use a completely different approach for MesonToolchain.
Ok, now I understand the issue better. It is not about the transfer between the generate() and the build(), but about how the generate() populates things. That would be related to what @madebr was saying about the new ``conf``, if it were a way of propagating information from the build_requires to the consumers of those build_requires.
I would like it if we could try to explicitly define the interfaces about dependencies that will be available in Conan 2.0, so the ``generate()`` method uses them, the environment of course being one of them. But maybe that doesn't mean automatically activating the environment for the ``generate()`` method so it can capture those env-vars and write them to files, because that also makes it difficult to separate things defined by Conan, potentially making the generated environment files very noisy.
it seems like `conanfile` has an `env` attribute, which is populated from the `build_requires` (method `_propagate_info`).
this is used at least by the `VirtualEnvGenerator` right now.
however, the build method seems to be very special (see `run_build_method` which does env management).
moreover, it seems like by design, `build_requires` are only applied to the `build` method?
at least I see that `self._conanfile.env` is empty, and `self._conanfile.build_requires` is also empty, during the `generate` method.
addition from #8311 - CC from the profile isn't captured by `MesonToolchain`; it is captured by the `meson` executable itself.
it turns out it's problematic to implement `preprocessor_definitions` because of this.
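For reference, a trimmed sketch (names and value quoting simplified) of the direction taken in the merged patch below: the toolchain reads the build-context environment explicitly through `VirtualEnv`, instead of relying on `os.environ`:

```python
from conan.tools.env import VirtualEnv


class MesonToolchainSketch(object):
    def __init__(self, conanfile):
        self._conanfile = conanfile
        # environment of the build context: build profile plus build_requires
        build_env = VirtualEnv(conanfile).build_environment()
        self.c = build_env.get("CC")                                   # -> [binaries] c
        self.cpp = build_env.get("CXX")                                # -> [binaries] cpp
        self.c_ld = build_env.get("CC_LD") or build_env.get("LD")      # -> [binaries] c_ld
        self.cpp_ld = build_env.get("CXX_LD") or build_env.get("LD")   # -> [binaries] cpp_ld
```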
|
[
{
"body": "The compiler paths of a build requirement with a cross compiler that sets the CC/CXX environment variables\r\nare not available in `conan_meson_cross.ini`.\r\n\r\nI have a build requirement that sets some compiler variables:\r\n```\r\ndef package_info(self):\r\n self.env_info.CC = os.path.join(self.package_folder, \"bin\", \"aarch64-none-elf-gcc\")\r\n self.env_info.CC_LD = os.path.join(self.package_folder, \"bin\", \"aarch64-none-elf-gcc\")\r\n self.env_info.CXX = os.path.join(self.package_folder, \"bin\", \"aarch64-none-elf-g++\")\r\n self.env_info.CXX_LD = os.path.join(self.package_folder, \"bin\", \"aarch64-none-elf-g++\")\r\n self.env_info.AR = os.path.join(self.package_folder, \"bin\", \"aarch64-none-elf-ar\")\r\n```\r\n\r\nBut the `[binaries]` section of the cross file remains empty.\r\n\r\nWhen I run conan with `CC=abc conan create .... -pr:h ... -pr:b ...`, then `c = 'abc' is present in the `binaries` section.\r\n\r\n`CFLAGS/CXXFLAGS` from a build requirement are also not propagated.\r\n\r\n\r\n### Environment Details (include every applicable attribute)\r\n * Operating System+version: Fedora Linux 30\r\n * Compiler+version: native: gcc 9.3, cross: gcc 10.2\r\n * Conan version: 1.32.1\r\n * Python version: 3.7.7\r\n\r\n### Steps to reproduce (Include if Applicable)\r\n\r\n1. Create a conan recipe that packages a compiler (sets `C`/`CXX`/...). Let's name it `crosscompiler/0.1`.\r\n2. Create a host profile that adds the package created in step 1 as a build requirement, let's name the profile: `cross`.\r\n3. Create a build profile for your native compiler. Let's name it `native`.\r\n4. Cross build a meson project (using `MesonToolchain` like: `conan create . meson-project/0.1@ -pr:h cross -pr:b native`\r\n\r\nThe generated `conan_meson_cross.ini` won't contain the `properties.c` values, set by `crosscompiler/0.1`.\r\nIf you'd run `CC=abc conan create . meson-project/0.1@ -pr:h cross -pr:b native`, then it would.\r\n\r\n\r\nCC'ing @SSE4 for being the author of `MesonToolchain`.\r\n\r\n",
"number": 8311,
"title": "[bug] MesonToolchain does not pick up CC/CXX from build requirements (in build profile)"
}
] |
80f344c101f892a18104fa12fedc2feeb993fdd1
|
{
"head_commit": "b442e5a8ea04e7d213f29bf981c63a32a10cad67",
"head_commit_message": "- fix env propagation\n\nSigned-off-by: SSE4 <[email protected]>",
"patch_to_review": "diff --git a/conan/tools/meson/toolchain.py b/conan/tools/meson/toolchain.py\nindex 40c7bd8203a..0e0800d6fa6 100644\n--- a/conan/tools/meson/toolchain.py\n+++ b/conan/tools/meson/toolchain.py\n@@ -28,6 +28,8 @@ class MesonToolchain(object):\n {% if pkgconfig %}pkgconfig = {{pkgconfig}}{% endif %}\n \n [built-in options]\n+ preprocessor_definitions = [{% for it, value in preprocessor_definitions.items() -%}\n+ '-D{{ it }}=\"{{ value}}\"'{%- if not loop.last %}, {% endif %}{% endfor %}]\n {% if buildtype %}buildtype = {{buildtype}}{% endif %}\n {% if debug %}debug = {{debug}}{% endif %}\n {% if default_library %}default_library = {{default_library}}{% endif %}\n@@ -35,10 +37,10 @@ class MesonToolchain(object):\n {% if b_ndebug %}b_ndebug = {{b_ndebug}}{% endif %}\n {% if b_staticpic %}b_staticpic = {{b_staticpic}}{% endif %}\n {% if cpp_std %}cpp_std = {{cpp_std}}{% endif %}\n- {% if c_args %}c_args = {{c_args}}{% endif %}\n- {% if c_link_args %}c_link_args = {{c_link_args}}{% endif %}\n- {% if cpp_args %}cpp_args = {{cpp_args}}{% endif %}\n- {% if cpp_link_args %}cpp_link_args = {{cpp_link_args}}{% endif %}\n+ c_args = {{c_args}} + preprocessor_definitions\n+ c_link_args = {{c_link_args}}\n+ cpp_args = {{cpp_args}} + preprocessor_definitions\n+ cpp_link_args = {{cpp_link_args}}\n {% if pkg_config_path %}pkg_config_path = {{pkg_config_path}}{% endif %}\n \"\"\")\n \n@@ -71,7 +73,9 @@ def __init__(self, conanfile, env=os.environ):\n self._shared = self._conanfile.options.get_safe(\"shared\")\n self._fpic = self._conanfile.options.get_safe(\"fPIC\")\n self.definitions = dict()\n- self._env = env\n+ self.preprocessor_definitions = dict()\n+ self._env = env.copy()\n+ self._env.update(self._conanfile.env)\n \n @staticmethod\n def _to_meson_value(value):\n@@ -137,6 +141,10 @@ def _to_meson_cppstd(self, cppstd):\n def _none_if_empty(value):\n return \"'%s'\" % value if value.strip() else None\n \n+ def _env_array(self, name):\n+ import shlex\n+ return shlex.split(self._env.get(name, ''))\n+\n @property\n def _context(self):\n project_options = []\n@@ -171,13 +179,12 @@ def _context(self):\n \"b_ndebug\": self._to_meson_value(self._ndebug) if self._build_type else None,\n # https://mesonbuild.com/Builtin-options.html#compiler-options\n \"cpp_std\": self._to_meson_cppstd(self._cppstd) if self._cppstd else None,\n- \"c_args\": self._none_if_empty(self._env.get(\"CPPFLAGS\", '') +\n- self._env.get(\"CFLAGS\", '')),\n- \"c_link_args\": self._env.get(\"LDFLAGS\", None),\n- \"cpp_args\": self._none_if_empty(self._env.get(\"CPPFLAGS\", '') +\n- self._env.get(\"CXXFLAGS\", '')),\n- \"cpp_link_args\": self._env.get(\"LDFLAGS\", None),\n- \"pkg_config_path\": \"'%s'\" % os.getcwd()\n+ \"c_args\": self._to_meson_value(self._env_array('CPPFLAGS') + self._env_array('CFLAGS')),\n+ \"c_link_args\": self._to_meson_value(self._env_array('LDFLAGS')),\n+ \"cpp_args\": self._to_meson_value(self._env_array('CPPFLAGS') + self._env_array('CXXFLAGS')),\n+ \"cpp_link_args\": self._to_meson_value(self._env_array('LDFLAGS')),\n+ \"pkg_config_path\": \"'%s'\" % os.getcwd(),\n+ \"preprocessor_definitions\": self.preprocessor_definitions\n }\n return context\n \ndiff --git a/conans/test/functional/toolchains/meson/test_android.py b/conans/test/functional/toolchains/meson/test_android.py\nindex a4fbd549728..9b38188f8b2 100644\n--- a/conans/test/functional/toolchains/meson/test_android.py\n+++ b/conans/test/functional/toolchains/meson/test_android.py\n@@ -101,12 +101,14 @@ def env(self):\n ar = 
self._tool('ar')\n cflags = '--target=%s' % self._target\n cxxflags = '--target=%s' % self._target\n+ ldflags = '--target=%s' % self._target\n \n return {'CC': cc,\n 'CXX': cxx,\n 'AR': ar,\n 'CFLAGS': cflags,\n- 'CXXFLAGS': cxxflags}\n+ 'CXXFLAGS': cxxflags,\n+ 'LDFLAGS': ldflags}\n \n def profile(self):\n template = textwrap.dedent(\"\"\"\ndiff --git a/conans/test/functional/toolchains/meson/test_ios.py b/conans/test/functional/toolchains/meson/test_ios.py\nindex a2fcf3fe5fc..5f1bd010fbe 100644\n--- a/conans/test/functional/toolchains/meson/test_ios.py\n+++ b/conans/test/functional/toolchains/meson/test_ios.py\n@@ -71,11 +71,13 @@ def env(self):\n cflags += \" -isysroot \" + self.xcrun.sdk_path\n cflags += \" -arch \" + to_apple_arch(self.arch)\n cxxflags = cflags\n+ ldflags = cflags\n \n return {'CC': cc,\n 'CXX': cxx,\n 'CFLAGS': cflags,\n- 'CXXFLAGS': cxxflags}\n+ 'CXXFLAGS': cxxflags,\n+ 'LDFLAGS': ldflags}\n \n def profile(self):\n template = textwrap.dedent(\"\"\"\ndiff --git a/conans/test/functional/toolchains/meson/test_preprocessor_definitions.py b/conans/test/functional/toolchains/meson/test_preprocessor_definitions.py\nnew file mode 100644\nindex 00000000000..d90f1e0303f\n--- /dev/null\n+++ b/conans/test/functional/toolchains/meson/test_preprocessor_definitions.py\n@@ -0,0 +1,67 @@\n+import os\n+import textwrap\n+\n+from conans.test.assets.sources import gen_function_cpp, gen_function_h\n+from conans.test.functional.toolchains.meson._base import TestMesonBase\n+\n+\n+class MesonPreprocessorDefinitionsTest(TestMesonBase):\n+ _conanfile_py = textwrap.dedent(\"\"\"\n+ from conans import ConanFile, tools\n+ from conan.tools.meson import Meson, MesonToolchain\n+\n+\n+ class App(ConanFile):\n+ settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n+ options = {\"shared\": [True, False], \"fPIC\": [True, False]}\n+ default_options = {\"shared\": False, \"fPIC\": True}\n+\n+ def config_options(self):\n+ if self.settings.os == \"Windows\":\n+ del self.options.fPIC\n+\n+ def generate(self):\n+ tc = MesonToolchain(self)\n+ tc.preprocessor_definitions[\"TEST_DEFINITION1\"] = \"TestPpdValue1\"\n+ tc.preprocessor_definitions[\"TEST_DEFINITION2\"] = \"TestPpdValue2\"\n+ tc.generate()\n+\n+ def build(self):\n+ meson = Meson(self)\n+ meson.configure()\n+ meson.build()\n+ \"\"\")\n+\n+ _meson_build = textwrap.dedent(\"\"\"\n+ project('tutorial', 'cpp')\n+ hello = library('hello', 'hello.cpp')\n+ executable('demo', 'main.cpp', link_with: hello)\n+ \"\"\")\n+\n+ def test_build(self):\n+ hello_h = gen_function_h(name=\"hello\")\n+ hello_cpp = gen_function_cpp(name=\"hello\",\n+ preprocessor=[\"TEST_DEFINITION1\", \"TEST_DEFINITION2\"])\n+ app = gen_function_cpp(name=\"main\", includes=[\"hello\"], calls=[\"hello\"])\n+\n+ self.t.save({\"conanfile.py\": self._conanfile_py,\n+ \"meson.build\": self._meson_build,\n+ \"hello.h\": hello_h,\n+ \"hello.cpp\": hello_cpp,\n+ \"main.cpp\": app})\n+\n+ self.t.run(\"install . %s\" % self._settings_str)\n+\n+ content = self.t.load(\"conan_meson_native.ini\")\n+\n+ self.assertIn(\"[built-in options]\", content)\n+ self.assertIn(\"buildtype = 'release'\", content)\n+\n+ self.t.run(\"build .\")\n+ self.t.run_command(os.path.join(\"build\", \"demo\"))\n+\n+ self.assertIn(\"hello: Release!\", self.t.out)\n+ self.assertIn(\"TEST_DEFINITION1: TestPpdValue1\", self.t.out)\n+ self.assertIn(\"TEST_DEFINITION2: TestPpdValue2\", self.t.out)\n+\n+ self._check_binary()\n"
}
|
[
{
"diff_hunk": "@@ -71,11 +71,13 @@ def env(self):\n cflags += \" -isysroot \" + self.xcrun.sdk_path\n cflags += \" -arch \" + to_apple_arch(self.arch)\n cxxflags = cflags\n+ ldflags = cflags",
"line": null,
"original_line": 74,
"original_start_line": null,
"path": "conans/test/functional/toolchains/meson/test_ios.py",
"start_line": null,
"text": "@user1:\nThis looks wrong on the surface.\n\n@user1:\naccording to @author apparently this is correct\n\n@author:\nanyway, changed the code to be more accurate"
}
] |
303bd4fb383cd7f7dcfbae3d44b162635a9c2183
|
diff --git a/conan/tools/env/environment.py b/conan/tools/env/environment.py
index 76cd69d3825..c1021979eba 100644
--- a/conan/tools/env/environment.py
+++ b/conan/tools/env/environment.py
@@ -236,6 +236,12 @@ def _get_final_value(self, name):
def __getitem__(self, name):
return self._get_final_value(name)
+ def get(self, name, default=None):
+ try:
+ return self._get_final_value(name)
+ except KeyError:
+ return default
+
def keys(self):
return self._values.keys()
diff --git a/conan/tools/meson/toolchain.py b/conan/tools/meson/toolchain.py
index 86841e88ba2..3d2bea54381 100644
--- a/conan/tools/meson/toolchain.py
+++ b/conan/tools/meson/toolchain.py
@@ -1,5 +1,6 @@
import os
+from conan.tools.env import VirtualEnv
from conan.tools.microsoft.toolchain import write_conanvcvars
from conans.client.build.cppstd_flags import cppstd_from_settings
from conans.client.tools.oss import cross_building, get_cross_building_settings
@@ -29,6 +30,8 @@ class MesonToolchain(object):
{% if pkgconfig %}pkgconfig = {{pkgconfig}}{% endif %}
[built-in options]
+ preprocessor_definitions = [{% for it, value in preprocessor_definitions.items() -%}
+ '-D{{ it }}="{{ value}}"'{%- if not loop.last %}, {% endif %}{% endfor %}]
{% if buildtype %}buildtype = {{buildtype}}{% endif %}
{% if debug %}debug = {{debug}}{% endif %}
{% if default_library %}default_library = {{default_library}}{% endif %}
@@ -36,10 +39,10 @@ class MesonToolchain(object):
{% if b_ndebug %}b_ndebug = {{b_ndebug}}{% endif %}
{% if b_staticpic %}b_staticpic = {{b_staticpic}}{% endif %}
{% if cpp_std %}cpp_std = {{cpp_std}}{% endif %}
- {% if c_args %}c_args = {{c_args}}{% endif %}
- {% if c_link_args %}c_link_args = {{c_link_args}}{% endif %}
- {% if cpp_args %}cpp_args = {{cpp_args}}{% endif %}
- {% if cpp_link_args %}cpp_link_args = {{cpp_link_args}}{% endif %}
+ c_args = {{c_args}} + preprocessor_definitions
+ c_link_args = {{c_link_args}}
+ cpp_args = {{cpp_args}} + preprocessor_definitions
+ cpp_link_args = {{cpp_link_args}}
{% if pkg_config_path %}pkg_config_path = {{pkg_config_path}}{% endif %}
""")
@@ -71,8 +74,44 @@ def __init__(self, conanfile, env=os.environ):
self._cppstd = cppstd_from_settings(self._conanfile.settings)
self._shared = self._conanfile.options.get_safe("shared")
self._fpic = self._conanfile.options.get_safe("fPIC")
+ self._build_env = VirtualEnv(self._conanfile).build_environment()
+
self.definitions = dict()
- self._env = env
+ self.preprocessor_definitions = dict()
+
+ def from_build_env(name):
+ return self._to_meson_value(self._build_env.get(name, None))
+
+ self.c = from_build_env("CC")
+ self.cpp = from_build_env("CXX")
+ self.c_ld = from_build_env("CC_LD") or from_build_env("LD")
+ self.cpp_ld = from_build_env("CXX_LD") or from_build_env("LD")
+ self.ar = from_build_env("AR")
+ self.strip = from_build_env("STRIP")
+ self.as_ = from_build_env("AS")
+ self.windres = from_build_env("WINDRES")
+ self.pkgconfig = from_build_env("PKG_CONFIG")
+
+ # https://mesonbuild.com/Builtin-options.html#core-options
+ # Do not adjust "debug" if already adjusted "buildtype"
+ self.buildtype = self._to_meson_build_type(self._build_type) if self._build_type else None
+ self.default_library = self._to_meson_shared(self._shared) \
+ if self._shared is not None else None
+
+ # https://mesonbuild.com/Builtin-options.html#base-options
+ self.b_vscrt = self._to_meson_vscrt(self._vscrt)
+ self.b_staticpic = self._to_meson_value(self._fpic) \
+ if (self._shared is False and self._fpic is not None) else None
+ self.b_ndebug = self._to_meson_value(self._ndebug) if self._build_type else None
+
+ # https://mesonbuild.com/Builtin-options.html#compiler-options
+ self.cpp_std = self._to_meson_cppstd(self._cppstd) if self._cppstd else None
+ self.c_args = self._to_meson_value(self._env_array('CPPFLAGS') + self._env_array('CFLAGS'))
+ self.c_link_args = self._to_meson_value(self._env_array('LDFLAGS'))
+ self.cpp_args = self._to_meson_value(self._env_array('CPPFLAGS') +
+ self._env_array('CXXFLAGS'))
+ self.cpp_link_args = self._to_meson_value(self._env_array('LDFLAGS'))
+ self.pkg_config_path = "'%s'" % self._conanfile.generators_folder
@staticmethod
def _to_meson_value(value):
@@ -138,6 +177,10 @@ def _to_meson_cppstd(self, cppstd):
def _none_if_empty(value):
return "'%s'" % value if value.strip() else None
+ def _env_array(self, name):
+ import shlex
+ return shlex.split(self._build_env.get(name, ''))
+
@property
def _context(self):
project_options = []
@@ -151,34 +194,31 @@ def _context(self):
# https://mesonbuild.com/Builtin-options.html#directories
# TODO : we don't manage paths like libdir here (yet?)
# https://mesonbuild.com/Machine-files.html#binaries
- "c": self._to_meson_value(self._env.get("CC", None)),
- "cpp": self._to_meson_value(self._env.get("CXX", None)),
- "c_ld": self._to_meson_value(self._env.get("LD", None)),
- "cpp_ld": self._to_meson_value(self._env.get("LD", None)),
- "ar": self._to_meson_value(self._env.get("AR", None)),
- "strip": self._to_meson_value(self._env.get("STRIP", None)),
- "as": self._to_meson_value(self._env.get("AS", None)),
- "windres": self._to_meson_value(self._env.get("WINDRES", None)),
- "pkgconfig": self._to_meson_value(self._env.get("PKG_CONFIG", None)),
+ # https://mesonbuild.com/Reference-tables.html#compiler-and-linker-selection-variables
+ "c": self.c,
+ "cpp": self.cpp,
+ "c_ld": self.c_ld,
+ "cpp_ld": self.cpp_ld,
+ "ar": self.ar,
+ "strip": self.strip,
+ "as": self.as_,
+ "windres": self.windres,
+ "pkgconfig": self.pkgconfig,
# https://mesonbuild.com/Builtin-options.html#core-options
- "buildtype": self._to_meson_build_type(self._build_type) if self._build_type else None,
- "debug": self._to_meson_value(self._debug) if self._build_type else None,
- "default_library": self._to_meson_shared(
- self._shared) if self._shared is not None else None,
+ "buildtype": self.buildtype,
+ "default_library": self.default_library,
# https://mesonbuild.com/Builtin-options.html#base-options
- "b_vscrt": self._to_meson_vscrt(self._vscrt),
- "b_staticpic": self._to_meson_value(self._fpic) if (self._shared is False and self._fpic
- is not None) else None,
- "b_ndebug": self._to_meson_value(self._ndebug) if self._build_type else None,
+ "b_vscrt": self.b_vscrt,
+ "b_staticpic": self.b_staticpic,
+ "b_ndebug": self.b_ndebug,
# https://mesonbuild.com/Builtin-options.html#compiler-options
- "cpp_std": self._to_meson_cppstd(self._cppstd) if self._cppstd else None,
- "c_args": self._none_if_empty(self._env.get("CPPFLAGS", '') +
- self._env.get("CFLAGS", '')),
- "c_link_args": self._env.get("LDFLAGS", None),
- "cpp_args": self._none_if_empty(self._env.get("CPPFLAGS", '') +
- self._env.get("CXXFLAGS", '')),
- "cpp_link_args": self._env.get("LDFLAGS", None),
- "pkg_config_path": "'%s'" % os.getcwd()
+ "cpp_std": self.cpp_std,
+ "c_args": self.c_args,
+ "c_link_args": self.c_link_args,
+ "cpp_args": self.cpp_args,
+ "cpp_link_args": self.cpp_link_args,
+ "pkg_config_path": self.pkg_config_path,
+ "preprocessor_definitions": self.preprocessor_definitions
}
return context
diff --git a/conans/test/functional/toolchains/meson/test_android.py b/conans/test/functional/toolchains/meson/test_android.py
index a4fbd549728..9b38188f8b2 100644
--- a/conans/test/functional/toolchains/meson/test_android.py
+++ b/conans/test/functional/toolchains/meson/test_android.py
@@ -101,12 +101,14 @@ def env(self):
ar = self._tool('ar')
cflags = '--target=%s' % self._target
cxxflags = '--target=%s' % self._target
+ ldflags = '--target=%s' % self._target
return {'CC': cc,
'CXX': cxx,
'AR': ar,
'CFLAGS': cflags,
- 'CXXFLAGS': cxxflags}
+ 'CXXFLAGS': cxxflags,
+ 'LDFLAGS': ldflags}
def profile(self):
template = textwrap.dedent("""
diff --git a/conans/test/functional/toolchains/meson/test_ios.py b/conans/test/functional/toolchains/meson/test_ios.py
index a2fcf3fe5fc..ddc9ad34a29 100644
--- a/conans/test/functional/toolchains/meson/test_ios.py
+++ b/conans/test/functional/toolchains/meson/test_ios.py
@@ -1,11 +1,10 @@
import os
import platform
-import pytest
import textwrap
import unittest
-from parameterized import parameterized
import pytest
+from parameterized import parameterized
from conans.client.tools.apple import XCRun, apple_deployment_target_flag, to_apple_arch
from conans.test.assets.sources import gen_function_cpp, gen_function_h
@@ -67,26 +66,27 @@ def env(self):
cc = self.xcrun.cc
cxx = self.xcrun.cxx
- cflags = apple_deployment_target_flag(self.os, self.os_version)
- cflags += " -isysroot " + self.xcrun.sdk_path
- cflags += " -arch " + to_apple_arch(self.arch)
- cxxflags = cflags
+ deployment_flag = apple_deployment_target_flag(self.os, self.os_version)
+ sysroot_flag = " -isysroot " + self.xcrun.sdk_path
+ arch_flag = " -arch " + to_apple_arch(self.arch)
+ flags = deployment_flag + sysroot_flag + arch_flag
return {'CC': cc,
'CXX': cxx,
- 'CFLAGS': cflags,
- 'CXXFLAGS': cxxflags}
+ 'CFLAGS': flags,
+ 'CXXFLAGS': flags,
+ 'LDFLAGS': flags}
def profile(self):
template = textwrap.dedent("""
include(default)
[settings]
{settings}
- [env]
+ [buildenv]
{env}
""")
settings = '\n'.join(["%s = %s" % (s[0], s[1]) for s in self.settings()])
- env = '\n'.join(["%s = %s" % (k, v) for k, v in self.env().items()])
+ env = '\n'.join(["%s=%s" % (k, v) for k, v in self.env().items()])
return template.format(settings=settings, env=env)
@parameterized.expand([('armv8', 'iOS', '10.0', 'iphoneos'),
diff --git a/conans/test/functional/toolchains/meson/test_meson_build_require.py b/conans/test/functional/toolchains/meson/test_meson_build_require.py
new file mode 100644
index 00000000000..4f73d8f83cc
--- /dev/null
+++ b/conans/test/functional/toolchains/meson/test_meson_build_require.py
@@ -0,0 +1,53 @@
+import pytest
+
+from conans.test.assets.genconanfile import GenConanfile
+from conans.test.functional.toolchains.meson._base import get_meson_version
+from conans.test.utils.tools import TestClient
+
[email protected]
[email protected]_meson
[email protected](get_meson_version() < "0.56.0", reason="requires meson >= 0.56.0")
+def test_env_vars_from_build_require():
+ br = str(GenConanfile().with_name("hello_compiler").with_version("1.0").with_import("import os"))
+ br += """
+ def package_info(self):
+ {}
+ """
+ vars = ["CC", "CC_LD", "CXX", "CXX_LD", "AR", "STRIP", "AS", "WINDRES", "PKG_CONFIG", "LD"]
+ lines = "\n ".join(['self.buildenv_info.define("{var}", "{var}_VALUE")'.format(var=var)
+ for var in vars])
+ cf = br.format(lines)
+
+ client = TestClient()
+ client.save({"conanfile.py": cf})
+ client.run("create .")
+
+ conanfile = GenConanfile().with_settings("os", "arch", "compiler", "build_type")\
+ .with_name("consumer").with_version("1.0").with_generator("MesonToolchain")\
+ .with_build_requirement("hello_compiler/1.0")
+ client.save({"conanfile.py": conanfile})
+ client.run("install . -pr:h=default -pr:b=default")
+ content = client.load("conan_meson_native.ini")
+ assert "c = 'CC_VALUE'" in content
+ assert "cpp = 'CXX_VALUE'" in content
+ assert "c_ld = 'CC_LD_VALUE'" in content
+ assert "cpp_ld = 'CXX_LD_VALUE'" in content
+ assert "ar = 'AR_VALUE'" in content
+ assert "strip = 'STRIP_VALUE'" in content
+ assert "as = 'AS_VALUE'" in content
+ assert "windres = 'WINDRES_VALUE'" in content
+ assert "pkgconfig = 'PKG_CONFIG_VALUE'" in content
+
+ # Now change the build require to declare only LD
+ lines = '\n self.buildenv_info.define("LD", "LD_VALUE")'
+ cf = br.format(lines)
+ client = TestClient()
+ client.save({"conanfile.py": cf})
+ client.run("create .")
+
+ # Create the consumer again, now the LD env var will be applied
+ client.save({"conanfile.py": conanfile})
+ client.run("install . -pr:h=default -pr:b=default")
+ content = client.load("conan_meson_native.ini")
+ assert "c_ld = 'LD_VALUE'" in content
+ assert "cpp_ld = 'LD_VALUE'" in content
diff --git a/conans/test/functional/toolchains/meson/test_preprocessor_definitions.py b/conans/test/functional/toolchains/meson/test_preprocessor_definitions.py
new file mode 100644
index 00000000000..d90f1e0303f
--- /dev/null
+++ b/conans/test/functional/toolchains/meson/test_preprocessor_definitions.py
@@ -0,0 +1,67 @@
+import os
+import textwrap
+
+from conans.test.assets.sources import gen_function_cpp, gen_function_h
+from conans.test.functional.toolchains.meson._base import TestMesonBase
+
+
+class MesonPreprocessorDefinitionsTest(TestMesonBase):
+ _conanfile_py = textwrap.dedent("""
+ from conans import ConanFile, tools
+ from conan.tools.meson import Meson, MesonToolchain
+
+
+ class App(ConanFile):
+ settings = "os", "arch", "compiler", "build_type"
+ options = {"shared": [True, False], "fPIC": [True, False]}
+ default_options = {"shared": False, "fPIC": True}
+
+ def config_options(self):
+ if self.settings.os == "Windows":
+ del self.options.fPIC
+
+ def generate(self):
+ tc = MesonToolchain(self)
+ tc.preprocessor_definitions["TEST_DEFINITION1"] = "TestPpdValue1"
+ tc.preprocessor_definitions["TEST_DEFINITION2"] = "TestPpdValue2"
+ tc.generate()
+
+ def build(self):
+ meson = Meson(self)
+ meson.configure()
+ meson.build()
+ """)
+
+ _meson_build = textwrap.dedent("""
+ project('tutorial', 'cpp')
+ hello = library('hello', 'hello.cpp')
+ executable('demo', 'main.cpp', link_with: hello)
+ """)
+
+ def test_build(self):
+ hello_h = gen_function_h(name="hello")
+ hello_cpp = gen_function_cpp(name="hello",
+ preprocessor=["TEST_DEFINITION1", "TEST_DEFINITION2"])
+ app = gen_function_cpp(name="main", includes=["hello"], calls=["hello"])
+
+ self.t.save({"conanfile.py": self._conanfile_py,
+ "meson.build": self._meson_build,
+ "hello.h": hello_h,
+ "hello.cpp": hello_cpp,
+ "main.cpp": app})
+
+ self.t.run("install . %s" % self._settings_str)
+
+ content = self.t.load("conan_meson_native.ini")
+
+ self.assertIn("[built-in options]", content)
+ self.assertIn("buildtype = 'release'", content)
+
+ self.t.run("build .")
+ self.t.run_command(os.path.join("build", "demo"))
+
+ self.assertIn("hello: Release!", self.t.out)
+ self.assertIn("TEST_DEFINITION1: TestPpdValue1", self.t.out)
+ self.assertIn("TEST_DEFINITION2: TestPpdValue2", self.t.out)
+
+ self._check_binary()
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-7840@920c2f3
|
conan-io/conan
|
Python
| 7,840
|
Feature/6207 config single file
|
Changelog: Feature: Allow ``conan config install`` of a single file
Docs: https://github.com/conan-io/docs/pull/1908
Close https://github.com/conan-io/conan/issues/6207
* Add the possibility to specify a single configuration file for the command `conan config install`
* Copy only the configuration files in the directory for the command `conan config install dirname`
Question:
* For what purpose do we need to copy the files when the directory is passed as a parameter?
- [X] Refer to the issue that supports this Pull Request.
- [ ] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.
- [x] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [ ] I've followed the PEP8 style guides for Python code.
- [ ] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.
|
2020-10-07T11:59:31Z
|
[feature] conan config install a single file
When using `conan config install`, allow a single configuration file to be installed.
Normally, the command requires a git repository, a local folder, or a zip file that contains the file(s) to be installed. If I have a single file I want installed, I would have to put it into a folder or zip it into a file.
The current result:
```
conan config install my_settings\settings.yml
Traceback (most recent call last):
File "C:\Users\Kyle.Kaja\AppData\Roaming\Python\Python37\site-packages\conans\client\command.py", line 1859, in run
method(args[0][1:])
File "C:\Users\Kyle.Kaja\AppData\Roaming\Python\Python37\site-packages\conans\client\command.py", line 554, in config
target_folder=args.target_folder)
File "C:\Users\Kyle.Kaja\AppData\Roaming\Python\Python37\site-packages\conans\client\conan_api.py", line 78, in wrapper
return f(*args, **kwargs)
File "C:\Users\Kyle.Kaja\AppData\Roaming\Python\Python37\site-packages\conans\client\conan_api.py", line 611, in config_install
source_folder=source_folder, target_folder=target_folder)
File "C:\Users\Kyle.Kaja\AppData\Roaming\Python\Python37\site-packages\conans\client\conf\config_installer.py", line 235, in configuration_install
_process_config(config, cache, output, requester)
File "C:\Users\Kyle.Kaja\AppData\Roaming\Python\Python37\site-packages\conans\client\conf\config_installer.py", line 190, in _process_config
_process_zip_file(config, config.uri, cache, output, tmp_folder)
File "C:\Users\Kyle.Kaja\AppData\Roaming\Python\Python37\site-packages\conans\client\conf\config_installer.py", line 63, in _process_zip_file
unzip(zippath, tmp_folder, output=output)
File "C:\Users\Kyle.Kaja\AppData\Roaming\Python\Python37\site-packages\conans\client\tools\files.py", line 104, in unzip
with zipfile.ZipFile(filename, "r") as z:
File "C:\Program Files\Python37\lib\zipfile.py", line 1225, in __init__
self._RealGetContents()
File "C:\Program Files\Python37\lib\zipfile.py", line 1292, in _RealGetContents
raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file
ERROR: File is not a zip file
```
This feature is useful in the case where you only want to install specific files from a single folder.
For example,
```
my_settings\
settings.yml
remotes.txt
conan config install my_settings (Installs settings.yml and remotes.txt)
conan config install my_settings\settings.yml (Installs settings.yml and ignores remotes.txt)
```
It is also useful when the current folder structure does not facilitate creating a folder specifically for configuration files (e.g. a git repository).
```
my_git_repo\
settings.yml
unrelated.txt
...
conan config install .\
Installing settings.yml
Copying file unrelated.txt to C:\Users\Kyle.Kaja\.conan\. (I don't want this installed here!!)
...
OR
mkdir temp_settings
copy settings.yml temp_settings
conan config install temp_settings
Installing settings.yml
rmdir temp_settings /s (Remove the directory so Git doesn't try to track that new file.)
(So much to do when all I want to do is install a single configuration file.)
PREFERRED
conan config install settings.yml (Ahh! Simple, short, and intuitive!)
Installing settings.yml
```
|
Hi @kkaja123
I think this can make sense and shouldn't be very difficult to implement. The only thing is that our backlog is already totally full, so I will label it accordingly; let's see if this feature gets contributed by the community. Thanks for the suggestion.
Hi,
I would like to contribute to this feature.
Hi @sagarafr
Great! Assigned to you. Don't hesitate to ask if you have any questions. Good luck.
|
[
{
"body": "When using `conan config install`, allow a single configuration file to be installed. \r\n\r\nNormally, the command requires a git repository, a local folder, or a zip file that contains the file(s) to be installed. If I have a single file I want installed, I would have to put it into a folder or zip it into a file.\r\n\r\nThe current result:\r\n```\r\nconan config install my_settings\\settings.yml\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Kyle.Kaja\\AppData\\Roaming\\Python\\Python37\\site-packages\\conans\\client\\command.py\", line 1859, in run\r\n method(args[0][1:])\r\n File \"C:\\Users\\Kyle.Kaja\\AppData\\Roaming\\Python\\Python37\\site-packages\\conans\\client\\command.py\", line 554, in config\r\n target_folder=args.target_folder)\r\n File \"C:\\Users\\Kyle.Kaja\\AppData\\Roaming\\Python\\Python37\\site-packages\\conans\\client\\conan_api.py\", line 78, in wrapper\r\n return f(*args, **kwargs)\r\n File \"C:\\Users\\Kyle.Kaja\\AppData\\Roaming\\Python\\Python37\\site-packages\\conans\\client\\conan_api.py\", line 611, in config_install\r\n source_folder=source_folder, target_folder=target_folder)\r\n File \"C:\\Users\\Kyle.Kaja\\AppData\\Roaming\\Python\\Python37\\site-packages\\conans\\client\\conf\\config_installer.py\", line 235, in configuration_install\r\n _process_config(config, cache, output, requester)\r\n File \"C:\\Users\\Kyle.Kaja\\AppData\\Roaming\\Python\\Python37\\site-packages\\conans\\client\\conf\\config_installer.py\", line 190, in _process_config\r\n _process_zip_file(config, config.uri, cache, output, tmp_folder)\r\n File \"C:\\Users\\Kyle.Kaja\\AppData\\Roaming\\Python\\Python37\\site-packages\\conans\\client\\conf\\config_installer.py\", line 63, in _process_zip_file\r\n unzip(zippath, tmp_folder, output=output)\r\n File \"C:\\Users\\Kyle.Kaja\\AppData\\Roaming\\Python\\Python37\\site-packages\\conans\\client\\tools\\files.py\", line 104, in unzip\r\n with zipfile.ZipFile(filename, \"r\") as z:\r\n File \"C:\\Program Files\\Python37\\lib\\zipfile.py\", line 1225, in __init__\r\n self._RealGetContents()\r\n File \"C:\\Program Files\\Python37\\lib\\zipfile.py\", line 1292, in _RealGetContents\r\n raise BadZipFile(\"File is not a zip file\")\r\nzipfile.BadZipFile: File is not a zip file\r\n\r\nERROR: File is not a zip file\r\n```\r\n\r\nThis feature is useful in the case where you only want to install specific files from a single folder.\r\nFor example,\r\n```\r\nmy_settings\\\r\n settings.yml\r\n remotes.txt\r\n\r\nconan config install my_settings (Installs settings.yml and remotes.txt)\r\nconan config install my_settings\\settings.yml (Installs settings.yml and ignores remotes.txt)\r\n```\r\n\r\nIt is also useful when the current folder structure does not facilitate creating a folder specifically for configuration files (e.g. a git repository).\r\n```\r\nmy_git_repo\\\r\n settings.yml\r\n unrelated.txt\r\n ...\r\n\r\nconan config install .\\\r\nInstalling settings.yml\r\nCopying file unrelated.txt to C:\\Users\\Kyle.Kaja\\.conan\\. (I don't want this installed here!!)\r\n...\r\n\r\nOR\r\n\r\nmkdir temp_settings\r\ncopy settings.yml temp_settings\r\nconan config install temp_settings\r\nInstalling settings.yml\r\nrmdir temp_settings /s (Remove the directory so Git doesn't try to track that new file.)\r\n(So much to do when all I want to do is install a single configuration file.)\r\n\r\nPREFERRED\r\n\r\nconan config install settings.yml (Ahh! Simple, short, and intuitive!)\r\nInstalling settings.yml\r\n```",
"number": 6207,
"title": "[feature] conan config install a single file"
}
] |
013c1aea5f5f953fb6dc36f9cee38ce82c7be73a
|
{
"head_commit": "920c2f3458bd55b5ab14ceb8dd5b69b1859a34c0",
"head_commit_message": "chore(conf): Keep custom files install and specify compressed files\n\n* Keep custom files installation\n* Test if the uri is a compressed file and use decompression\n configuration function\n* Test if the uri is a regular file and use file configuration function",
"patch_to_review": "diff --git a/conans/client/conf/config_installer.py b/conans/client/conf/config_installer.py\nindex 72ea8f0c7e7..9e06cfcdf4f 100644\n--- a/conans/client/conf/config_installer.py\n+++ b/conans/client/conf/config_installer.py\n@@ -12,7 +12,7 @@\n from conans.client import tools\n from conans.client.cache.remote_registry import load_registry_txt, migrate_registry_file\n from conans.client.tools import Git\n-from conans.client.tools.files import unzip\n+from conans.client.tools.files import unzip, is_compressed_file\n from conans.errors import ConanException\n from conans.util.files import mkdir, rmdir, walk, save, touch, remove\n from conans.client.cache.cache import ClientCache\n@@ -86,6 +86,43 @@ def _filecopy(src, filename, dst):\n shutil.copyfile(src, dst)\n \n \n+def _process_file(directory, filename, config, cache, output, folder):\n+ if filename == \"settings.yml\":\n+ output.info(\"Installing settings.yml\")\n+ _filecopy(directory, filename, cache.cache_folder)\n+ elif filename == \"conan.conf\":\n+ output.info(\"Processing conan.conf\")\n+ _handle_conan_conf(cache.config, os.path.join(directory, filename))\n+ elif filename == \"remotes.txt\":\n+ output.info(\"Defining remotes from remotes.txt\")\n+ _handle_remotes(cache, os.path.join(directory, filename))\n+ elif filename in (\"registry.txt\", \"registry.json\"):\n+ try:\n+ os.remove(cache.remotes_path)\n+ except OSError:\n+ pass\n+ finally:\n+ _filecopy(directory, filename, cache.cache_folder)\n+ migrate_registry_file(cache, output)\n+ elif filename == \"remotes.json\":\n+ # Fix for Conan 2.0\n+ raise ConanException(\"remotes.json install is not supported yet. Use 'remotes.txt'\")\n+ else:\n+ # This is ugly, should be removed in Conan 2.0\n+ if filename in (\"README.md\", \"LICENSE.txt\"):\n+ output.info(\"Skip %s\" % filename)\n+ else:\n+ relpath = os.path.relpath(directory, folder)\n+ if config.target_folder:\n+ target_folder = os.path.join(cache.cache_folder, config.target_folder,\n+ relpath)\n+ else:\n+ target_folder = os.path.join(cache.cache_folder, relpath)\n+ mkdir(target_folder)\n+ output.info(\"Copying file %s to %s\" % (filename, target_folder))\n+ _filecopy(directory, filename, target_folder)\n+\n+\n def _process_folder(config, folder, cache, output):\n if not os.path.isdir(folder):\n raise ConanException(\"No such directory: '%s'\" % str(folder))\n@@ -96,40 +133,7 @@ def _process_folder(config, folder, cache, output):\n if \".git\" in root:\n continue\n for f in files:\n- if f == \"settings.yml\":\n- output.info(\"Installing settings.yml\")\n- _filecopy(root, f, cache.cache_folder)\n- elif f == \"conan.conf\":\n- output.info(\"Processing conan.conf\")\n- _handle_conan_conf(cache.config, os.path.join(root, f))\n- elif f == \"remotes.txt\":\n- output.info(\"Defining remotes from remotes.txt\")\n- _handle_remotes(cache, os.path.join(root, f))\n- elif f in (\"registry.txt\", \"registry.json\"):\n- try:\n- os.remove(cache.remotes_path)\n- except OSError:\n- pass\n- finally:\n- _filecopy(root, f, cache.cache_folder)\n- migrate_registry_file(cache, output)\n- elif f == \"remotes.json\":\n- # Fix for Conan 2.0\n- raise ConanException(\"remotes.json install is not supported yet. 
Use 'remotes.txt'\")\n- else:\n- # This is ugly, should be removed in Conan 2.0\n- if root == folder and f in (\"README.md\", \"LICENSE.txt\"):\n- output.info(\"Skip %s\" % f)\n- continue\n- relpath = os.path.relpath(root, folder)\n- if config.target_folder:\n- target_folder = os.path.join(cache.cache_folder, config.target_folder,\n- relpath)\n- else:\n- target_folder = os.path.join(cache.cache_folder, relpath)\n- mkdir(target_folder)\n- output.info(\"Copying file %s to %s\" % (f, target_folder))\n- _filecopy(root, f, target_folder)\n+ _process_file(root, f, config, cache, output, folder)\n \n \n def _process_download(config, cache, output, requester):\n@@ -179,6 +183,8 @@ def from_item(uri, config_type, verify_ssl, args, source_folder, target_folder):\n config.type = \"git\"\n elif os.path.isdir(uri):\n config.type = \"dir\"\n+ elif is_compressed_file(uri):\n+ config.type = \"compressed\"\n elif os.path.isfile(uri):\n config.type = \"file\"\n elif uri.startswith(\"http\"):\n@@ -201,9 +207,12 @@ def _process_config(config, cache, output, requester):\n _process_git_repo(config, cache, output)\n elif config.type == \"dir\":\n _process_folder(config, config.uri, cache, output)\n- elif config.type == \"file\":\n+ elif config.type == \"compressed\":\n with tmp_config_install_folder(cache) as tmp_folder:\n _process_zip_file(config, config.uri, cache, output, tmp_folder)\n+ elif config.type == \"file\":\n+ dirname, filename = os.path.dirname(config.uri), os.path.basename(config.uri)\n+ _process_file(dirname, filename, config, cache, output, dirname)\n elif config.type == \"url\":\n _process_download(config, cache, output, requester=requester)\n else:\ndiff --git a/conans/client/tools/files.py b/conans/client/tools/files.py\nindex 01f0a327c7e..cbcd3b78b6e 100644\n--- a/conans/client/tools/files.py\n+++ b/conans/client/tools/files.py\n@@ -53,6 +53,19 @@ def human_size(size_bytes):\n return \"%s%s\" % (formatted_size, suffix)\n \n \n+def is_compressed_file(filename):\n+ import zipfile\n+ import tarfile\n+ import binascii\n+ if zipfile.is_zipfile(filename) or tarfile.is_tarfile(filename):\n+ return True\n+ # test gzip magic number\n+ with open(filename, 'rb') as fd:\n+ if binascii.hexlify(fd.read(2)) == b'1f8b':\n+ return True\n+ return False\n+\n+\n def unzip(filename, destination=\".\", keep_permissions=False, pattern=None, output=None):\n \"\"\"\n Unzip a zipped file\n"
}
|
[
{
"diff_hunk": "@@ -201,9 +207,12 @@ def _process_config(config, cache, output, requester):\n _process_git_repo(config, cache, output)\n elif config.type == \"dir\":\n _process_folder(config, config.uri, cache, output)\n- elif config.type == \"file\":\n+ elif config.type == \"compressed\":\n with tmp_config_install_folder(cache) as tmp_folder:\n _process_zip_file(config, config.uri, cache, output, tmp_folder)\n+ elif config.type == \"file\":",
"line": 208,
"original_line": 213,
"original_start_line": null,
"path": "conans/client/conf/config_installer.py",
"start_line": null,
"text": "@user1:\nHi @author\r\n\r\nI am having a look, and there is a problem: we cannot change existing behavior unless it is declared a bug. At the moment the type for a zipped file, the type is \"file\", and that cannot be changed. It is not good enough to introduce a ``compressed`` type for differentiating this. The current ``file`` behavior has to be respected and maintained.\r\n\r\nSo I think it would be better to try to process files capturing the exception when the decompression fails.\n\n@author:\nIndeed, I understand my mistake when I have push. I just try to find the right place to make the check in order to have a cleaner code and have all tests pass."
}
] |
10a598b6f95b9f50247f32da9a48478859c010da
|
diff --git a/conans/client/conf/config_installer.py b/conans/client/conf/config_installer.py
index 72ea8f0c7e7..51b709271e1 100644
--- a/conans/client/conf/config_installer.py
+++ b/conans/client/conf/config_installer.py
@@ -12,7 +12,7 @@
from conans.client import tools
from conans.client.cache.remote_registry import load_registry_txt, migrate_registry_file
from conans.client.tools import Git
-from conans.client.tools.files import unzip
+from conans.client.tools.files import unzip, is_compressed_file
from conans.errors import ConanException
from conans.util.files import mkdir, rmdir, walk, save, touch, remove
from conans.client.cache.cache import ClientCache
@@ -86,6 +86,43 @@ def _filecopy(src, filename, dst):
shutil.copyfile(src, dst)
+def _process_file(directory, filename, config, cache, output, folder):
+ if filename == "settings.yml":
+ output.info("Installing settings.yml")
+ _filecopy(directory, filename, cache.cache_folder)
+ elif filename == "conan.conf":
+ output.info("Processing conan.conf")
+ _handle_conan_conf(cache.config, os.path.join(directory, filename))
+ elif filename == "remotes.txt":
+ output.info("Defining remotes from remotes.txt")
+ _handle_remotes(cache, os.path.join(directory, filename))
+ elif filename in ("registry.txt", "registry.json"):
+ try:
+ os.remove(cache.remotes_path)
+ except OSError:
+ pass
+ finally:
+ _filecopy(directory, filename, cache.cache_folder)
+ migrate_registry_file(cache, output)
+ elif filename == "remotes.json":
+ # Fix for Conan 2.0
+ raise ConanException("remotes.json install is not supported yet. Use 'remotes.txt'")
+ else:
+ # This is ugly, should be removed in Conan 2.0
+ if filename in ("README.md", "LICENSE.txt"):
+ output.info("Skip %s" % filename)
+ else:
+ relpath = os.path.relpath(directory, folder)
+ if config.target_folder:
+ target_folder = os.path.join(cache.cache_folder, config.target_folder,
+ relpath)
+ else:
+ target_folder = os.path.join(cache.cache_folder, relpath)
+ mkdir(target_folder)
+ output.info("Copying file %s to %s" % (filename, target_folder))
+ _filecopy(directory, filename, target_folder)
+
+
def _process_folder(config, folder, cache, output):
if not os.path.isdir(folder):
raise ConanException("No such directory: '%s'" % str(folder))
@@ -96,40 +133,7 @@ def _process_folder(config, folder, cache, output):
if ".git" in root:
continue
for f in files:
- if f == "settings.yml":
- output.info("Installing settings.yml")
- _filecopy(root, f, cache.cache_folder)
- elif f == "conan.conf":
- output.info("Processing conan.conf")
- _handle_conan_conf(cache.config, os.path.join(root, f))
- elif f == "remotes.txt":
- output.info("Defining remotes from remotes.txt")
- _handle_remotes(cache, os.path.join(root, f))
- elif f in ("registry.txt", "registry.json"):
- try:
- os.remove(cache.remotes_path)
- except OSError:
- pass
- finally:
- _filecopy(root, f, cache.cache_folder)
- migrate_registry_file(cache, output)
- elif f == "remotes.json":
- # Fix for Conan 2.0
- raise ConanException("remotes.json install is not supported yet. Use 'remotes.txt'")
- else:
- # This is ugly, should be removed in Conan 2.0
- if root == folder and f in ("README.md", "LICENSE.txt"):
- output.info("Skip %s" % f)
- continue
- relpath = os.path.relpath(root, folder)
- if config.target_folder:
- target_folder = os.path.join(cache.cache_folder, config.target_folder,
- relpath)
- else:
- target_folder = os.path.join(cache.cache_folder, relpath)
- mkdir(target_folder)
- output.info("Copying file %s to %s" % (f, target_folder))
- _filecopy(root, f, target_folder)
+ _process_file(root, f, config, cache, output, folder)
def _process_download(config, cache, output, requester):
@@ -202,8 +206,12 @@ def _process_config(config, cache, output, requester):
elif config.type == "dir":
_process_folder(config, config.uri, cache, output)
elif config.type == "file":
- with tmp_config_install_folder(cache) as tmp_folder:
- _process_zip_file(config, config.uri, cache, output, tmp_folder)
+ if is_compressed_file(config.uri):
+ with tmp_config_install_folder(cache) as tmp_folder:
+ _process_zip_file(config, config.uri, cache, output, tmp_folder)
+ else:
+ dirname, filename = os.path.dirname(config.uri), os.path.basename(config.uri)
+ _process_file(dirname, filename, config, cache, output, dirname)
elif config.type == "url":
_process_download(config, cache, output, requester=requester)
else:
diff --git a/conans/client/tools/files.py b/conans/client/tools/files.py
index 01f0a327c7e..878158a9cec 100644
--- a/conans/client/tools/files.py
+++ b/conans/client/tools/files.py
@@ -53,6 +53,19 @@ def human_size(size_bytes):
return "%s%s" % (formatted_size, suffix)
+def is_compressed_file(filename):
+ import zipfile
+ import tarfile
+ import binascii
+ # test gzip magic number
+ with open(filename, 'rb') as fd:
+ if binascii.hexlify(fd.read(2)) == b'1f8b':
+ return True
+ if zipfile.is_zipfile(filename) or tarfile.is_tarfile(filename):
+ return True
+ return False
+
+
def unzip(filename, destination=".", keep_permissions=False, pattern=None, output=None):
"""
Unzip a zipped file
diff --git a/conans/test/functional/command/config_install_test.py b/conans/test/functional/command/config_install_test.py
index 1effcbf91de..935c9fcd583 100644
--- a/conans/test/functional/command/config_install_test.py
+++ b/conans/test/functional/command/config_install_test.py
@@ -142,7 +142,7 @@ def _create_zip(self, zippath=None):
def _check(self, params):
typ, uri, verify, args = [p.strip() for p in params.split(",")]
configs = json.loads(load(self.client.cache.config_install_file))
- config = _ConfigOrigin(configs[0])
+ config = _ConfigOrigin(configs[-1])
self.assertEqual(config.type, typ)
self.assertEqual(config.uri, uri)
self.assertEqual(str(config.verify_ssl), verify)
@@ -214,6 +214,31 @@ def test_install_file_test(self):
self._check("file, %s, True, None" % zippath)
self.assertTrue(os.path.exists(zippath))
+ def test_install_config_file_test(self):
+ """ should install from a settings and remotes file in configuration directory
+ """
+ import tempfile
+ profile_folder = self._create_profile_folder()
+ self.assertTrue(os.path.isdir(profile_folder))
+ src_setting_file = os.path.join(profile_folder, "settings.yml")
+ src_remote_file = os.path.join(profile_folder, "remotes.txt")
+
+ # Install profile_folder without settings.yml + remotes.txt in order to install them manually
+ tmp_dir = tempfile.mkdtemp()
+ dest_setting_file = os.path.join(tmp_dir, "settings.yml")
+ dest_remote_file = os.path.join(tmp_dir, "remotes.txt")
+ shutil.move(src_setting_file, dest_setting_file)
+ shutil.move(src_remote_file, dest_remote_file)
+ self.client.run('config install "%s"' % profile_folder)
+ shutil.move(dest_setting_file, src_setting_file)
+ shutil.move(dest_remote_file, src_remote_file)
+ shutil.rmtree(tmp_dir)
+
+ for cmd_option in ["", "--type=file"]:
+ self.client.run('config install "%s" %s' % (src_setting_file, cmd_option))
+ self.client.run('config install "%s" %s' % (src_remote_file, cmd_option))
+ self._check("file, %s, True, None" % src_remote_file)
+
def test_install_dir_test(self):
""" should install from a dir in current dir
"""
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
}
|
conan-io__conan-7781@b2c84dd
|
conan-io/conan
|
Python
| 7,781
|
[bugfix] multiple remotes with single conan upload
|
Changelog: Bugfix: Fixed bug where uploading to multiple remotes in a single conan upload command would fail.
Docs: omit
Fixes #7780
This bug was due to closing and reusing a thread pool in a loop. Instead, a separate thread pool will be created at each loop iteration.
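A rough sketch of the per-iteration pattern described above; the function and variable names are illustrative, not the actual `CmdUploader` code:
```python
from multiprocessing.pool import ThreadPool

def upload_all(refs_by_remote, upload_ref, parallel_upload):
    # Build a fresh pool for every remote, so a pool that was already
    # closed in a previous iteration is never reused.
    for remote, refs in refs_by_remote.items():
        pool = ThreadPool(8 if parallel_upload else 1)
        pool.map(upload_ref, refs)
        pool.close()
        pool.join()
```
As the review discussion below points out, `map()` blocks until each batch finishes, so keeping a single pool alive across all remotes and closing it once after the loop works just as well.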
- [x] Refer to the issue that supports this Pull Request.
- [x] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.
- [x] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've followed the PEP8 style guides for Python code.
- [ ] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.
<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>
|
2020-09-30T22:26:13Z
|
[bug] Upload fails with multiple remotes in the same command
When uploading several packages at once (without specifying a remote) and some packages are uploaded to different remotes, the command will fail.
The bug appears to be due to reusing a closed thread pool in the loop in CmdUploader.upload() in uploader.py. Once thread pools are closed they can't be reused, so any iterations after the first will cause an error.
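A minimal standalone sketch of that failure mode, using only `multiprocessing.pool` (not the uploader code itself):
```python
from multiprocessing.pool import ThreadPool

pool = ThreadPool(2)
pool.map(print, ["first", "remote"])
pool.close()
pool.join()

# Reusing the closed pool, as the upload loop effectively did for the
# second remote, fails with "ValueError: Pool not running" on Python 3.
pool.map(print, ["second", "remote"])
```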
### Environment Details (include every applicable attribute)
* Operating System+version: RHEL 7.6
* Conan version: Tested with 1.29.0 and 1.30.dev0 (develop)
* Python version: 3.8.5
### Steps to reproduce (Include if Applicable)
1. Set up two remotes
2. Create 2 recipes and add_ref one recipe to each remote
3. Upload using `conan upload "*" --all`
### Logs (Executed commands with output) (Include/Attach if Applicable)
I can't get the logs off my machine, but the stack trace looks like this:
command.py:2103
command.py:1477
conan_api.py:94
conan_api.py:965
uploader.py:109
pool.py:364
pool.py:473
pool.py:350
ValueError: Pool not running
|
[
{
"body": "When uploading several packages at once (without specifying a remote) and some packages are uploaded to different remotes, the command will fail.\r\n\r\nThe bug appears to be due to reusing a closed thread pool in the loop in CmdUploader.upload() in uploader.py. Once thread pools are closed they can't be reused, so any iterations after the first will cause an error.\r\n\r\n### Environment Details (include every applicable attribute)\r\n * Operating System+version: RHEL 7.6\r\n * Conan version: Tested with 1.29.0 and 1.30.dev0 (develop)\r\n * Python version: 3.8.5\r\n\r\n### Steps to reproduce (Include if Applicable)\r\n1. Set up two remotes\r\n2. Create 2 recipes and add_ref one recipe to each remote\r\n4. Upload using `conan upload \"*\" --all`\r\n\r\n### Logs (Executed commands with output) (Include/Attach if Applicable)\r\nI can't get the logs off my machine, but the stack trace looks like this:\r\ncommand.py:2103\r\ncommand.py:1477\r\nconan_api.py:94\r\nconan_api.py:965\r\nuploader.py:109\r\npool.py:364\r\npool.py:473\r\npool.py:350\r\nValueError: Pool not running\r\n",
"number": 7780,
"title": "[bug] Upload fails with multiple remotes in the same command"
}
] |
f767fa175a7708ba82f5816362a43ece8ff8fb0a
|
{
"head_commit": "b2c84ddd6ffaf3b4d5aaf4cf3a4cd0470f4f651a",
"head_commit_message": "Fixed bug where uploading to multiple remotes in a single conan upload command would fail",
"patch_to_review": "diff --git a/conans/client/cmd/uploader.py b/conans/client/cmd/uploader.py\nindex f47631f397b..2e7e877f47f 100644\n--- a/conans/client/cmd/uploader.py\n+++ b/conans/client/cmd/uploader.py\n@@ -201,12 +201,11 @@ def upload(self, reference_or_pattern, remotes, upload_recorder, package_id=None\n all_packages, query)\n \n if parallel_upload:\n- self._upload_thread_pool = ThreadPool(8)\n self._user_io.disable_input()\n- else:\n- self._upload_thread_pool = ThreadPool(1)\n \n for remote, refs in refs_by_remote.items():\n+ self._upload_thread_pool = ThreadPool(8 if parallel_upload else 1)\n+\n self._output.info(\"Uploading to remote '{}':\".format(remote.name))\n \n def upload_ref(ref_conanfile_prefs):\ndiff --git a/conans/test/functional/remote/multi_remote_checks_test.py b/conans/test/functional/remote/multi_remote_checks_test.py\nindex 85e2f36e334..41eb04787f8 100644\n--- a/conans/test/functional/remote/multi_remote_checks_test.py\n+++ b/conans/test/functional/remote/multi_remote_checks_test.py\n@@ -79,6 +79,22 @@ def package_info(self):\n client.run(\"remote list_ref\")\n self.assertIn(\"Pkg/0.1@lasote/testing: server3\", client.out)\n \n+ def test_multiple_remotes_single_upload(self):\n+ servers = OrderedDict([(\"server1\", TestServer()),\n+ (\"server2\", TestServer())])\n+ client = TestClient(servers=servers, users={\"server1\": [(\"lasote\", \"mypass\")],\n+ \"server2\": [(\"lasote\", \"mypass\")]})\n+ conanfile = \"\"\"from conans import ConanFile\n+class Pkg(ConanFile):\n+ settings = \"build_type\"\n+ \"\"\"\n+ client.save({\"conanfile.py\": conanfile})\n+ client.run(\"create . Pkg/0.1@lasote/testing -s build_type=Release\")\n+ client.run(\"create . Pkg2/0.1@lasote/testing -s build_type=Release\")\n+ client.run(\"remote add_ref Pkg/0.1@lasote/testing server1\")\n+ client.run(\"remote add_ref Pkg2/0.1@lasote/testing server2\")\n+ client.run(\"upload Pkg* --all --confirm\")\n+\n def test_binary_packages_mixed(self):\n servers = OrderedDict([(\"server1\", TestServer()),\n (\"server2\", TestServer()),\n"
}
|
[
{
"diff_hunk": "@@ -201,12 +201,11 @@ def upload(self, reference_or_pattern, remotes, upload_recorder, package_id=None\n all_packages, query)\n \n if parallel_upload:\n- self._upload_thread_pool = ThreadPool(8)\n self._user_io.disable_input()\n- else:\n- self._upload_thread_pool = ThreadPool(1)\n \n for remote, refs in refs_by_remote.items():\n+ self._upload_thread_pool = ThreadPool(8 if parallel_upload else 1)",
"line": null,
"original_line": 207,
"original_start_line": null,
"path": "conans/client/cmd/uploader.py",
"start_line": null,
"text": "@author:\nMaybe this should use tools.cpu_count()?\n\n@user1:\nThis doesn't look right either, as the ``self._upload_thread_pool`` variable will be overwritten, losing access to the other pool from the previous remote.\n\n@author:\nEach iteration of this loop will complete before the next starts, since map() will block, then it will close the thread pool and start the next iteration.\n\n@author:\nWait, we can definitely reuse the thread pool here. Keeping it alive the whole time makes a lot more sense."
}
] |
d80fd6a0bf9a8b7f69c14185169720ec27a6122a
|
diff --git a/conans/client/cmd/uploader.py b/conans/client/cmd/uploader.py
index f47631f397b..24c662a91e8 100644
--- a/conans/client/cmd/uploader.py
+++ b/conans/client/cmd/uploader.py
@@ -21,6 +21,7 @@
gzopen_without_timestamps, set_dirty_context_manager)
from conans.util.log import logger
from conans.util.tracer import log_recipe_upload, log_compressed_files, log_package_upload
+from conans.tools import cpu_count
UPLOAD_POLICY_FORCE = "force-upload"
@@ -201,12 +202,12 @@ def upload(self, reference_or_pattern, remotes, upload_recorder, package_id=None
all_packages, query)
if parallel_upload:
- self._upload_thread_pool = ThreadPool(8)
self._user_io.disable_input()
- else:
- self._upload_thread_pool = ThreadPool(1)
+ self._upload_thread_pool = ThreadPool(
+ cpu_count() if parallel_upload else 1)
for remote, refs in refs_by_remote.items():
+
self._output.info("Uploading to remote '{}':".format(remote.name))
def upload_ref(ref_conanfile_prefs):
@@ -221,17 +222,18 @@ def upload_ref(ref_conanfile_prefs):
self._upload_thread_pool.map(upload_ref,
[(ref, conanfile, prefs) for (ref, conanfile, prefs) in
refs])
- self._upload_thread_pool.close()
- self._upload_thread_pool.join()
-
- if len(self._exceptions_list) > 0:
- for exc, ref, trace in self._exceptions_list:
- t = "recipe" if isinstance(ref, ConanFileReference) else "package"
- msg = "%s: Upload %s to '%s' failed: %s\n" % (str(ref), t, remote.name, str(exc))
- if get_env("CONAN_VERBOSE_TRACEBACK", False):
- msg += trace
- self._output.error(msg)
- raise ConanException("Errors uploading some packages")
+
+ self._upload_thread_pool.close()
+ self._upload_thread_pool.join()
+
+ if len(self._exceptions_list) > 0:
+ for exc, ref, trace in self._exceptions_list:
+ t = "recipe" if isinstance(ref, ConanFileReference) else "package"
+ msg = "%s: Upload %s to '%s' failed: %s\n" % (str(ref), t, remote.name, str(exc))
+ if get_env("CONAN_VERBOSE_TRACEBACK", False):
+ msg += trace
+ self._output.error(msg)
+ raise ConanException("Errors uploading some packages")
logger.debug("UPLOAD: Time manager upload: %f" % (time.time() - t1))
diff --git a/conans/test/functional/remote/multi_remote_checks_test.py b/conans/test/functional/remote/multi_remote_checks_test.py
index 85e2f36e334..41eb04787f8 100644
--- a/conans/test/functional/remote/multi_remote_checks_test.py
+++ b/conans/test/functional/remote/multi_remote_checks_test.py
@@ -79,6 +79,22 @@ def package_info(self):
client.run("remote list_ref")
self.assertIn("Pkg/0.1@lasote/testing: server3", client.out)
+ def test_multiple_remotes_single_upload(self):
+ servers = OrderedDict([("server1", TestServer()),
+ ("server2", TestServer())])
+ client = TestClient(servers=servers, users={"server1": [("lasote", "mypass")],
+ "server2": [("lasote", "mypass")]})
+ conanfile = """from conans import ConanFile
+class Pkg(ConanFile):
+ settings = "build_type"
+ """
+ client.save({"conanfile.py": conanfile})
+ client.run("create . Pkg/0.1@lasote/testing -s build_type=Release")
+ client.run("create . Pkg2/0.1@lasote/testing -s build_type=Release")
+ client.run("remote add_ref Pkg/0.1@lasote/testing server1")
+ client.run("remote add_ref Pkg2/0.1@lasote/testing server2")
+ client.run("upload Pkg* --all --confirm")
+
def test_binary_packages_mixed(self):
servers = OrderedDict([("server1", TestServer()),
("server2", TestServer()),
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
|
conan-io__conan-7763@bd556b9
|
conan-io/conan
|
Python
| 7,763
|
Feature/lockfiles partial
|
Changelog: Fix: Removed the check that a lockfile computed from another lockfile must overlap with it. Users can check the resulting lockfile themselves if they want to.
Docs: https://github.com/conan-io/docs/pull/1868
Fix https://github.com/conan-io/conan/issues/7739
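Since the automatic overlap check is removed, users who want that safety net can verify the resulting lockfile themselves. A minimal sketch, mirroring the string checks the new tests perform (the file name and reference are illustrative):
```python
# Hypothetical file name and reference; use the lockfile written with --lockfile-out.
with open("libb.lock") as f:
    lockfile_text = f.read()

if "LibA/1.0" not in lockfile_text:
    raise SystemExit("LibA/1.0 was not locked; consider creating a fresh lockfile")
```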
|
2020-09-28T14:17:57Z
|
[bug][question] Conan lockfile behavior change between conan 1.26 and conan 1.29.
We used conan.lock files from a CI build for a big set of components where all component versions were synchronized. Later, developers used these lockfiles to build new versions of one of those components in the context of that lockfile locally, so they were not affected by new versions of upstreams that some other developer might have created in the meantime.
Simple example:
CA/1.0.0 <- CB/1.0.0 (read CB uses CA)
This combination was built in a CI job and the lockfile was generated. Then someone created a CA/2.0.0 package.
Now another developer works on a CB/2.0.0 package but still wants to work in the context of the last successful CI build, so he uses the lockfile and creates CB/2.0.0 locally on his machine, still using CA/1.0.0 (which was locked in the CI lockfile).
This scenario used to work perfectly with conan 1.26, but fails with conan 1.29 with the error message:
`ERROR: Couldn't find 'CB/2.0.0@local/testing' in lockfile`.
We need an alternative for the old behavior with the new conan version, but are at a loss how to do that, as lockfiles have become quite restrictive about building new versions of packages that were not previously locked.
I have attached a simple python script, that reproduces the case explained above. The script works with conan 1.26, but fails with the above error message with conan 1.29.
(Of course, our real-world scenario involves a complete tree of sometimes dozens of upstream dependencies with potentially several version conflicts, not just one upstream package. That is why we need a way to fix the local environment to a known-good state until we push the new version of CB to the CI, which then builds a new consistent state of all components.)
|
Here is the attachment:
[use_segment_build_lockfile_for_local_component_builds.py.gz](https://github.com/conan-io/conan/files/5262083/use_segment_build_lockfile_for_local_component_builds.py.gz)
Hi @fourbft
The lockfiles have not become more restrictive, but their checks are more thorough and complete.
It is true that now, if you capture a lockfile and it locks ``CB/1.0.0``, you will not be able to change that, because it is locked. This was a bug fix: if the lockfile contained other packages that depended on ``CB/1.0.0``, changing it would automatically violate the locked version.
The new lockfiles should allow this flow, but in a more controlled way. I have hit a limitation of the commands to allow it, but the logic is almost there. I will be providing a pull request with this flow implemented and will update here.
> I will be providing a pull request with this flow implemented and update here.
Thanks @memsharded
Apart from the workflow described above we also have implemented the workflow described in the former conan docs here:
https://docs.conan.io/en/1.26/versioning/lockfiles.html
in the chapter "How to use lockfiles in CI". Especially this step:
> Now we can safely create the new version of pkga/0.2, that will resolve to use pkgz/0.1 instead of the latest 0.2, if we use the lockfile:
> cd pkga && conan create . pkga/0.2@user/testing --lockfile=../release
> \# lockfile in release/conan.lock is modified to contain pkga/0.2
This workflow is different from the one above in that it modifies the lockfile during the "conan create". It would be great if such a workflow were possible again, too.
|
[
{
"body": "We used conan.lock files from a CI build for a big set of comonents where all component versions were synchronized. Later developers used these lockfiles to build new versions of one of those components in the context of that lockfile locally, so they were not affected by new versions of upstreams that some other developer might have created in the meantime.\r\n\r\nSimple example:\r\nCA/1.0.0 <- CB/1.0.0 (read CB uses CA)\r\n\r\nThis combination was built in a CI job and the lockfile was generated. Then someone created a CA/2.0.0 package.\r\nNow another developer works on a CB/2.0.0 package, but still wants to work in the context of the last successfull CI build, so he uses the lockfile and creates a CB/2.0.0 locally on his machine, but still wants to use CA/1.0.0 (which was locked in the CI lockfile).\r\n\r\nThis scenario used to work perfectly with conan 1.26, but fails with conan 1.29 with the error message: \r\n`ERROR: Couldn't find 'CB/2.0.0@local/testing' in lockfile`.\r\n\r\nWe need an alternative for the old behavior with the new conan version, but are at a loss how to do that, as lockfiles have become quite restrictive about building new versions of packages that were not previously locked.\r\n\r\nI have attached a simple python script, that reproduces the case explained above. The script works with conan 1.26, but fails with the above error message with conan 1.29.\r\n\r\n(Of course, our real word scenario involves a complete tree of sometimes dozens of upstream dependencies with potentially several version conflicts, not just one upstream package. That is why we need a way to fix the local environment to a known-good state until we push the new version of CB to the CI which then builds a new consistent state of all components.)",
"number": 7739,
"title": "[bug][question] Conan lockfile behavior change between conan 1.26 and conan 1.29. "
}
] |
25419158ada66ee0119092ffbdc5067153d19272
|
{
"head_commit": "bd556b94660c15dfc0afc8c143703fcf2871d5c7",
"head_commit_message": "removing overlap/contained check",
"patch_to_review": "diff --git a/conans/client/conan_api.py b/conans/client/conan_api.py\nindex ca387242949..e4ac5a514d1 100644\n--- a/conans/client/conan_api.py\n+++ b/conans/client/conan_api.py\n@@ -1317,6 +1317,7 @@ def lock_create(self, path, lockfile_out,\n if path and reference:\n raise ConanException(\"Both path and reference arguments were provided. Please provide \"\n \"only one of them\")\n+\n if path:\n ref_or_path = _make_abs_path(path, cwd)\n if not os.path.isfile(ref_or_path):\n@@ -1362,8 +1363,6 @@ def lock_create(self, path, lockfile_out,\n graph_lock_file = GraphLockFile(phost, pbuild, graph_lock)\n if lockfile:\n new_graph_lock = GraphLock(deps_graph, self.app.config.revisions_enabled)\n- # check if the lockfile provided was used or not\n- new_graph_lock.check_contained(graph_lock)\n graph_lock_file = GraphLockFile(phost, pbuild, new_graph_lock)\n if base:\n graph_lock_file.only_recipes()\ndiff --git a/conans/model/graph_lock.py b/conans/model/graph_lock.py\nindex e303b26f384..b30a126cad8 100644\n--- a/conans/model/graph_lock.py\n+++ b/conans/model/graph_lock.py\n@@ -474,15 +474,6 @@ def update_lock(self, new_lock):\n if current.prev is None:\n current.prev = node.prev\n \n- def check_contained(self, other):\n- \"\"\" if lock create is provided a lockfile, it should be used, and it should contain it\n- otherwise, it was useless to pass it, and it is dangerous to continue, recommended to\n- create a fresh lockfile\"\"\"\n- other_root_id = other.root_node_id()\n- if other_root_id not in self._nodes:\n- raise ConanException(\"The provided lockfile was not used, there is no overlap. You \"\n- \"might want to create a fresh lockfile\")\n-\n def pre_lock_node(self, node):\n if node.recipe == RECIPE_VIRTUAL:\n return\ndiff --git a/conans/test/functional/graph_lock/dynamic_test.py b/conans/test/functional/graph_lock/dynamic_test.py\nindex 286ba4d9e32..e41d815aaae 100644\n--- a/conans/test/functional/graph_lock/dynamic_test.py\n+++ b/conans/test/functional/graph_lock/dynamic_test.py\n@@ -128,8 +128,10 @@ def partial_lock_root_unused_test(self):\n self.assertIn(\"Couldn't find 'LibC/1.0' in lockfile\", client.out)\n \n client.run(\"lock create conanfile.py --name=LibC --version=1.0 --lockfile=libb.lock \"\n- \"--lockfile-out=libc.lock\", assert_error=True)\n- self.assertIn(\"ERROR: The provided lockfile was not used, there is no overlap.\", client.out)\n+ \"--lockfile-out=libc.lock\")\n+ # Users can validate themselves if relevant package is in the lockfile or not\n+ libc_lock = client.load(\"libc.lock\")\n+ self.assertNotIn(\"LibB/1.0\", libc_lock)\n \n def remove_dep_test(self):\n client = TestClient()\n@@ -254,3 +256,35 @@ def augment_test_package_requires(self):\n else:\n self.assertEqual(dep[\"ref\"], \"dep/0.1\")\n self.assertEqual(dep[\"prev\"], \"0\")\n+\n+ def partial_intermediate_package_lock_test(self):\n+ client = TestClient()\n+ client.save({\"conanfile.py\": GenConanfile()})\n+ client.run(\"create . LibA/1.0@\")\n+ client.save({\"conanfile.py\": GenConanfile().with_require(\"LibA/[>=1.0]\")})\n+ client.run(\"create . LibB/1.0@\")\n+ client.save({\"conanfile.py\": GenConanfile().with_require(\"LibB/[>=1.0]\")})\n+ client.run(\"create . LibC/1.0@\")\n+ client.run(\"lock create --reference=LibC/1.0 --lockfile-out=libc.lock\")\n+\n+ # New version of LibA/1.0.1, that should never be used\n+ client.save({\"conanfile.py\": GenConanfile()})\n+ client.run(\"create . 
LibA/1.0.1@\")\n+\n+ # Go back to B, we want to develop but keep depending on LibA/1.0.0\n+ client.save({\"conanfile.py\": GenConanfile().with_require(\"LibA/[>=1.0]\")})\n+ client.run(\"create . LibB/1.1@ --lockfile=libc.lock\", assert_error=True)\n+ self.assertIn(\"Couldn't find 'LibB/1.1' in lockfile\", client.out)\n+\n+ client.run(\"lock create conanfile.py --name=LibB --version=1.1 --lockfile=libc.lock \"\n+ \"--lockfile-out=libb.lock\")\n+ self.assertIn(\"LibA/1.0 from local cache\", client.out)\n+ self.assertNotIn(\"LibA/1.0.1\", client.out)\n+ libb_lock = client.load(\"libb.lock\")\n+ self.assertIn(\"LibA/1.0\", libb_lock)\n+ self.assertNotIn(\"LibA/1.0.1\", libb_lock)\n+\n+ client.run(\"create . LibB/1.1@\")\n+ self.assertIn(\"LibA/1.0.1 from local cache - Cache\", client.out)\n+ client.run(\"create . LibB/1.1@ --lockfile=libb.lock\")\n+ self.assertIn(\"LibA/1.0 from local cache - Cache\", client.out)\ndiff --git a/conans/test/functional/graph_lock/graph_lock_ci_test.py b/conans/test/functional/graph_lock/graph_lock_ci_test.py\nindex 84ea55dbe8c..4f0617cbd06 100644\n--- a/conans/test/functional/graph_lock/graph_lock_ci_test.py\n+++ b/conans/test/functional/graph_lock/graph_lock_ci_test.py\n@@ -526,11 +526,13 @@ def test_version_ranges_partial_unused(self):\n self.assertNotIn(\"pyreq/0.2\", client.out)\n \n # Go back to main orchestrator\n- # This should fail, as PkgB/0.2 is not involved in the new resolution\n+ # This should fail, as PkgB/1.0 is not involved in the new resolution\n client.run(\"lock create --reference=PkgD/0.1@user/channel \"\n- \"--lockfile=buildb.lock --lockfile-out=conan.lock\", assert_error=True)\n- self.assertIn(\"ERROR: The provided lockfile was not used, there is no overlap\",\n- client.out)\n+ \"--lockfile=buildb.lock --lockfile-out=error.lock\")\n+ # User can perfectly go and check the resulting lockfile and check if PkgB/0.1 is there\n+ # We can probably help automate this with a \"conan lock find\" subcommand\n+ error_lock = client.load(\"error.lock\")\n+ self.assertNotIn(\"PkgB/1.0@user/channel\", error_lock)\n \n client.run(\"lock build-order conan.lock --json=build_order.json\")\n json_file = client.load(\"build_order.json\")\n"
}
|
[
{
"diff_hunk": "@@ -254,3 +256,35 @@ def augment_test_package_requires(self):\n else:\n self.assertEqual(dep[\"ref\"], \"dep/0.1\")\n self.assertEqual(dep[\"prev\"], \"0\")\n+\n+ def partial_intermediate_package_lock_test(self):\n+ client = TestClient()\n+ client.save({\"conanfile.py\": GenConanfile()})\n+ client.run(\"create . LibA/1.0@\")\n+ client.save({\"conanfile.py\": GenConanfile().with_require(\"LibA/[>=1.0]\")})\n+ client.run(\"create . LibB/1.0@\")\n+ client.save({\"conanfile.py\": GenConanfile().with_require(\"LibB/[>=1.0]\")})\n+ client.run(\"create . LibC/1.0@\")\n+ client.run(\"lock create --reference=LibC/1.0 --lockfile-out=libc.lock\")\n+\n+ # New version of LibA/1.0.1, that should never be used\n+ client.save({\"conanfile.py\": GenConanfile()})\n+ client.run(\"create . LibA/1.0.1@\")\n+\n+ # Go back to B, we want to develop but keep depending on LibA/1.0.0",
"line": null,
"original_line": 274,
"original_start_line": null,
"path": "conans/test/functional/graph_lock/dynamic_test.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n # Go back to B, we want to develop but keep depending on LibA/1.0\r\n```"
}
] |
e41c4730196263d8f8113f96f191c0eaf984d705
|
diff --git a/conans/client/conan_api.py b/conans/client/conan_api.py
index d9f2f89be4d..e5bf26fb67b 100644
--- a/conans/client/conan_api.py
+++ b/conans/client/conan_api.py
@@ -1321,6 +1321,7 @@ def lock_create(self, path, lockfile_out,
if path and reference:
raise ConanException("Both path and reference arguments were provided. Please provide "
"only one of them")
+
if path:
ref_or_path = _make_abs_path(path, cwd)
if not os.path.isfile(ref_or_path):
@@ -1366,8 +1367,6 @@ def lock_create(self, path, lockfile_out,
graph_lock_file = GraphLockFile(phost, pbuild, graph_lock)
if lockfile:
new_graph_lock = GraphLock(deps_graph, self.app.config.revisions_enabled)
- # check if the lockfile provided was used or not
- new_graph_lock.check_contained(graph_lock)
graph_lock_file = GraphLockFile(phost, pbuild, new_graph_lock)
if base:
graph_lock_file.only_recipes()
diff --git a/conans/model/graph_lock.py b/conans/model/graph_lock.py
index be5b0e0ee29..0fc13af832f 100644
--- a/conans/model/graph_lock.py
+++ b/conans/model/graph_lock.py
@@ -478,15 +478,6 @@ def update_lock(self, new_lock):
if current.prev is None:
current.prev = node.prev
- def check_contained(self, other):
- """ if lock create is provided a lockfile, it should be used, and it should contain it
- otherwise, it was useless to pass it, and it is dangerous to continue, recommended to
- create a fresh lockfile"""
- other_root_id = other.root_node_id()
- if other_root_id not in self._nodes:
- raise ConanException("The provided lockfile was not used, there is no overlap. You "
- "might want to create a fresh lockfile")
-
def pre_lock_node(self, node):
if node.recipe == RECIPE_VIRTUAL:
return
diff --git a/conans/test/functional/graph_lock/dynamic_test.py b/conans/test/functional/graph_lock/dynamic_test.py
index 0c8d4ec4553..361da4935c6 100644
--- a/conans/test/functional/graph_lock/dynamic_test.py
+++ b/conans/test/functional/graph_lock/dynamic_test.py
@@ -129,8 +129,10 @@ def partial_lock_root_unused_test(self):
self.assertIn("Couldn't find 'LibC/1.0' in lockfile", client.out)
client.run("lock create conanfile.py --name=LibC --version=1.0 --lockfile=libb.lock "
- "--lockfile-out=libc.lock", assert_error=True)
- self.assertIn("ERROR: The provided lockfile was not used, there is no overlap.", client.out)
+ "--lockfile-out=libc.lock")
+ # Users can validate themselves if relevant package is in the lockfile or not
+ libc_lock = client.load("libc.lock")
+ self.assertNotIn("LibB/1.0", libc_lock)
def remove_dep_test(self):
client = TestClient()
@@ -256,6 +258,38 @@ def augment_test_package_requires(self):
self.assertEqual(dep["ref"], "dep/0.1")
self.assertEqual(dep["prev"], "0")
+ def partial_intermediate_package_lock_test(self):
+ client = TestClient()
+ client.save({"conanfile.py": GenConanfile()})
+ client.run("create . LibA/1.0@")
+ client.save({"conanfile.py": GenConanfile().with_require("LibA/[>=1.0]")})
+ client.run("create . LibB/1.0@")
+ client.save({"conanfile.py": GenConanfile().with_require("LibB/[>=1.0]")})
+ client.run("create . LibC/1.0@")
+ client.run("lock create --reference=LibC/1.0 --lockfile-out=libc.lock")
+
+ # New version of LibA/1.0.1, that should never be used
+ client.save({"conanfile.py": GenConanfile()})
+ client.run("create . LibA/1.0.1@")
+
+ # Go back to B, we want to develop but keep depending on LibA/1.0
+ client.save({"conanfile.py": GenConanfile().with_require("LibA/[>=1.0]")})
+ client.run("create . LibB/1.1@ --lockfile=libc.lock", assert_error=True)
+ self.assertIn("Couldn't find 'LibB/1.1' in lockfile", client.out)
+
+ client.run("lock create conanfile.py --name=LibB --version=1.1 --lockfile=libc.lock "
+ "--lockfile-out=libb.lock")
+ self.assertIn("LibA/1.0 from local cache", client.out)
+ self.assertNotIn("LibA/1.0.1", client.out)
+ libb_lock = client.load("libb.lock")
+ self.assertIn("LibA/1.0", libb_lock)
+ self.assertNotIn("LibA/1.0.1", libb_lock)
+
+ client.run("create . LibB/1.1@")
+ self.assertIn("LibA/1.0.1 from local cache - Cache", client.out)
+ client.run("create . LibB/1.1@ --lockfile=libb.lock")
+ self.assertIn("LibA/1.0 from local cache - Cache", client.out)
+
class PartialOptionsTest(unittest.TestCase):
"""
diff --git a/conans/test/functional/graph_lock/graph_lock_ci_test.py b/conans/test/functional/graph_lock/graph_lock_ci_test.py
index 84ea55dbe8c..4f0617cbd06 100644
--- a/conans/test/functional/graph_lock/graph_lock_ci_test.py
+++ b/conans/test/functional/graph_lock/graph_lock_ci_test.py
@@ -526,11 +526,13 @@ def test_version_ranges_partial_unused(self):
self.assertNotIn("pyreq/0.2", client.out)
# Go back to main orchestrator
- # This should fail, as PkgB/0.2 is not involved in the new resolution
+ # This should fail, as PkgB/1.0 is not involved in the new resolution
client.run("lock create --reference=PkgD/0.1@user/channel "
- "--lockfile=buildb.lock --lockfile-out=conan.lock", assert_error=True)
- self.assertIn("ERROR: The provided lockfile was not used, there is no overlap",
- client.out)
+ "--lockfile=buildb.lock --lockfile-out=error.lock")
+ # User can perfectly go and check the resulting lockfile and check if PkgB/0.1 is there
+ # We can probably help automate this with a "conan lock find" subcommand
+ error_lock = client.load("error.lock")
+ self.assertNotIn("PkgB/1.0@user/channel", error_lock)
client.run("lock build-order conan.lock --json=build_order.json")
json_file = client.load("build_order.json")
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-7577@c51a88b
|
conan-io/conan
|
Python
| 7,577
|
Handle unknown statement while parsing profile. Fixes #6931
|
Changelog: Bugfix: Provide a more descriptive error when an unknown statement is added to a profile
Docs: omit
Closes: #6931
Tested with the example provided in the [examples repository](https://github.com/conan-io/examples/tree/master/features/emscripten)
I replaced the first line:
```
include(default)
```
By:
```
includes(default)
```
When running `conan` before this change, the command output is:
```sh
✗ ./build.sh
+ conan remove 'conan-hello-emscripten/*' -f
WARN: No package recipe matches 'conan-hello-emscripten/*'
+ conan create . conan/testing -pr emscripten.profile --build missing
ERROR: Error reading 'emscripten.profile' profile: Error parsing the profile text file: not enough values to unpack (expected 2, got 1)
```
After this change:
```sh
✗ ./build.sh
+ conan remove 'conan-hello-emscripten/*' -f
WARN: No package recipe matches 'conan-hello-emscripten/*'
+ conan create . conan/testing -pr emscripten.profile --build missing
ERROR: Error reading 'emscripten.profile' profile: Error while parsing line 0: 'includes(default)'
```
|
2020-08-22T19:40:15Z
|
[feature] Confusing error message when typo in profile
I had a typo in my profile file:
```
includes(cmake_3_16)
[build_requires]
doxygen/1.8.15@company/stable
```
Where it should have been `include` instead of `includes`.
When running Conan, I got the following error message:
```
ERROR: Error reading 'documentation' profile: Error parsing the profile text file: not enough values to unpack (expected 2, got 1)
```
At least I knew that something was wrong with the profile, but the error message was a bit misleading.
A more helpful message could have been:
```
ERROR: Error reading 'documentation' profile: Error parsing the profile text file: includes is not a valid keyword
```
or something similar.
### Environment Details (include every applicable attribute)
* Conan version: 1.24.1
|
Thanks for reporting it. I think it should be easy to identify this error and provide a better message. Thanks!
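A minimal sketch of the kind of guard that produces such a message, assuming a helper that walks the profile text line by line (`parse_profile_lines` and its exact behaviour are illustrative assumptions, not Conan's real parser API — the actual fix lands in `ProfileParser`, see the diffs below):
```python
from conans.errors import ConanException

def parse_profile_lines(text):
    # Illustrative helper: yield key/value pairs and fail with the offending line
    # instead of leaking a bare "not enough values to unpack" ValueError to the user.
    for counter, line in enumerate(text.splitlines()):
        line = line.strip()
        if not line or line.startswith("#") or line.startswith("["):
            # blank lines, comments and section headers are handled elsewhere
            continue
        if line.startswith("include("):
            # real includes are collected; the typo "includes(" falls through below
            continue
        try:
            name, value = line.split("=", 1)
        except ValueError:
            raise ConanException("Error while parsing line %i: '%s'" % (counter, line))
        yield name.strip(), value.strip()
```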
|
[
{
"body": "I had a typo in my profile file:\r\n\r\n```\r\nincludes(cmake_3_16)\r\n\r\n[build_requires]\r\ndoxygen/1.8.15@company/stable\r\n```\r\n\r\nWhere it should have been `include` instead of `includes`. \r\nWhen running Conan, I got the following error message:\r\n\r\n```\r\nERROR: Error reading 'documentation' profile: Error parsing the profile text file: not enough values to unpack (expected 2, got 1)\r\n```\r\n\r\nAt least I knew that something is wrong with the profile, but the error message was a bit misleading\r\nA more helpful message could have been:\r\n\r\n```\r\nERROR: Error reading 'documentation' profile: Error parsing the profile text file: includes is not a valid keyword\r\n```\r\nor something similar.\r\n\r\n### Environment Details (include every applicable attribute)\r\n * Conan version: 1.24.1\r\n",
"number": 6931,
"title": "[feature] Confusing error message when typo in profile"
}
] |
47928cbcd3db1ff05798c5f36fc900412f9fa7c9
|
{
"head_commit": "c51a88bc1d5306bb52affa189eb015c04faccfc3",
"head_commit_message": "Handle unknown statement while parsing profile. Fixes #6931",
"patch_to_review": "diff --git a/conans/client/profile_loader.py b/conans/client/profile_loader.py\nindex 431dbad2965..21c8193eb26 100644\n--- a/conans/client/profile_loader.py\n+++ b/conans/client/profile_loader.py\n@@ -1,7 +1,7 @@\n import os\n from collections import OrderedDict, defaultdict\n \n-from conans.errors import ConanException, ConanV2Exception\n+from conans.errors import ConanException, ConanV2Exception, ConanParsingError\n from conans.model.env_info import EnvValues, unquote\n from conans.model.options import OptionsValues\n from conans.model.profile import Profile\n@@ -38,7 +38,10 @@ def __init__(self, text):\n include = include[:-1]\n self.includes.append(include)\n else:\n- name, value = line.split(\"=\", 1)\n+ try:\n+ name, value = line.split(\"=\", 1)\n+ except ValueError as error:\n+ raise ConanParsingError(\"Error while parsing line %i: '%s'\" % (counter, line))\n name = name.strip()\n if \" \" in name:\n raise ConanException(\"The names of the variables cannot contain spaces\")\ndiff --git a/conans/errors.py b/conans/errors.py\nindex 7df260b82f0..10517669ce7 100644\n--- a/conans/errors.py\n+++ b/conans/errors.py\n@@ -136,6 +136,10 @@ class ConanConnectionError(ConanException):\n pass\n \n \n+class ConanParsingError(ConanException):\n+ pass\n+\n+\n class ConanOutdatedClient(ConanException):\n pass\n \n"
}
|
[
{
"diff_hunk": "@@ -38,7 +38,10 @@ def __init__(self, text):\n include = include[:-1]\n self.includes.append(include)\n else:\n- name, value = line.split(\"=\", 1)\n+ try:\n+ name, value = line.split(\"=\", 1)\n+ except ValueError as error:\n+ raise ConanParsingError(\"Error while parsing line %i: '%s'\" % (counter, line))",
"line": null,
"original_line": 44,
"original_start_line": null,
"path": "conans/client/profile_loader.py",
"start_line": null,
"text": "@user1:\nLet's throw a generic ``ConanException``. New exceptions will be introduced from now on, only if it is going to be captured and processed somewhere else. Let's keep it simple."
}
] |
97ac4e5fc1e6a7709c4ec37ec100fb1f1ab941d5
|
diff --git a/conans/client/profile_loader.py b/conans/client/profile_loader.py
index 431dbad2965..2b12567c777 100644
--- a/conans/client/profile_loader.py
+++ b/conans/client/profile_loader.py
@@ -38,7 +38,10 @@ def __init__(self, text):
include = include[:-1]
self.includes.append(include)
else:
- name, value = line.split("=", 1)
+ try:
+ name, value = line.split("=", 1)
+ except ValueError as error:
+ raise ConanException("Error while parsing line %i: '%s'" % (counter, line))
name = name.strip()
if " " in name:
raise ConanException("The names of the variables cannot contain spaces")
diff --git a/conans/test/functional/command/install/install_test.py b/conans/test/functional/command/install/install_test.py
index bd41c945578..a2a0fb681bd 100644
--- a/conans/test/functional/command/install/install_test.py
+++ b/conans/test/functional/command/install/install_test.py
@@ -488,7 +488,7 @@ def requirements(self):
client.run("install . -pr=myotherprofile")
self.assertIn("PKGOS=FreeBSD", client.out)
client.run("install . -pr=./myotherprofile", assert_error=True)
- self.assertIn("Error parsing the profile", client.out)
+ self.assertIn("Error while parsing line 0", client.out)
def install_with_path_errors_test(self):
client = TestClient()
diff --git a/conans/test/unittests/client/profile_loader/profile_loader_test.py b/conans/test/unittests/client/profile_loader/profile_loader_test.py
index 40b2a166ac4..8eab2d9b5f0 100644
--- a/conans/test/unittests/client/profile_loader/profile_loader_test.py
+++ b/conans/test/unittests/client/profile_loader/profile_loader_test.py
@@ -66,6 +66,17 @@ def test_parser(self):
os=thing""")
+ txt = """
+includes(a/path/to\profile.txt)
+"""
+ with self.assertRaises(ConanException):
+ try:
+ ProfileParser(txt)
+ except Exception as error:
+ self.assertIn("Error while parsing line 1", error.args[0])
+ raise
+
+
class ProfileTest(unittest.TestCase):
def profile_loads_test(self):
|
{
"difficulty": "low",
"estimated_review_effort": 2,
"problem_domain": "New Feature Additions"
}
|
conan-io__conan-7855@4b53630
|
conan-io/conan
|
Python
| 7,855
|
[poc] CMake + iOS
|
Changelog: Feature: Add POC on a toolchain for iOS (using CMake XCode generator).
Docs: https://github.com/conan-io/docs/pull/1906
Only working with the Xcode generator for the moment: https://cmake.org/cmake/help/v3.18/manual/cmake-toolchains.7.html#cross-compiling-for-ios-tvos-or-watchos
Close https://github.com/conan-io/conan/issues/7810
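For context, the CMake manual linked above boils down to defining a handful of cache variables. The snippet below is only a hedged sketch of the template context such a toolchain generator might render: the variable names come from the CMake docs, while the concrete values and the `render_cmake_assignments` helper are assumptions for an armv8 device build, not this PR's API.
```python
# Sketch only: the cache variables an iOS cross-build needs, expressed as the kind
# of context a CMakeToolchain-like class could render into conan_toolchain.cmake.
ios_toolchain_vars = {
    "CMAKE_SYSTEM_NAME": "iOS",          # cross-compiling for iOS
    "CMAKE_SYSTEM_VERSION": "12.0",      # from the os.version setting (assumed)
    "CMAKE_OSX_ARCHITECTURES": "arm64",  # Conan arch 'armv8' in Apple naming
    "CMAKE_OSX_SYSROOT": "iphoneos",     # SDK name; simulator builds use 'iphonesimulator'
}

def render_cmake_assignments(variables):
    # Turn the mapping into set() lines for the generated toolchain file
    return "\n".join('set({} "{}")'.format(key, value) for key, value in variables.items())

print(render_cmake_assignments(ios_toolchain_vars))
```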
|
2020-10-09T10:49:22Z
|
[feature] CMakeToolchain + iOS projects PoC
Implement a proof of concept of using the new CMakeToolchain feature for building iOS projects
Notes:
- Toolchain can be assumed installed
- Focused on creating/building 1 package, both local flow (conan install + native cmake) and ``conan create`` flow.
- Integration test.
- Test can be skipped for CI, but should work locally, annotate assumptions and installation details
- No env-var configuration at all
- All code must be private and local to the toolchains package
- New cross-build model with contexts must be used if necessary, not the old os_build, arch_build
- If enough time to implement reuse of a package, the ``cmake_find_package_multi`` generator must be used
|
[
{
"body": "Implement a proof of concept of using the new CMakeToolchain feature for building iOS projects\r\n\r\nNotes:\r\n- Toolchain can be assumed installed\r\n- Focused on creating/building 1 package, both local flow (conan install + native cmake) and ``conan create`` flow.\r\n- Integration test.\r\n- Test can be skipped for CI, but should work locally, annotate assumptions and installation details\r\n- No env-var configuration at all\r\n- All code must be private and local to the toolchains package\r\n- New cross-build model with contexts must be used if necessary, not the old os_build, arch_build\r\n- If enough time to implement reuse of a packages, the ``cmake_find_package_multi`` generator must be used",
"number": 7810,
"title": "[feature] CMakeToolchain + iOS projects PoC"
}
] |
0caff8eb3b284fdd8eca221d9863ef7bf90a5c9c
|
{
"head_commit": "4b536304f7ae4b45453189d182c02cc75c05d36c",
"head_commit_message": "minor changes",
"patch_to_review": "diff --git a/conans/client/toolchain/cmake/__init__.py b/conans/client/toolchain/cmake/__init__.py\nindex d9020743b4f..2e04e7cb575 100644\n--- a/conans/client/toolchain/cmake/__init__.py\n+++ b/conans/client/toolchain/cmake/__init__.py\n@@ -1,4 +1,5 @@\n from .android import CMakeAndroidToolchain\n+from .ios import CMakeiOSToolchain\n from .generic import CMakeGenericToolchain\n \n \n@@ -7,5 +8,7 @@ def CMakeToolchain(conanfile, *args, **kwargs):\n if os_ == 'Android':\n # assert cross_building(conanfile) # FIXME: Conan v2.0, two-profiles approach by default\n return CMakeAndroidToolchain(conanfile, *args, **kwargs)\n+ if os_ == 'iOS':\n+ return CMakeiOSToolchain(conanfile, *args, **kwargs)\n else:\n return CMakeGenericToolchain(conanfile, *args, **kwargs)\ndiff --git a/conans/client/toolchain/cmake/ios.py b/conans/client/toolchain/cmake/ios.py\nnew file mode 100644\nindex 00000000000..1f925be695e\n--- /dev/null\n+++ b/conans/client/toolchain/cmake/ios.py\n@@ -0,0 +1,133 @@\n+import textwrap\n+\n+from .base import CMakeToolchainBase\n+\n+\n+class CMakeiOSToolchain(CMakeToolchainBase):\n+ _template_project_include = ''\n+\n+ _template_toolchain = textwrap.dedent(\"\"\"\n+ # Conan automatically generated toolchain file\n+ # DO NOT EDIT MANUALLY, it will be overwritten\n+ # Avoid including toolchain file several times (bad if appending to variables like\n+ # CMAKE_CXX_FLAGS. See https://github.com/android/ndk/issues/323\n+ if(CONAN_TOOLCHAIN_INCLUDED)\n+ return()\n+ endif()\n+ set(CONAN_TOOLCHAIN_INCLUDED TRUE)\n+ # build_type (Release, Debug, etc) is only defined for single-config generators\n+ {%- if build_type %}\n+ set(CMAKE_BUILD_TYPE \"{{ build_type }}\" CACHE STRING \"Choose the type of build.\" FORCE)\n+ {%- endif %}\n+ get_property( _CMAKE_IN_TRY_COMPILE GLOBAL PROPERTY IN_TRY_COMPILE )\n+ if(_CMAKE_IN_TRY_COMPILE)\n+ message(STATUS \"Running toolchain IN_TRY_COMPILE\")\n+ return()\n+ endif()\n+ message(\"Using Conan toolchain through ${CMAKE_TOOLCHAIN_FILE}.\")\n+ # We are going to adjust automagically many things as requested by Conan\n+ # these are the things done by 'conan_basic_setup()'\n+ set(CMAKE_EXPORT_NO_PACKAGE_REGISTRY ON)\n+ # To support the cmake_find_package generators\n+ {% if cmake_module_path -%}\n+ set(CMAKE_MODULE_PATH {{ cmake_module_path }} ${CMAKE_MODULE_PATH})\n+ {%- endif %}\n+ {% if cmake_prefix_path -%}\n+ set(CMAKE_PREFIX_PATH {{ cmake_prefix_path }} ${CMAKE_PREFIX_PATH})\n+ {%- endif %}\n+ # shared libs\n+ {% if shared_libs -%}\n+ message(STATUS \"Conan toolchain: Setting BUILD_SHARED_LIBS= {{ shared_libs }}\")\n+ set(BUILD_SHARED_LIBS {{ shared_libs }})\n+ {%- endif %}\n+\n+ # C++ Standard\n+ {% if cppstd -%}\n+ message(STATUS \"Conan C++ Standard {{ cppstd }} with extensions {{ cppstd_extensions }}}\")\n+ set(CMAKE_CXX_STANDARD {{ cppstd }})\n+ set(CMAKE_CXX_EXTENSIONS {{ cppstd_extensions }})\n+ {%- endif %}\n+ # Install prefix\n+ {% if install_prefix -%}\n+ set(CMAKE_INSTALL_PREFIX \"{{install_prefix}}\" CACHE STRING \"\" FORCE)\n+ {%- endif %}\n+\n+ # iOS stuff\n+ # conan vars\n+ set(CONAN_SETTINGS_HOST_ARCH \"{{host_architecture}}\")\n+ set(CONAN_SETTINGS_HOST_OS \"{{host_os}}\") # CMAKE_SYSTEM_NAME\n+ set(CONAN_SETTINGS_HOST_OS_VERSION \"{{host_os_version}}\") # SDK_VERSION\n+ set(CONAN_SDK_NAME \"{{host_sdk_name}}\")\n+ # TODO: add logic to calc the deployment target\n+ set(CONAN_SETTINGS_HOST_MIN_OS_VERSION \"{{host_os_min_version}}\") # DEPLOYMENT TARGET\n+\n+ # set cmake vars\n+ set(CMAKE_SYSTEM_NAME 
${CONAN_SETTINGS_HOST_OS})\n+ set(CMAKE_SYSTEM_VERSION ${CONAN_SETTINGS_HOST_OS_VERSION})\n+ set(DEPLOYMENT_TARGET ${CONAN_SETTINGS_HOST_MIN_OS_VERSION})\n+ # Set the architectures for which to build.\n+ set(CMAKE_OSX_ARCHITECTURES ${CONAN_SETTINGS_HOST_ARCH})\n+ # Setting CMAKE_OSX_SYSROOT SDK, when using Xcode generator the name is enough\n+ # but full path is necessary for others\n+ set(CMAKE_OSX_SYSROOT \"${CONAN_SDK_NAME}\")\n+ if(NOT DEFINED CMAKE_XCODE_ATTRIBUTE_DEVELOPMENT_TEAM)\n+ set(CMAKE_XCODE_ATTRIBUTE_DEVELOPMENT_TEAM \"123456789A\" CACHE INTERNAL \"\")\n+ endif()\n+ \"\"\")\n+\n+ def __init__(self, conanfile, build_type=None, **kwargs):\n+ super(CMakeiOSToolchain, self).__init__(conanfile, build_type=build_type, **kwargs)\n+ self.build_type = build_type or self._conanfile.settings.get_safe(\"build_type\")\n+ self.host_architecture = self._get_architecture()\n+ self.host_os = self._conanfile.settings.get_safe(\"os\")\n+ self.host_os_version = self._conanfile.settings.get_safe(\"os.version\")\n+ self.host_sdk_name = self._get_sdk_name(self.host_architecture)\n+ self.host_os_min_version = \"9.0\"\n+ self.libcxx = self._conanfile.settings.get_safe(\"compiler.libcxx\")\n+ self.cppstd = self._conanfile.settings.get_safe(\"compiler.cppstd\")\n+\n+ try:\n+ # This is only defined in the cache, not in the local flow\n+ self.install_prefix = self._conanfile.package_folder.replace(\"\\\\\", \"/\")\n+ except AttributeError:\n+ # FIXME: In the local flow, we don't know the package_folder\n+ self.install_prefix = None\n+\n+ def _get_architecture(self):\n+ # check valid combinations of architecture - os ?\n+ # for iOS a FAT library valid for simulator and device\n+ # can be generated if multiple archs are specified:\n+ # \"-DCMAKE_OSX_ARCHITECTURES=armv7;armv7s;arm64;i386;x86_64\"\n+ arch = self._conanfile.settings.get_safe(\"arch\")\n+ return {\"x86\": \"i386\",\n+ \"x86_64\": \"x86_64\",\n+ \"armv8\": \"arm64\",\n+ \"armv8_32\": \"arm64_32\"}.get(arch, arch)\n+ return None\n+\n+ def _get_sdk_name(self, architecture):\n+ os_name = self._conanfile.settings.get_safe(\"os\")\n+ if \"arm\" in architecture:\n+ return {\"iOS\": \"iphoneos\",\n+ \"watchOS\": \"appletvos\",\n+ \"tvOS\": \"watchos\"}.get(os_name)\n+ else:\n+ return {\"iOS\": \"iphonesimulator\",\n+ \"watchOS\": \"appletvsimulator\",\n+ \"tvOS\": \"watchsimulator\"}.get(os_name)\n+ return None\n+\n+ def _get_template_context_data(self):\n+ tpl_toolchain_context, tpl_project_include_context = \\\n+ super(CMakeiOSToolchain, self)._get_template_context_data()\n+ tpl_toolchain_context.update({\n+ \"host_architecture\": self.host_architecture,\n+ \"host_os\": self.host_os,\n+ \"host_os_version\": self.host_os_version,\n+ \"host_sdk_name\": self.host_sdk_name,\n+ \"host_os_min_version\": self.host_os_min_version,\n+ \"install_prefix\": self.install_prefix,\n+ \"set_libcxx\": self.libcxx,\n+ \"cppstd\": self.cppstd\n+ })\n+ return tpl_toolchain_context, tpl_project_include_context\ndiff --git a/conans/test/integration/toolchains/__init__.py b/conans/test/integration/toolchains/__init__.py\nnew file mode 100644\nindex 00000000000..e69de29bb2d\ndiff --git a/conans/test/integration/toolchains/ios/__init__.py b/conans/test/integration/toolchains/ios/__init__.py\nnew file mode 100644\nindex 00000000000..e69de29bb2d\ndiff --git a/conans/test/integration/toolchains/ios/_utils.py b/conans/test/integration/toolchains/ios/_utils.py\nnew file mode 100644\nindex 00000000000..fd25c959d0b\n--- /dev/null\n+++ 
b/conans/test/integration/toolchains/ios/_utils.py\n@@ -0,0 +1,72 @@\n+import textwrap\n+\n+lib_h = textwrap.dedent(\"\"\"\n+ #pragma once\n+ #include <string>\n+ class HelloLib {\n+ public:\n+ void hello(const std::string& name);\n+ };\n+\"\"\")\n+\n+lib_cpp = textwrap.dedent(\"\"\"\n+ #include \"hello.h\"\n+ #include <iostream>\n+ using namespace std;\n+ void HelloLib::hello(const std::string& name) {\n+ #ifdef DEBUG\n+ std::cout << \"Hello \" << name << \" Debug!\" <<std::endl;\n+ #else\n+ std::cout << \"Hello \" << name << \" Release!\" <<std::endl;\n+ #endif\n+ }\n+\"\"\")\n+\n+cpp_wrapper_h = textwrap.dedent(\"\"\"\n+ #import <Foundation/Foundation.h>\n+ @interface CPP_Wrapper : NSObject\n+ - (void)hello_cpp_wrapped:(NSString *)name;\n+ @end\n+\"\"\")\n+\n+cpp_wrapper_mm = textwrap.dedent(\"\"\"\n+ #import \"cpp-wrapper.h\"\n+ #include \"hello.h\"\n+ @implementation CPP_Wrapper\n+ - (void)hello_cpp_wrapped:(NSString *)name {\n+ HelloLib hello_lib;\n+ hello_lib.hello([name cStringUsingEncoding:NSUTF8StringEncoding]);\n+ }\n+ @end\n+\"\"\")\n+\n+cmakelists = textwrap.dedent(\"\"\"\n+ cmake_minimum_required(VERSION 3.1)\n+ project(MyHello CXX)\n+ set(SOURCES\n+ hello.cpp\n+ cpp-wrapper.mm\n+ )\n+ set(HEADERS\n+ hello.h\n+ cpp-wrapper.h\n+ )\n+ add_library (hello ${SOURCES} ${HEADERS})\n+ set_target_properties(hello PROPERTIES PUBLIC_HEADER \"${HEADERS}\")\n+ install(TARGETS hello\n+ RUNTIME DESTINATION bin\n+ LIBRARY DESTINATION lib\n+ ARCHIVE DESTINATION lib\n+ PUBLIC_HEADER DESTINATION include\n+ )\n+\"\"\")\n+\n+\n+def create_library(client):\n+ client.save({\n+ 'hello.h': lib_h,\n+ 'hello.cpp': lib_cpp,\n+ 'cpp-wrapper.h': cpp_wrapper_h,\n+ 'cpp-wrapper.mm': cpp_wrapper_mm,\n+ 'CMakeLists.txt': cmakelists\n+ })\ndiff --git a/conans/test/integration/toolchains/ios/test_using_cmake.py b/conans/test/integration/toolchains/ios/test_using_cmake.py\nnew file mode 100644\nindex 00000000000..b7eeaa88102\n--- /dev/null\n+++ b/conans/test/integration/toolchains/ios/test_using_cmake.py\n@@ -0,0 +1,80 @@\n+import platform\n+import textwrap\n+import unittest\n+\n+from conans.client.toolchain.cmake.base import CMakeToolchainBase\n+from conans.test.utils.tools import TestClient\n+from ._utils import create_library\n+\n+\[email protected](platform.system() == \"Darwin\", \"Requires XCode\")\n+class ToolchainiOSTestCase(unittest.TestCase):\n+\n+ def setUp(self):\n+ self.t = TestClient()\n+ create_library(self.t)\n+ self._conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile, CMake, CMakeToolchain\n+\n+\n+ class Library(ConanFile):\n+ name = 'hello'\n+ version = '1.0'\n+ settings = 'os', 'arch', 'compiler', 'build_type'\n+ exports_sources = 'hello.h', 'hello.cpp', 'cpp-wrapper.h', 'cpp-wrapper.mm', 'CMakeLists.txt'\n+ options = {{'shared': [True, False]}}\n+ default_options = {{'shared': False}}\n+ _cmake = None\n+\n+ def _configure_cmake(self):\n+ if not self._cmake:\n+ self._cmake = CMake(self, generator={generator}, parallel=False)\n+ self._cmake.configure()\n+ return self._cmake\n+\n+ def toolchain(self):\n+ tc = CMakeToolchain(self)\n+ tc.write_toolchain_files()\n+\n+ def build(self):\n+ cmake = self._configure_cmake()\n+ cmake.configure()\n+ cmake.build()\n+ self.run(\"lipo -info Release-iphoneos/libhello.a\")\n+\n+ def package(self):\n+ cmake = self._configure_cmake()\n+ cmake.install()\n+ \"\"\")\n+\n+ self.t.save({\n+ 'ios_profile': textwrap.dedent(\"\"\"\n+ [settings]\n+ os=iOS\n+ os.version=12.0\n+ arch=armv8\n+ compiler=apple-clang\n+ compiler.version=12.0\n+ 
compiler.libcxx=libc++\n+ build_type=Release\n+ \"\"\")\n+ })\n+\n+ def test_xcode_generator(self):\n+ \"\"\" Simplest approach:\n+ https://cmake.org/cmake/help/latest/manual/cmake-toolchains.7.html#cross-compiling-for-ios-tvos-or-watchos\n+ \"\"\"\n+ self.t.save({'conanfile.py': self._conanfile.format(generator='\"Xcode\"')})\n+\n+ # Build in the cache\n+ self.t.run('create . --profile:build=default --profile:host=ios_profile')\n+ self.assertIn(\"Non-fat file: Release-iphoneos/libhello.a is architecture: arm64\",\n+ self.t.out)\n+\n+ # Build locally\n+ self.t.run('install . --profile:host=ios_profile --profile:build=default')\n+ self.t.run_command('cmake . -G\"Xcode\" -DCMAKE_TOOLCHAIN_FILE={}'.format(CMakeToolchainBase.filename))\n+ self.t.run_command('cmake --build . --config Release')\n+\n+ def test_unix_makefiles_generator(self):\n+ pass\n"
}
|
[
{
"diff_hunk": "@@ -0,0 +1,72 @@\n+import textwrap\n+\n+lib_h = textwrap.dedent(\"\"\"\n+ #pragma once\n+ #include <string>\n+ class HelloLib {\n+ public:\n+ void hello(const std::string& name);\n+ };\n+\"\"\")\n+\n+lib_cpp = textwrap.dedent(\"\"\"\n+ #include \"hello.h\"\n+ #include <iostream>\n+ using namespace std;\n+ void HelloLib::hello(const std::string& name) {\n+ #ifdef DEBUG\n+ std::cout << \"Hello \" << name << \" Debug!\" <<std::endl;\n+ #else\n+ std::cout << \"Hello \" << name << \" Release!\" <<std::endl;\n+ #endif\n+ }\n+\"\"\")\n+\n+cpp_wrapper_h = textwrap.dedent(\"\"\"",
"line": null,
"original_line": 25,
"original_start_line": null,
"path": "conans/test/integration/toolchains/ios/_utils.py",
"start_line": null,
"text": "@user1:\nIsn't it possible to have a pure C++ library compiled for iOS?\r\n\r\nThat would be much better, if the testing code would be generic?. If we could be building exactly the same \"hello world\" C++ library with all toolchains, that would be great.\n\n@author:\nIt is not strictly needed, I just liked the idea of having also the wrapper there, that you will need if you want to call the library from Swift. If if makes it cleaner I'll remove it 👍"
},
{
"diff_hunk": "@@ -0,0 +1,133 @@\n+import textwrap\n+\n+from .base import CMakeToolchainBase\n+\n+\n+class CMakeiOSToolchain(CMakeToolchainBase):\n+ _template_project_include = ''\n+\n+ _template_toolchain = textwrap.dedent(\"\"\"\n+ # Conan automatically generated toolchain file\n+ # DO NOT EDIT MANUALLY, it will be overwritten\n+ # Avoid including toolchain file several times (bad if appending to variables like\n+ # CMAKE_CXX_FLAGS. See https://github.com/android/ndk/issues/323\n+ if(CONAN_TOOLCHAIN_INCLUDED)\n+ return()\n+ endif()\n+ set(CONAN_TOOLCHAIN_INCLUDED TRUE)\n+ # build_type (Release, Debug, etc) is only defined for single-config generators\n+ {%- if build_type %}\n+ set(CMAKE_BUILD_TYPE \"{{ build_type }}\" CACHE STRING \"Choose the type of build.\" FORCE)\n+ {%- endif %}\n+ get_property( _CMAKE_IN_TRY_COMPILE GLOBAL PROPERTY IN_TRY_COMPILE )\n+ if(_CMAKE_IN_TRY_COMPILE)\n+ message(STATUS \"Running toolchain IN_TRY_COMPILE\")\n+ return()\n+ endif()\n+ message(\"Using Conan toolchain through ${CMAKE_TOOLCHAIN_FILE}.\")\n+ # We are going to adjust automagically many things as requested by Conan\n+ # these are the things done by 'conan_basic_setup()'\n+ set(CMAKE_EXPORT_NO_PACKAGE_REGISTRY ON)\n+ # To support the cmake_find_package generators\n+ {% if cmake_module_path -%}\n+ set(CMAKE_MODULE_PATH {{ cmake_module_path }} ${CMAKE_MODULE_PATH})\n+ {%- endif %}\n+ {% if cmake_prefix_path -%}\n+ set(CMAKE_PREFIX_PATH {{ cmake_prefix_path }} ${CMAKE_PREFIX_PATH})\n+ {%- endif %}\n+ # shared libs\n+ {% if shared_libs -%}\n+ message(STATUS \"Conan toolchain: Setting BUILD_SHARED_LIBS= {{ shared_libs }}\")\n+ set(BUILD_SHARED_LIBS {{ shared_libs }})\n+ {%- endif %}\n+\n+ # C++ Standard\n+ {% if cppstd -%}\n+ message(STATUS \"Conan C++ Standard {{ cppstd }} with extensions {{ cppstd_extensions }}}\")\n+ set(CMAKE_CXX_STANDARD {{ cppstd }})\n+ set(CMAKE_CXX_EXTENSIONS {{ cppstd_extensions }})\n+ {%- endif %}\n+ # Install prefix\n+ {% if install_prefix -%}\n+ set(CMAKE_INSTALL_PREFIX \"{{install_prefix}}\" CACHE STRING \"\" FORCE)\n+ {%- endif %}\n+\n+ # iOS stuff\n+ # conan vars\n+ set(CONAN_SETTINGS_HOST_ARCH \"{{host_architecture}}\")\n+ set(CONAN_SETTINGS_HOST_OS \"{{host_os}}\") # CMAKE_SYSTEM_NAME\n+ set(CONAN_SETTINGS_HOST_OS_VERSION \"{{host_os_version}}\") # SDK_VERSION\n+ set(CONAN_SDK_NAME \"{{host_sdk_name}}\")\n+ # TODO: add logic to calc the deployment target\n+ set(CONAN_SETTINGS_HOST_MIN_OS_VERSION \"{{host_os_min_version}}\") # DEPLOYMENT TARGET\n+\n+ # set cmake vars\n+ set(CMAKE_SYSTEM_NAME ${CONAN_SETTINGS_HOST_OS})\n+ set(CMAKE_SYSTEM_VERSION ${CONAN_SETTINGS_HOST_OS_VERSION})\n+ set(DEPLOYMENT_TARGET ${CONAN_SETTINGS_HOST_MIN_OS_VERSION})\n+ # Set the architectures for which to build.\n+ set(CMAKE_OSX_ARCHITECTURES ${CONAN_SETTINGS_HOST_ARCH})\n+ # Setting CMAKE_OSX_SYSROOT SDK, when using Xcode generator the name is enough\n+ # but full path is necessary for others\n+ set(CMAKE_OSX_SYSROOT \"${CONAN_SDK_NAME}\")\n+ if(NOT DEFINED CMAKE_XCODE_ATTRIBUTE_DEVELOPMENT_TEAM)\n+ set(CMAKE_XCODE_ATTRIBUTE_DEVELOPMENT_TEAM \"123456789A\" CACHE INTERNAL \"\")\n+ endif()\n+ \"\"\")\n+\n+ def __init__(self, conanfile, build_type=None, **kwargs):\n+ super(CMakeiOSToolchain, self).__init__(conanfile, build_type=build_type, **kwargs)\n+ self.build_type = build_type or self._conanfile.settings.get_safe(\"build_type\")\n+ self.host_architecture = self._get_architecture()\n+ self.host_os = self._conanfile.settings.get_safe(\"os\")\n+ self.host_os_version = 
self._conanfile.settings.get_safe(\"os.version\")\n+ self.host_sdk_name = self._get_sdk_name(self.host_architecture)\n+ self.host_os_min_version = \"9.0\"",
"line": null,
"original_line": 85,
"original_start_line": null,
"path": "conans/client/toolchain/cmake/ios.py",
"start_line": null,
"text": "@user1:\nThis constant \"9.0\" is a bit concerning, please add at least an explanation, a warning or something."
}
] |
adda7bd4240d72dab91f8be62e4e8b5eaf819f13
|
diff --git a/conans/client/toolchain/cmake/__init__.py b/conans/client/toolchain/cmake/__init__.py
index fb2dfdc0a27..9ca41c03a89 100644
--- a/conans/client/toolchain/cmake/__init__.py
+++ b/conans/client/toolchain/cmake/__init__.py
@@ -1,4 +1,5 @@
from .android import CMakeAndroidToolchain
+from .ios import CMakeiOSToolchain
from .generic import CMakeGenericToolchain
@@ -7,5 +8,7 @@ def CMakeToolchain(conanfile, **kwargs):
if os_ == 'Android':
# assert cross_building(conanfile) # FIXME: Conan v2.0, two-profiles approach by default
return CMakeAndroidToolchain(conanfile, **kwargs)
+ if os_ == 'iOS':
+ return CMakeiOSToolchain(conanfile, **kwargs)
else:
return CMakeGenericToolchain(conanfile, **kwargs)
diff --git a/conans/client/toolchain/cmake/ios.py b/conans/client/toolchain/cmake/ios.py
new file mode 100644
index 00000000000..c22aa430acb
--- /dev/null
+++ b/conans/client/toolchain/cmake/ios.py
@@ -0,0 +1,102 @@
+import textwrap
+
+from .base import CMakeToolchainBase
+
+
+class CMakeiOSToolchain(CMakeToolchainBase):
+ _toolchain_tpl = textwrap.dedent("""
+ {% extends 'base_toolchain' %}
+ {% block before_try_compile %}
+ {{ super() }}
+ # set cmake vars
+ set(CMAKE_SYSTEM_NAME {{ CMAKE_SYSTEM_NAME }})
+ set(CMAKE_SYSTEM_VERSION {{ CMAKE_SYSTEM_VERSION }})
+ set(DEPLOYMENT_TARGET ${CONAN_SETTINGS_HOST_MIN_OS_VERSION})
+ # Set the architectures for which to build.
+ set(CMAKE_OSX_ARCHITECTURES {{ CMAKE_OSX_ARCHITECTURES }})
+ # Setting CMAKE_OSX_SYSROOT SDK, when using Xcode generator the name is enough
+ # but full path is necessary for others
+ set(CMAKE_OSX_SYSROOT {{ CMAKE_OSX_SYSROOT }})
+ if(NOT DEFINED CMAKE_XCODE_ATTRIBUTE_DEVELOPMENT_TEAM)
+ set(CMAKE_XCODE_ATTRIBUTE_DEVELOPMENT_TEAM "123456789A" CACHE INTERNAL "")
+ endif()
+ {% endblock %}
+ {% block main %}
+ {{ super() }}
+ {% if shared_libs -%}
+ message(STATUS "Conan toolchain: Setting BUILD_SHARED_LIBS= {{ shared_libs }}")
+ set(BUILD_SHARED_LIBS {{ shared_libs }})
+ {%- endif %}
+ {% if parallel -%}
+ set(CONAN_CXX_FLAGS "${CONAN_CXX_FLAGS} {{ parallel }}")
+ set(CONAN_C_FLAGS "${CONAN_C_FLAGS} {{ parallel }}")
+ {%- endif %}
+ {% if cppstd -%}
+ message(STATUS "Conan C++ Standard {{ cppstd }} with extensions {{ cppstd_extensions }}}")
+ set(CMAKE_CXX_STANDARD {{ cppstd }})
+ set(CMAKE_CXX_EXTENSIONS {{ cppstd_extensions }})
+ {%- endif %}
+ set(CMAKE_CXX_FLAGS_INIT "${CONAN_CXX_FLAGS}" CACHE STRING "" FORCE)
+ set(CMAKE_C_FLAGS_INIT "${CONAN_C_FLAGS}" CACHE STRING "" FORCE)
+ set(CMAKE_SHARED_LINKER_FLAGS_INIT "${CONAN_SHARED_LINKER_FLAGS}" CACHE STRING "" FORCE)
+ set(CMAKE_EXE_LINKER_FLAGS_INIT "${CONAN_EXE_LINKER_FLAGS}" CACHE STRING "" FORCE)
+ {% endblock %}
+ """)
+
+ def __init__(self, conanfile, build_type=None, **kwargs):
+ super(CMakeiOSToolchain, self).__init__(conanfile, build_type=build_type, **kwargs)
+ self.build_type = build_type or self._conanfile.settings.get_safe("build_type")
+ self.host_architecture = self._get_architecture()
+ self.host_os = self._conanfile.settings.get_safe("os")
+ self.host_os_version = self._conanfile.settings.get_safe("os.version")
+ self.host_sdk_name = self._apple_sdk_name()
+
+ # TODO: Discuss how to handle CMAKE_OSX_DEPLOYMENT_TARGET to set min-version
+ # add a setting? check an option and if not present set a default?
+ # default to os.version?
+
+ def _get_templates(self):
+ templates = super(CMakeiOSToolchain, self)._get_templates()
+ templates.update({
+ CMakeToolchainBase.filename: self._toolchain_tpl,
+ })
+ return templates
+
+ def _get_architecture(self):
+ # check valid combinations of architecture - os ?
+ # for iOS a FAT library valid for simulator and device
+ # can be generated if multiple archs are specified:
+ # "-DCMAKE_OSX_ARCHITECTURES=armv7;armv7s;arm64;i386;x86_64"
+ arch = self._conanfile.settings.get_safe("arch")
+ return {"x86": "i386",
+ "x86_64": "x86_64",
+ "armv8": "arm64",
+ "armv8_32": "arm64_32"}.get(arch, arch)
+ return None
+
+ # TODO: refactor, comes from conans.client.tools.apple.py
+ def _apple_sdk_name(self):
+ """returns proper SDK name suitable for OS and architecture
+ we're building for (considering simulators)"""
+ arch = self._conanfile.settings.get_safe('arch')
+ os_ = self._conanfile.settings.get_safe('os')
+ if str(arch).startswith('x86'):
+ return {'Macos': 'macosx',
+ 'iOS': 'iphonesimulator',
+ 'watchOS': 'watchsimulator',
+ 'tvOS': 'appletvsimulator'}.get(str(os_))
+ else:
+ return {'Macos': 'macosx',
+ 'iOS': 'iphoneos',
+ 'watchOS': 'watchos',
+ 'tvOS': 'appletvos'}.get(str(os_), None)
+
+ def _get_template_context_data(self):
+ ctxt_toolchain, _ = super(CMakeiOSToolchain, self)._get_template_context_data()
+ ctxt_toolchain.update({
+ "CMAKE_OSX_ARCHITECTURES": self.host_architecture,
+ "CMAKE_SYSTEM_NAME": self.host_os,
+ "CMAKE_SYSTEM_VERSION": self.host_os_version,
+ "CMAKE_OSX_SYSROOT": self.host_sdk_name
+ })
+ return ctxt_toolchain, {}
diff --git a/conans/test/integration/toolchains/ios/__init__.py b/conans/test/integration/toolchains/ios/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/conans/test/integration/toolchains/ios/_utils.py b/conans/test/integration/toolchains/ios/_utils.py
new file mode 100644
index 00000000000..10caeec6333
--- /dev/null
+++ b/conans/test/integration/toolchains/ios/_utils.py
@@ -0,0 +1,50 @@
+import textwrap
+
+lib_h = textwrap.dedent("""
+ #pragma once
+ #include <string>
+ class HelloLib {
+ public:
+ void hello(const std::string& name);
+ };
+""")
+
+lib_cpp = textwrap.dedent("""
+ #include "hello.h"
+ #include <iostream>
+ using namespace std;
+ void HelloLib::hello(const std::string& name) {
+ #ifdef DEBUG
+ std::cout << "Hello " << name << " Debug!" <<std::endl;
+ #else
+ std::cout << "Hello " << name << " Release!" <<std::endl;
+ #endif
+ }
+""")
+
+cmakelists = textwrap.dedent("""
+ cmake_minimum_required(VERSION 3.1)
+ project(MyHello CXX)
+ set(SOURCES
+ hello.cpp
+ )
+ set(HEADERS
+ hello.h
+ )
+ add_library (hello ${SOURCES} ${HEADERS})
+ set_target_properties(hello PROPERTIES PUBLIC_HEADER "${HEADERS}")
+ install(TARGETS hello
+ RUNTIME DESTINATION bin
+ LIBRARY DESTINATION lib
+ ARCHIVE DESTINATION lib
+ PUBLIC_HEADER DESTINATION include
+ )
+""")
+
+
+def create_library(client):
+ client.save({
+ 'hello.h': lib_h,
+ 'hello.cpp': lib_cpp,
+ 'CMakeLists.txt': cmakelists
+ })
diff --git a/conans/test/integration/toolchains/ios/test_using_cmake.py b/conans/test/integration/toolchains/ios/test_using_cmake.py
new file mode 100644
index 00000000000..7348201953d
--- /dev/null
+++ b/conans/test/integration/toolchains/ios/test_using_cmake.py
@@ -0,0 +1,81 @@
+import platform
+import textwrap
+import unittest
+
+from conans.client.toolchain.cmake.base import CMakeToolchainBase
+from conans.test.utils.tools import TestClient
+from ._utils import create_library
+
+
[email protected](platform.system() == "Darwin", "Requires XCode")
+class ToolchainiOSTestCase(unittest.TestCase):
+
+ def setUp(self):
+ self.t = TestClient()
+ create_library(self.t)
+ self._conanfile = textwrap.dedent("""
+ from conans import ConanFile, CMake, CMakeToolchain
+
+
+ class Library(ConanFile):
+ name = 'hello'
+ version = '1.0'
+ settings = 'os', 'arch', 'compiler', 'build_type'
+ exports_sources = 'hello.h', 'hello.cpp', 'CMakeLists.txt'
+ options = {{'shared': [True, False]}}
+ default_options = {{'shared': False}}
+ _cmake = None
+
+ def _configure_cmake(self):
+ if not self._cmake:
+ self._cmake = CMake(self, generator={generator}, parallel=False)
+ self._cmake.configure()
+ return self._cmake
+
+ def toolchain(self):
+ tc = CMakeToolchain(self)
+ tc.write_toolchain_files()
+
+ def build(self):
+ cmake = self._configure_cmake()
+ cmake.configure()
+ cmake.build()
+ self.run("lipo -info Release-iphoneos/libhello.a")
+
+ def package(self):
+ cmake = self._configure_cmake()
+ cmake.install()
+ """)
+
+ self.t.save({
+ 'ios_profile': textwrap.dedent("""
+ [settings]
+ os=iOS
+ os.version=12.0
+ arch=armv8
+ compiler=apple-clang
+ compiler.version=12.0
+ compiler.libcxx=libc++
+ build_type=Release
+ """)
+ })
+
+ def test_xcode_generator(self):
+ """ Simplest approach:
+ https://cmake.org/cmake/help/latest/manual/cmake-toolchains.7.html#cross-compiling-for-ios-tvos-or-watchos
+ """
+ self.t.save({'conanfile.py': self._conanfile.format(generator='"Xcode"')})
+
+ # Build in the cache
+ self.t.run('create . --profile:build=default --profile:host=ios_profile')
+ self.assertIn("Non-fat file: Release-iphoneos/libhello.a is architecture: arm64", self.t.out)
+
+ # Build locally
+ self.t.run('install . --profile:host=ios_profile --profile:build=default')
+ self.t.run_command('cmake . -G"Xcode" -DCMAKE_TOOLCHAIN_FILE={}'.format(CMakeToolchainBase.filename))
+ self.t.run_command('cmake --build . --config Release')
+ self.t.run_command("lipo -info Release-iphoneos/libhello.a")
+ self.assertIn("Non-fat file: Release-iphoneos/libhello.a is architecture: arm64", self.t.out)
+
+ def test_unix_makefiles_generator(self):
+ pass
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
}
|
|
conan-io__conan-7843@9559fdc
|
conan-io/conan
|
Python
| 7,843
|
[poc] CMake + Android
|
Changelog: Feature: Add POC on a toolchain for Android (using CMake provided modules).
Docs: https://github.com/conan-io/docs/pull/1902
Close #7809
There are two main approaches to compiling for Android using CMake; both of them should be equivalent:
* Using CMake provided toolchains: https://cmake.org/cmake/help/latest/manual/cmake-toolchains.7.html#cross-compiling-for-android-with-the-ndk
* Using AndroidNDK provided toolchain: https://developer.android.com/ndk/guides/cmake#command-line_1
Not sure which one is better; the first one is implemented here (a rough sketch of both invocations follows below). Do we want to let the user choose?
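For reference, a rough sketch of what each approach asks the consumer to define. The NDK path, API level and the `as_cmake_args` helper are illustrative assumptions, not part of this PR:
```python
# Hypothetical comparison of the two invocation styles, expressed as the -D cache
# entries each one needs.  Paths and versions below are example assumptions.
ndk_root = "/opt/android-ndk-r21"

# 1) CMake's built-in NDK support (the route this PR takes): point CMake at the NDK
#    and let its own Android platform modules configure the compilers.
cmake_builtin = {
    "CMAKE_SYSTEM_NAME": "Android",
    "CMAKE_SYSTEM_VERSION": "23",            # settings.os.api_level
    "CMAKE_ANDROID_ARCH_ABI": "arm64-v8a",   # settings.arch 'armv8'
    "CMAKE_ANDROID_NDK": ndk_root,
    "CMAKE_ANDROID_STL_TYPE": "c++_static",
}

# 2) NDK-provided toolchain file: a single CMAKE_TOOLCHAIN_FILE entry, the rest is
#    driven by ANDROID_* variables understood by that file.
ndk_toolchain = {
    "CMAKE_TOOLCHAIN_FILE": ndk_root + "/build/cmake/android.toolchain.cmake",
    "ANDROID_ABI": "arm64-v8a",
    "ANDROID_PLATFORM": "android-23",
}

def as_cmake_args(defs):
    # Render either mapping as command-line definitions for a plain 'cmake' invocation
    return " ".join('-D{}="{}"'.format(key, value) for key, value in defs.items())
```
Either mapping could be passed straight to `cmake` as `-D` definitions; this PR takes the first route, rendering the variables into `conan_toolchain.cmake` instead of the command line.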
---
- [ ] There are many TODOs to work on yet
|
2020-10-07T16:50:48Z
|
[feature] CMakeToolchain + Android POC
Implement a proof of concept of using the new CMakeToolchain feature for building:
- Android projects (from Linux/OSX)
- From Windows
Notes:
- Android NDK can be assumed installed
- Focused on creating/building 1 package, both local flow (conan install + native cmake) and ``conan create`` flow.
- Integration test.
- Test can be skipped for CI, but should work locally, annotate assumptions and installation details
- No env-var configuration at all
- All code must be private and local to the toolchains package
- New cross-build model with contexts must be used if necessary, not the old os_build, arch_build
- If enough time to implement reuse of a package, the ``cmake_find_package_multi`` generator must be used
|
[
{
"body": "Implement a proof of concept of using the new CMakeToolchain feature for building:\r\n\r\n- Android projects (from Linux/OSX)\r\n- From Windows\r\n\r\nNotes:\r\n- Android NDK can be assumed installed\r\n- Focused on creating/building 1 package, both local flow (conan install + native cmake) and ``conan create`` flow.\r\n- Integration test.\r\n- Test can be skipped for CI, but should work locally, annotate assumptions and installation details\r\n- No env-var configuration at all\r\n- All code must be private and local to the toolchains package\r\n- New cross-build model with contexts must be used if necessary, not the old os_build, arch_build\r\n- If enough time to implement reuse of a package, the ``cmake_find_package_multi`` generator must be used\r\n",
"number": 7809,
"title": "[feature] CMakeToolchain + Android POC"
}
] |
d27ac1743facf2b29b0964c5cc47762411e6aab3
|
{
"head_commit": "9559fdc392195345e189b5db419f73c3e0b62702",
"head_commit_message": "moving around vars",
"patch_to_review": "diff --git a/conans/client/build/cmake_toolchain_build_helper.py b/conans/client/build/cmake_toolchain_build_helper.py\nindex b1a19c98702..2df4208dff3 100644\n--- a/conans/client/build/cmake_toolchain_build_helper.py\n+++ b/conans/client/build/cmake_toolchain_build_helper.py\n@@ -4,7 +4,7 @@\n from conans.client import tools\n from conans.client.build import defs_to_string, join_arguments\n from conans.client.build.cmake_flags import is_multi_configuration, get_generator\n-from conans.client.toolchain.cmake import CMakeToolchain\n+from conans.client.toolchain.cmake.base import CMakeToolchainBase\n from conans.client.tools.files import chdir\n from conans.client.tools.oss import cpu_count, args_to_string\n from conans.errors import ConanException\n@@ -74,7 +74,7 @@ def configure(self, source_folder=None):\n if self._build_folder:\n build_folder = os.path.join(self._conanfile.build_folder, self._build_folder)\n \n- defs = {\"CMAKE_TOOLCHAIN_FILE\": CMakeToolchain.filename}\n+ defs = {\"CMAKE_TOOLCHAIN_FILE\": CMakeToolchainBase.filename}\n \n mkdir(build_folder)\n arg_list = join_arguments([\ndiff --git a/conans/client/toolchain/cmake/__init__.py b/conans/client/toolchain/cmake/__init__.py\nnew file mode 100644\nindex 00000000000..da7d77e91a2\n--- /dev/null\n+++ b/conans/client/toolchain/cmake/__init__.py\n@@ -0,0 +1,16 @@\n+from conans.client.tools import cross_building\n+from .native import CMakeNativeToolchain\n+from .android import CMakeAndroidToolchain\n+\n+\n+def CMakeToolchain(conanfile, **kwargs):\n+ if not cross_building(conanfile):\n+ return CMakeNativeToolchain(conanfile=conanfile, **kwargs)\n+ else:\n+ # Exceptions to cross-building scenarios\n+ if conanfile.settings.os == 'Windows' and conanfile.settings_build.os == 'Windows':\n+ return CMakeNativeToolchain(conanfile=conanfile, **kwargs)\n+\n+ # Actual cross-building\n+ if conanfile.settings.os == 'Android':\n+ return CMakeAndroidToolchain(conanfile=conanfile, **kwargs)\ndiff --git a/conans/client/toolchain/cmake/android.py b/conans/client/toolchain/cmake/android.py\nnew file mode 100644\nindex 00000000000..e0741fefcdb\n--- /dev/null\n+++ b/conans/client/toolchain/cmake/android.py\n@@ -0,0 +1,129 @@\n+import textwrap\n+\n+from .base import CMakeToolchainBase\n+\n+\n+class CMakeAndroidToolchain(CMakeToolchainBase):\n+ _template_project_include = '' # TODO: This file is not useful to Android, there is no MSVC runtime MD/MT\n+\n+ # TODO: Factorize with the native one\n+ _template_toolchain = textwrap.dedent(\"\"\"\n+ # Conan automatically generated toolchain file\n+ # DO NOT EDIT MANUALLY, it will be overwritten\n+\n+ # Avoid including toolchain file several times (bad if appending to variables like\n+ # CMAKE_CXX_FLAGS. 
See https://github.com/android/ndk/issues/323\n+ if(CONAN_TOOLCHAIN_INCLUDED)\n+ return()\n+ endif()\n+ set(CONAN_TOOLCHAIN_INCLUDED TRUE)\n+\n+ # build_type (Release, Debug, etc) is only defined for single-config generators\n+ {%- if build_type %}\n+ set(CMAKE_BUILD_TYPE \"{{ build_type }}\" CACHE STRING \"Choose the type of build.\" FORCE)\n+ {%- endif %}\n+\n+ get_property( _CMAKE_IN_TRY_COMPILE GLOBAL PROPERTY IN_TRY_COMPILE )\n+ if(_CMAKE_IN_TRY_COMPILE)\n+ message(STATUS \"Running toolchain IN_TRY_COMPILE\")\n+ return()\n+ endif()\n+\n+ message(\"Using Conan toolchain through ${CMAKE_TOOLCHAIN_FILE}.\")\n+\n+ # We are going to adjust automagically many things as requested by Conan\n+ # these are the things done by 'conan_basic_setup()'\n+ set(CMAKE_EXPORT_NO_PACKAGE_REGISTRY ON)\n+\n+ # To support the cmake_find_package generators\n+ {% if cmake_module_path -%}\n+ set(CMAKE_MODULE_PATH {{ cmake_module_path }} ${CMAKE_MODULE_PATH})\n+ {%- endif %}\n+ {% if cmake_prefix_path -%}\n+ set(CMAKE_PREFIX_PATH {{ cmake_prefix_path }} ${CMAKE_PREFIX_PATH})\n+ {%- endif %}\n+\n+ # shared libs\n+ {% if shared_libs -%}\n+ message(STATUS \"Conan toolchain: Setting BUILD_SHARED_LIBS= {{ shared_libs }}\")\n+ set(BUILD_SHARED_LIBS {{ shared_libs }})\n+ {%- endif %}\n+\n+ # Parallel builds\n+ {% if parallel -%}\n+ set(CONAN_CXX_FLAGS \"${CONAN_CXX_FLAGS} {{ parallel }}\")\n+ set(CONAN_C_FLAGS \"${CONAN_C_FLAGS} {{ parallel }}\")\n+ {%- endif %}\n+\n+ # C++ Standard\n+ {% if cppstd -%}\n+ message(STATUS \"Conan C++ Standard {{ cppstd }} with extensions {{ cppstd_extensions }}}\")\n+ set(CMAKE_CXX_STANDARD {{ cppstd }})\n+ set(CMAKE_CXX_EXTENSIONS {{ cppstd_extensions }})\n+ {%- endif %}\n+\n+ # Install prefix\n+ {% if install_prefix -%}\n+ set(CMAKE_INSTALL_PREFIX \"{{install_prefix}}\" CACHE STRING \"\" FORCE)\n+ {%- endif %}\n+\n+ # Variables\n+ {% for it, value in variables.items() -%}\n+ set({{ it }} \"{{ value }}\")\n+ {% endfor %}\n+ # Variables per configuration\n+ {% for it, values in variables_config.items() -%}\n+ {%- set genexpr = namespace(str='') %}\n+ {%- for conf, value in values -%}\n+ {%- set genexpr.str = genexpr.str +\n+ '$<IF:$<CONFIG:' + conf + '>,\"' + value|string + '\",' %}\n+ {%- if loop.last %}{% set genexpr.str = genexpr.str + '\"\"' -%}{%- endif -%}\n+ {%- endfor -%}\n+ {% for i in range(values|count) %}{%- set genexpr.str = genexpr.str + '>' %}\n+ {%- endfor -%}\n+ set({{ it }} {{ genexpr.str }})\n+ {% endfor %}\n+\n+ # Preprocessor definitions\n+ {% for it, value in preprocessor_definitions.items() -%}\n+ # add_compile_definitions only works in cmake >= 3.12\n+ add_definitions(-D{{ it }}=\"{{ value }}\")\n+ {% endfor %}\n+ # Preprocessor definitions per configuration\n+ {% for it, values in preprocessor_definitions_config.items() -%}\n+ {%- set genexpr = namespace(str='') %}\n+ {%- for conf, value in values -%}\n+ {%- set genexpr.str = genexpr.str +\n+ '$<IF:$<CONFIG:' + conf + '>,\"' + value|string + '\",' %}\n+ {%- if loop.last %}{% set genexpr.str = genexpr.str + '\"\"' -%}{%- endif -%}\n+ {%- endfor -%}\n+ {% for i in range(values|count) %}{%- set genexpr.str = genexpr.str + '>' %}\n+ {%- endfor -%}\n+ add_definitions(-D{{ it }}={{ genexpr.str }})\n+ {% endfor %}\n+ \"\"\")\n+\n+ # TODO: fPIC, fPIE\n+ # TODO: RPATH, cross-compiling to Android?\n+ # TODO: libcxx, only libc++ https://developer.android.com/ndk/guides/cpp-support\n+\n+ def __init__(self, build_type=None, **kwargs):\n+ super(CMakeAndroidToolchain, self).__init__(build_type=build_type, **kwargs)\n+ # TODO: 
Is this abuse of 'variables' attribute?\n+ self.variables['CMAKE_SYSTEM_NAME'] = 'Android'\n+ self.variables['CMAKE_SYSTEM_VERSION'] = self._conanfile.settings.os.api_level\n+ self.variables['CMAKE_ANDROID_ARCH_ABI'] = self._get_android_abi()\n+ self.variables['CMAKE_ANDROID_NDK'] = '/Users/jgsogo/Library/Android/sdk/ndk/21.0.6113669' # TODO: ???\n+ self.variables['CMAKE_ANDROID_STL_TYPE'] = self._get_android_stl()\n+\n+ self.build_type = build_type or self._conanfile.settings.get_safe(\"build_type\")\n+\n+ def _get_android_abi(self):\n+ return {\"x86\": \"x86\",\n+ \"x86_64\": \"x86_64\",\n+ \"armv7\": \"armeabi-v7a\",\n+ \"armv8\": \"arm64-v8a\"}.get(str(self._conanfile.settings.arch))\n+\n+ def _get_android_stl(self):\n+ libcxx_str = str(self._conanfile.settings.compiler.libcxx)\n+ return libcxx_str # TODO: only 'c++_shared' y 'c++_static' supported?\ndiff --git a/conans/client/toolchain/cmake/base.py b/conans/client/toolchain/cmake/base.py\nnew file mode 100644\nindex 00000000000..3ddafae958b\n--- /dev/null\n+++ b/conans/client/toolchain/cmake/base.py\n@@ -0,0 +1,94 @@\n+import os\n+from collections import OrderedDict, defaultdict\n+\n+from jinja2 import Template\n+\n+from conans.client.build.cmake_flags import is_multi_configuration\n+from conans.errors import ConanException\n+from conans.util.files import save\n+\n+\n+class Variables(OrderedDict):\n+ _configuration_types = None # Needed for py27 to avoid infinite recursion\n+\n+ def __init__(self):\n+ super(Variables, self).__init__()\n+ self._configuration_types = {}\n+\n+ def __getattribute__(self, config):\n+ try:\n+ return super(Variables, self).__getattribute__(config)\n+ except AttributeError:\n+ return self._configuration_types.setdefault(config, dict())\n+\n+ @property\n+ def configuration_types(self):\n+ # Reverse index for the configuration_types variables\n+ ret = defaultdict(list)\n+ for conf, definitions in self._configuration_types.items():\n+ for k, v in definitions.items():\n+ ret[k].append((conf, v))\n+ return ret\n+\n+\n+class CMakeToolchainBase(object):\n+ filename = \"conan_toolchain.cmake\"\n+ project_include_filename = \"conan_project_include.cmake\"\n+\n+ _template_project_include = None\n+ _template_toolchain = None\n+\n+ def __init__(self, conanfile, **kwargs):\n+ self._conanfile = conanfile\n+ self.variables = Variables()\n+ self.preprocessor_definitions = Variables()\n+\n+ # To find the generated cmake_find_package finders\n+ self.cmake_prefix_path = \"${CMAKE_BINARY_DIR}\"\n+ self.cmake_module_path = \"${CMAKE_BINARY_DIR}\"\n+\n+ try:\n+ # This is only defined in the cache, not in the local flow\n+ self.install_prefix = self._conanfile.package_folder.replace(\"\\\\\", \"/\")\n+ except AttributeError:\n+ # FIXME: In the local flow, we don't know the package_folder\n+ self.install_prefix = None\n+\n+ try:\n+ self._build_shared_libs = \"ON\" if self._conanfile.options.shared else \"OFF\"\n+ except ConanException:\n+ self._build_shared_libs = None\n+\n+ self.build_type = None\n+\n+ def _get_template_context_data(self):\n+ \"\"\" Returns two dictionaries, the context for the '_template_toolchain' and\n+ the context for the '_template_project_include' templates.\n+ \"\"\"\n+ tpl_toolchain_context = {\n+ \"variables\": self.variables,\n+ \"variables_config\": self.variables.configuration_types,\n+ \"preprocessor_definitions\": self.preprocessor_definitions,\n+ \"preprocessor_definitions_config\": self.preprocessor_definitions.configuration_types,\n+ \"cmake_prefix_path\": self.cmake_prefix_path,\n+ 
\"cmake_module_path\": self.cmake_module_path,\n+ \"install_prefix\": self.install_prefix,\n+ \"shared_libs\": self._build_shared_libs,\n+ \"build_type\": self.build_type,\n+ }\n+ return tpl_toolchain_context, {}\n+\n+ def write_toolchain_files(self):\n+ tpl_toolchain_context, tpl_project_include_context = self._get_template_context_data()\n+\n+ # Make it absolute, wrt to current folder, set by the caller\n+ conan_project_include_cmake = os.path.abspath(self.project_include_filename)\n+ conan_project_include_cmake = conan_project_include_cmake.replace(\"\\\\\", \"/\")\n+ t = Template(self._template_project_include)\n+ content = t.render(**tpl_project_include_context)\n+ save(conan_project_include_cmake, content)\n+\n+ t = Template(self._template_toolchain)\n+ content = t.render(conan_project_include_cmake=conan_project_include_cmake,\n+ **tpl_toolchain_context)\n+ save(self.filename, content)\ndiff --git a/conans/client/toolchain/cmake.py b/conans/client/toolchain/cmake/native.py\nsimilarity index 80%\nrename from conans/client/toolchain/cmake.py\nrename to conans/client/toolchain/cmake/native.py\nindex 0e65e39b215..53c98898f6b 100644\n--- a/conans/client/toolchain/cmake.py\n+++ b/conans/client/toolchain/cmake/native.py\n@@ -1,15 +1,11 @@\n-import os\n import textwrap\n-from collections import OrderedDict, defaultdict\n \n-from jinja2 import Template\n-\n-from conans.client.build.cmake_flags import get_generator, get_generator_platform, get_toolset, \\\n+from conans.client.build.cmake_flags import get_generator, get_generator_platform, get_toolset, \\\n is_multi_configuration\n from conans.client.build.compiler_flags import architecture_flag\n from conans.client.tools import cpu_count\n from conans.errors import ConanException\n-from conans.util.files import save\n+from .base import CMakeToolchainBase\n \n \n # https://stackoverflow.com/questions/30503631/cmake-in-which-order-are-files-parsed-cache-toolchain-etc\n@@ -17,32 +13,7 @@\n # https://github.com/microsoft/vcpkg/tree/master/scripts/buildsystems\n \n \n-class Variables(OrderedDict):\n- _configuration_types = None # Needed for py27 to avoid infinite recursion\n-\n- def __init__(self):\n- super(Variables, self).__init__()\n- self._configuration_types = {}\n-\n- def __getattribute__(self, config):\n- try:\n- return super(Variables, self).__getattribute__(config)\n- except AttributeError:\n- return self._configuration_types.setdefault(config, dict())\n-\n- @property\n- def configuration_types(self):\n- # Reverse index for the configuration_types variables\n- ret = defaultdict(list)\n- for conf, definitions in self._configuration_types.items():\n- for k, v in definitions.items():\n- ret[k].append((conf, v))\n- return ret\n-\n-\n-class CMakeToolchain(object):\n- filename = \"conan_toolchain.cmake\"\n-\n+class CMakeNativeToolchain(CMakeToolchainBase):\n _template_toolchain = textwrap.dedent(\"\"\"\n # Conan automatically generated toolchain file\n # DO NOT EDIT MANUALLY, it will be overwritten\n@@ -239,29 +210,18 @@ class CMakeToolchain(object):\n \n def __init__(self, conanfile, generator=None, generator_platform=None, build_type=None,\n toolset=None, parallel=True):\n- self._conanfile = conanfile\n+ super(CMakeNativeToolchain, self).__init__(conanfile)\n \n self.fpic = self._deduce_fpic()\n self.vs_static_runtime = self._deduce_vs_static_runtime()\n self.parallel = parallel\n \n- # To find the generated cmake_find_package finders\n- self.cmake_prefix_path = \"${CMAKE_BINARY_DIR}\"\n- self.cmake_module_path = \"${CMAKE_BINARY_DIR}\"\n-\n 
self.generator = generator or get_generator(self._conanfile)\n self.generator_platform = (generator_platform or\n get_generator_platform(self._conanfile.settings,\n self.generator))\n self.toolset = toolset or get_toolset(self._conanfile.settings, self.generator)\n \n- self.variables = Variables()\n- self.preprocessor_definitions = Variables()\n- try:\n- self._build_shared_libs = \"ON\" if self._conanfile.options.shared else \"OFF\"\n- except ConanException:\n- self._build_shared_libs = None\n-\n self.set_libcxx, self.glibcxx = self._get_libcxx()\n \n self.parallel = None\n@@ -277,12 +237,6 @@ def __init__(self, conanfile, generator=None, generator_platform=None, build_typ\n # TODO: I would want to have here the path to the compiler too\n build_type = build_type or self._conanfile.settings.get_safe(\"build_type\")\n self.build_type = build_type if not is_multi_configuration(self.generator) else None\n- try:\n- # This is only defined in the cache, not in the local flow\n- self.install_prefix = self._conanfile.package_folder.replace(\"\\\\\", \"/\")\n- except AttributeError:\n- # FIXME: In the local flow, we don't know the package_folder\n- self.install_prefix = None\n \n def _deduce_fpic(self):\n fpic = self._conanfile.options.get_safe(\"fPIC\")\n@@ -306,7 +260,7 @@ def _get_architecture(self):\n def _deduce_vs_static_runtime(self):\n settings = self._conanfile.settings\n if (settings.get_safe(\"compiler\") == \"Visual Studio\" and\n- \"MT\" in settings.get_safe(\"compiler.runtime\")):\n+ \"MT\" in settings.get_safe(\"compiler.runtime\")):\n return True\n return False\n \n@@ -352,39 +306,20 @@ def _cppstd(self):\n cppstd_extensions = \"OFF\"\n return cppstd, cppstd_extensions\n \n- def write_toolchain_files(self):\n- # Make it absolute, wrt to current folder, set by the caller\n- conan_project_include_cmake = os.path.abspath(\"conan_project_include.cmake\")\n- conan_project_include_cmake = conan_project_include_cmake.replace(\"\\\\\", \"/\")\n- t = Template(self._template_project_include)\n- content = t.render(vs_static_runtime=self.vs_static_runtime)\n- save(conan_project_include_cmake, content)\n-\n- # TODO: I need the profile_host and profile_build here!\n- # TODO: What if the compiler is a build require?\n- # TODO: Add all the stuff related to settings (ALL settings or just _MY_ settings?)\n-\n- context = {\n- \"variables\": self.variables,\n- \"variables_config\": self.variables.configuration_types,\n- \"preprocessor_definitions\": self.preprocessor_definitions,\n- \"preprocessor_definitions_config\": self.preprocessor_definitions.configuration_types,\n- \"build_type\": self.build_type,\n+ def _get_template_context_data(self):\n+ tpl_toolchain_context, tpl_project_include_context = \\\n+ super(CMakeNativeToolchain, self)._get_template_context_data()\n+ tpl_toolchain_context.update({\n \"generator_platform\": self.generator_platform,\n \"toolset\": self.toolset,\n- \"cmake_prefix_path\": self.cmake_prefix_path,\n- \"cmake_module_path\": self.cmake_module_path,\n \"fpic\": self.fpic,\n \"skip_rpath\": self.skip_rpath,\n \"set_libcxx\": self.set_libcxx,\n \"glibcxx\": self.glibcxx,\n- \"install_prefix\": self.install_prefix,\n \"parallel\": self.parallel,\n \"cppstd\": self.cppstd,\n \"cppstd_extensions\": self.cppstd_extensions,\n- \"shared_libs\": self._build_shared_libs,\n \"architecture\": self.architecture\n- }\n- t = Template(self._template_toolchain)\n- content = t.render(conan_project_include_cmake=conan_project_include_cmake, **context)\n- save(self.filename, content)\n+ })\n+ 
tpl_project_include_context.update({'vs_static_runtime': self.vs_static_runtime})\n+ return tpl_toolchain_context, tpl_project_include_context\ndiff --git a/conans/test/integration/toolchains/__init__.py b/conans/test/integration/toolchains/__init__.py\nnew file mode 100644\nindex 00000000000..e69de29bb2d\ndiff --git a/conans/test/integration/toolchains/android/__init__.py b/conans/test/integration/toolchains/android/__init__.py\nnew file mode 100644\nindex 00000000000..e69de29bb2d\ndiff --git a/conans/test/integration/toolchains/android/_utils.py b/conans/test/integration/toolchains/android/_utils.py\nnew file mode 100644\nindex 00000000000..216c46123fe\n--- /dev/null\n+++ b/conans/test/integration/toolchains/android/_utils.py\n@@ -0,0 +1,38 @@\n+import textwrap\n+\n+lib_h = textwrap.dedent(\"\"\"\n+ int some_function(int value);\n+\"\"\")\n+\n+lib_cpp = textwrap.dedent(\"\"\"\n+ #include \"lib.h\"\n+ #include <iostream>\n+\n+ int some_function(int value) {\n+ std::cout << \"some_function(value=\" << value << \")\" << std::endl;\n+ return 42;\n+ }\n+\"\"\")\n+\n+cmakelists = textwrap.dedent(\"\"\"\n+ cmake_minimum_required(VERSION 2.8.12) # TODO: Define minimun required here\n+ project(AndroidLibrary CXX)\n+\n+ add_library(library lib.h lib.cpp)\n+ set_target_properties(library PROPERTIES PUBLIC_HEADER lib.h)\n+\n+ install(TARGETS library\n+ RUNTIME DESTINATION bin\n+ LIBRARY DESTINATION lib\n+ ARCHIVE DESTINATION lib\n+ PUBLIC_HEADER DESTINATION include\n+ )\n+\"\"\")\n+\n+\n+def create_library(client):\n+ client.save({\n+ 'lib.h': lib_h,\n+ 'lib.cpp': lib_cpp,\n+ 'CMakeLists.txt': cmakelists\n+ })\ndiff --git a/conans/test/integration/toolchains/android/test_using_cmake.py b/conans/test/integration/toolchains/android/test_using_cmake.py\nnew file mode 100644\nindex 00000000000..768a7bc3c05\n--- /dev/null\n+++ b/conans/test/integration/toolchains/android/test_using_cmake.py\n@@ -0,0 +1,81 @@\n+import shutil\n+import textwrap\n+import unittest\n+\n+from conans.test.utils.tools import TestClient\n+from ._utils import create_library\n+from conans.client.toolchain.cmake.base import CMakeToolchainBase\n+\n+class SystemToolsTestCase(unittest.TestCase):\n+ # This test assumes that 'CMake' and 'AndroidNDK' are available in the system\n+ #\n+ # Guidelines: https://developer.android.com/ndk/guides/cmake#command-line\n+\n+ @classmethod\n+ def setUpClass(cls):\n+ if not shutil.which('cmake'):\n+ raise unittest.SkipTest(\"CMake expected in PATH\")\n+ if not shutil.which('cmake'):\n+ raise unittest.SkipTest(\"CMake expected in PATH\")\n+\n+ def setUp(self):\n+ current_folder = '/private/var/folders/fc/6mvcrc952dqcjfhl4c7c11ph0000gn/T/tmp4xr45tt5conans/path with spaces'\n+ self.t = TestClient(current_folder=current_folder)\n+ create_library(self.t)\n+ self.t.save({\n+ 'conanfile.py': textwrap.dedent(\"\"\"\n+ from conans import ConanFile, CMake, CMakeToolchain\n+\n+ class Library(ConanFile):\n+ name = 'library'\n+ settings = 'os', 'arch', 'compiler', 'build_type'\n+ exports_sources = \"CMakeLists.txt\", \"lib.h\", \"lib.cpp\"\n+ options = {'shared': [True, False]}\n+ default_options = {'shared': False}\n+\n+ def toolchain(self):\n+ tc = CMakeToolchain(self)\n+ tc.write_toolchain_files()\n+\n+ def build(self):\n+ cmake = CMake(self)\n+ cmake.configure()\n+\n+ def package(self):\n+ cmake = CMake(self)\n+ cmake.install()\n+ \"\"\"),\n+ 'profile_host': textwrap.dedent(\"\"\"\n+ [settings]\n+ os=Android\n+ os.api_level=23\n+ arch=x86_64\n+ compiler=clang\n+ compiler.version=9\n+ 
compiler.libcxx=c++_shared\n+ build_type=Release\n+ \"\"\")\n+ })\n+\n+ def test_regular_build(self):\n+ # TODO: Remove this test, useless besides validating this project\n+ self.t.run('create . library/version@ --profile:host=default --profile:build=default')\n+\n+ def test_use_cmake_toolchain(self):\n+ \"\"\" This is the naïve approach, we follow instruction from CMake in its documentation\n+ https://cmake.org/cmake/help/latest/manual/cmake-toolchains.7.html#cross-compiling-for-android\n+ \"\"\"\n+ # Build in the cache\n+ self.t.run('create . library/version@ --profile:host=profile_host --profile:build=default')\n+\n+ # Build locally\n+ self.t.run('install . library/version@ --profile:host=profile_host --profile:build=default')\n+ self.t.run_command('cmake . -DCMAKE_TOOLCHAIN_FILE={}'.format(CMakeToolchainBase.filename))\n+ self.t.run_command('cmake --build .')\n+\n+ def test_use_android_ndk_toolchain(self):\n+ \"\"\" Use the CMake toolchain provided by Android NDK itself\n+ https://developer.android.com/ndk/guides/cmake#command-line\n+ \"\"\"\n+ pass\n+\ndiff --git a/conans/test/integration/toolchains/android/test_using_envvars.py b/conans/test/integration/toolchains/android/test_using_envvars.py\nnew file mode 100644\nindex 00000000000..e69de29bb2d\n"
}
|
[
{
"diff_hunk": "@@ -0,0 +1,16 @@\n+from conans.client.tools import cross_building\n+from .native import CMakeNativeToolchain\n+from .android import CMakeAndroidToolchain\n+\n+\n+def CMakeToolchain(conanfile, **kwargs):\n+ if not cross_building(conanfile):\n+ return CMakeNativeToolchain(conanfile=conanfile, **kwargs)\n+ else:\n+ # Exceptions to cross-building scenarios\n+ if conanfile.settings.os == 'Windows' and conanfile.settings_build.os == 'Windows':",
"line": null,
"original_line": 11,
"original_start_line": null,
"path": "conans/client/toolchain/cmake/__init__.py",
"start_line": null,
"text": "@user1:\nwhy exceptions for Windows? anyway, this sounds like out of scope for Android, don't understand why it should be changed here\n\n@author:\nMore about it here (https://github.com/conan-io/conan/pull/7841), this PR is built on top of that one. But, specifically, comments related to the Windows exception here (reviews and comments: https://github.com/conan-io/conan/pull/7827), current Conan behavior doesn't consider a cross-building scenario when going from x64 to x86 in Windows or Linux."
},
{
"diff_hunk": "@@ -0,0 +1,81 @@\n+import shutil\n+import textwrap\n+import unittest\n+\n+from conans.test.utils.tools import TestClient\n+from ._utils import create_library\n+from conans.client.toolchain.cmake.base import CMakeToolchainBase\n+\n+class SystemToolsTestCase(unittest.TestCase):\n+ # This test assumes that 'CMake' and 'AndroidNDK' are available in the system\n+ #\n+ # Guidelines: https://developer.android.com/ndk/guides/cmake#command-line\n+\n+ @classmethod\n+ def setUpClass(cls):\n+ if not shutil.which('cmake'):\n+ raise unittest.SkipTest(\"CMake expected in PATH\")\n+ if not shutil.which('cmake'):",
"line": null,
"original_line": 18,
"original_start_line": null,
"path": "conans/test/integration/toolchains/android/test_using_cmake.py",
"start_line": null,
"text": "@user1:\nchecking the same twice?\n\n@author:\nThis is an ongoing effort. I'll remove that. My plan was to have something like `which('ndk-build')`, but it looks like it is not in the path by default."
}
] |
b08f4452c1e365d8179f678386384ceebd315aab
|
diff --git a/conans/client/toolchain/cmake/android.py b/conans/client/toolchain/cmake/android.py
index 2746e5290b2..4ca60416168 100644
--- a/conans/client/toolchain/cmake/android.py
+++ b/conans/client/toolchain/cmake/android.py
@@ -1,5 +1,93 @@
+import os
+import textwrap
+
+from conans.client.tools.files import which
+from conans.errors import ConanException
from .base import CMakeToolchainBase
class CMakeAndroidToolchain(CMakeToolchainBase):
- pass
+ _toolchain_tpl = textwrap.dedent("""
+ {% extends 'base_toolchain' %}
+
+ {% block before_try_compile %}
+ {{ super() }}
+
+ set(CMAKE_SYSTEM_NAME {{ CMAKE_SYSTEM_NAME }})
+ set(CMAKE_SYSTEM_VERSION {{ CMAKE_SYSTEM_VERSION }})
+ set(CMAKE_ANDROID_ARCH_ABI {{ CMAKE_ANDROID_ARCH_ABI }})
+ set(CMAKE_ANDROID_STL_TYPE {{ CMAKE_ANDROID_STL_TYPE }})
+ set(CMAKE_ANDROID_NDK {{ CMAKE_ANDROID_NDK }})
+ {% endblock %}
+
+ {% block main %}
+ {{ super() }}
+
+ {% if shared_libs -%}
+ message(STATUS "Conan toolchain: Setting BUILD_SHARED_LIBS= {{ shared_libs }}")
+ set(BUILD_SHARED_LIBS {{ shared_libs }})
+ {%- endif %}
+
+ {% if parallel -%}
+ set(CONAN_CXX_FLAGS "${CONAN_CXX_FLAGS} {{ parallel }}")
+ set(CONAN_C_FLAGS "${CONAN_C_FLAGS} {{ parallel }}")
+ {%- endif %}
+
+ {% if cppstd -%}
+ message(STATUS "Conan C++ Standard {{ cppstd }} with extensions {{ cppstd_extensions }}}")
+ set(CMAKE_CXX_STANDARD {{ cppstd }})
+ set(CMAKE_CXX_EXTENSIONS {{ cppstd_extensions }})
+ {%- endif %}
+
+ set(CMAKE_CXX_FLAGS_INIT "${CONAN_CXX_FLAGS}" CACHE STRING "" FORCE)
+ set(CMAKE_C_FLAGS_INIT "${CONAN_C_FLAGS}" CACHE STRING "" FORCE)
+ set(CMAKE_SHARED_LINKER_FLAGS_INIT "${CONAN_SHARED_LINKER_FLAGS}" CACHE STRING "" FORCE)
+ set(CMAKE_EXE_LINKER_FLAGS_INIT "${CONAN_EXE_LINKER_FLAGS}" CACHE STRING "" FORCE)
+ {% endblock %}
+ """)
+
+ # TODO: fPIC, fPIE
+ # TODO: RPATH, cross-compiling to Android?
+ # TODO: libcxx, only libc++ https://developer.android.com/ndk/guides/cpp-support
+
+ def __init__(self, conanfile, build_type=None, **kwargs):
+ super(CMakeAndroidToolchain, self).__init__(conanfile, build_type=build_type, **kwargs)
+ self.build_type = build_type or self._conanfile.settings.get_safe("build_type")
+
+ def _get_templates(self):
+ templates = super(CMakeAndroidToolchain, self)._get_templates()
+ templates.update({
+ CMakeToolchainBase.filename: self._toolchain_tpl,
+ })
+ return templates
+
+ def _get_android_abi(self):
+ return {"x86": "x86",
+ "x86_64": "x86_64",
+ "armv7": "armeabi-v7a",
+ "armv8": "arm64-v8a"}.get(str(self._conanfile.settings.arch))
+
+ def _get_android_stl(self):
+ libcxx_str = str(self._conanfile.settings.compiler.libcxx)
+ return libcxx_str # TODO: only 'c++_shared' y 'c++_static' supported?
+
+ def _guess_android_ndk(self):
+ # TODO: Do not use envvar! This has to be provided by the user somehow
+ android_ndk = os.getenv("CONAN_CMAKE_ANDROID_NDK")
+ if not android_ndk:
+ android_ndk = which('ndk-build')
+ android_ndk = os.path.dirname(android_ndk) if android_ndk else None
+ if not android_ndk:
+ raise ConanException('Cannot find ANDROID_NDK (ndk-build) in the PATH')
+ return android_ndk
+
+ def _get_template_context_data(self):
+ ctxt_toolchain, _ = super(CMakeAndroidToolchain, self)._get_template_context_data()
+ ctxt_toolchain.update({
+ 'CMAKE_SYSTEM_NAME': 'Android',
+ 'CMAKE_SYSTEM_VERSION': self._conanfile.settings.os.api_level,
+ 'CMAKE_ANDROID_ARCH_ABI': self._get_android_abi(),
+ 'CMAKE_ANDROID_STL_TYPE': self._get_android_stl(),
+ 'CMAKE_ANDROID_NDK': self._guess_android_ndk(),
+ })
+ return ctxt_toolchain, {}
diff --git a/conans/test/integration/toolchains/__init__.py b/conans/test/integration/toolchains/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/conans/test/integration/toolchains/android/__init__.py b/conans/test/integration/toolchains/android/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/conans/test/integration/toolchains/android/_utils.py b/conans/test/integration/toolchains/android/_utils.py
new file mode 100644
index 00000000000..216c46123fe
--- /dev/null
+++ b/conans/test/integration/toolchains/android/_utils.py
@@ -0,0 +1,38 @@
+import textwrap
+
+lib_h = textwrap.dedent("""
+ int some_function(int value);
+""")
+
+lib_cpp = textwrap.dedent("""
+ #include "lib.h"
+ #include <iostream>
+
+ int some_function(int value) {
+ std::cout << "some_function(value=" << value << ")" << std::endl;
+ return 42;
+ }
+""")
+
+cmakelists = textwrap.dedent("""
+ cmake_minimum_required(VERSION 2.8.12) # TODO: Define minimun required here
+ project(AndroidLibrary CXX)
+
+ add_library(library lib.h lib.cpp)
+ set_target_properties(library PROPERTIES PUBLIC_HEADER lib.h)
+
+ install(TARGETS library
+ RUNTIME DESTINATION bin
+ LIBRARY DESTINATION lib
+ ARCHIVE DESTINATION lib
+ PUBLIC_HEADER DESTINATION include
+ )
+""")
+
+
+def create_library(client):
+ client.save({
+ 'lib.h': lib_h,
+ 'lib.cpp': lib_cpp,
+ 'CMakeLists.txt': cmakelists
+ })
diff --git a/conans/test/integration/toolchains/android/test_using_cmake.py b/conans/test/integration/toolchains/android/test_using_cmake.py
new file mode 100644
index 00000000000..21b72f81949
--- /dev/null
+++ b/conans/test/integration/toolchains/android/test_using_cmake.py
@@ -0,0 +1,70 @@
+import textwrap
+import unittest
+
+from conans.client.toolchain.cmake.base import CMakeToolchainBase
+from conans.client.tools import which
+from conans.test.utils.tools import TestClient
+from ._utils import create_library
+
+
+class AndroidToolchainTestCase(unittest.TestCase):
+ # This test assumes that 'CMake' and 'AndroidNDK' are available in the system
+ #
+ # Guidelines: https://developer.android.com/ndk/guides/cmake#command-line
+
+ @classmethod
+ def setUpClass(cls):
+ if not which('cmake'):
+ raise unittest.SkipTest("CMake expected in PATH")
+ if not which('ndk-build'):
+ raise unittest.SkipTest("ANDROID_NDK (ndk-build) expected in PATH")
+
+ def setUp(self):
+ self.t = TestClient()
+ create_library(self.t)
+ self.t.save({
+ 'conanfile.py': textwrap.dedent("""
+ from conans import ConanFile, CMake, CMakeToolchain
+
+ class Library(ConanFile):
+ name = 'library'
+ settings = 'os', 'arch', 'compiler', 'build_type'
+ exports_sources = "CMakeLists.txt", "lib.h", "lib.cpp"
+ options = {'shared': [True, False]}
+ default_options = {'shared': False}
+
+ def toolchain(self):
+ tc = CMakeToolchain(self)
+ tc.write_toolchain_files()
+
+ def build(self):
+ cmake = CMake(self)
+ cmake.configure()
+
+ def package(self):
+ cmake = CMake(self)
+ cmake.install()
+ """),
+ 'profile_host': textwrap.dedent("""
+ [settings]
+ os=Android
+ os.api_level=23
+ arch=x86_64
+ compiler=clang
+ compiler.version=9
+ compiler.libcxx=c++_shared
+ build_type=Release
+ """)
+ })
+
+ def test_use_cmake_toolchain(self):
+ """ This is the naive approach, we follow instruction from CMake in its documentation
+ https://cmake.org/cmake/help/latest/manual/cmake-toolchains.7.html#cross-compiling-for-android
+ """
+ # Build in the cache
+ self.t.run('create . library/version@ --profile:host=profile_host --profile:build=default')
+
+ # Build locally
+ self.t.run('install . library/version@ --profile:host=profile_host --profile:build=default')
+ self.t.run_command('cmake . -DCMAKE_TOOLCHAIN_FILE={}'.format(CMakeToolchainBase.filename))
+ self.t.run_command('cmake --build .')
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
}
|
|
conan-io__conan-7600@dde5092
|
conan-io/conan
|
Python
| 7,600
|
Set cmake compile options based on language
|
Changelog: Fix: Set CMake targets compile options based on language
Docs: omit
#tags: slow
- [x] Refer to the issue that supports this Pull Request: closes #7499
- [x] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.
- [x] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've followed the PEP8 style guides for Python code.
- [x] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.
<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>
|
2020-08-26T14:03:01Z
|
[bug] Cmake targets glue C and CXX flags
### Environment Details (include every applicable attribute)
* Operating System+version: Ubuntu 20.04 in docker
* Compiler+version: GCC 9.3 and GCC 9.2 cross for arm
* Conan version: 1.28.0
* Python version: 3.8.2
### Steps to reproduce (Include if Applicable)
 1. Make a package with CXX-specific flags, e.g. -fno-exceptions, -fno-rtti
 2. Use the cmake generator with the targets approach and link that package to your target.
3. Enable languages: C and C++ and add one .c and one .cpp file
4. Compile using cmake
### Logs (Executed commands with output) (Include/Attach if Applicable)
`[build] cc1: warning: command line option '-fno-rtti' is valid for C++/D/ObjC++ but not for C`
### Possible solution:
There are lines for every target in conanbuildinfo.cmake:
```
set_property(TARGET CONAN_PKG::PACKAGE_NAME PROPERTY INTERFACE_COMPILE_OPTIONS ${CONAN_C_FLAGS_PACKAGE_NAME_LIST} ${CONAN_CXX_FLAGS_PACKAGE_NAME_LIST}
$<$<CONFIG:Release>:${CONAN_C_FLAGS_PACKAGE_NAME_RELEASE_LIST} ${CONAN_CXX_FLAGS_PACKAGE_NAME_RELEASE_LIST}>
$<$<CONFIG:RelWithDebInfo>:${CONAN_C_FLAGS_PACKAGE_NAME_RELWITHDEBINFO_LIST} ${CONAN_CXX_FLAGS_PACKAGE_NAME_RELWITHDEBINFO_LIST}>
$<$<CONFIG:MinSizeRel>:${CONAN_C_FLAGS_PACKAGE_NAME_MINSIZEREL_LIST} ${CONAN_CXX_FLAGS_PACKAGE_NAME_MINSIZEREL_LIST}>
$<$<CONFIG:Debug>:${CONAN_C_FLAGS_PACKAGE_NAME_DEBUG_LIST} ${CONAN_CXX_FLAGS_PACKAGE_NAME_DEBUG_LIST}>)
```
Those lines should be changed to use an additional generator expression, `$<COMPILE_LANGUAGE:(CXX|C)>`, e.g.
```
set_property(TARGET CONAN_PKG::PACKAGE_NAME PROPERTY INTERFACE_COMPILE_OPTIONS (
$<$<COMPILE_LANGUAGE:C>:${CONAN_C_FLAGS_PACKAGE_NAME_LIST}>
$<$<COMPILE_LANGUAGE:CXX>:${CONAN_CXX_FLAGS_PACKAGE_NAME_LIST}>
$<$<CONFIG:Release>:
$<$<COMPILE_LANGUAGE:C>:${CONAN_C_FLAGS_PACKAGE_NAME_RELEASE_LIST}>
$<$<COMPILE_LANGUAGE:CXX>:${CONAN_CXX_FLAGS_PACKAGE_NAME_RELEASE_LIST}>>
$<$<CONFIG:RelWithDebInfo>:
$<$<COMPILE_LANGUAGE:C>:${CONAN_C_FLAGS_PACKAGE_NAME_RELWITHDEBINFO_LIST}>
$<$<COMPILE_LANGUAGE:CXX>:${CONAN_CXX_FLAGS_PACKAGE_NAME_RELWITHDEBINFO_LIST}>>
$<$<CONFIG:MinSizeRel>:
$<$<COMPILE_LANGUAGE:C>:${CONAN_C_FLAGS_PACKAGE_NAME_MINSIZEREL_LIST}>
$<$<COMPILE_LANGUAGE:CXX>:${CONAN_CXX_FLAGS_PACKAGE_NAME_MINSIZEREL_LIST}>>
$<$<CONFIG:Debug>:
$<$<COMPILE_LANGUAGE:C>:${CONAN_C_FLAGS_PACKAGE_NAME_DEBUG_LIST}>
$<$<COMPILE_LANGUAGE:CXX>:${CONAN_CXX_FLAGS_PACKAGE_NAME_DEBUG_LIST}>>)
```
I haven't checked it yet, but I use this expression to set rtti flags in a cmake project and it works correctly:
```
add_compile_options($<$<COMPILE_LANGUAGE:CXX>:-fno-exceptions>)
add_compile_options($<$<COMPILE_LANGUAGE:CXX>:-fno-rtti>)
```
|
Hi @pgorgon-hem
Thanks for reporting this issue.
Just to understand the severity of the issue: is the typical effect just that warning, with the flag safely ignored by the compilers? Does it fail in some scenarios?
This might need to be changed in other generators as well. As this code section is getting complex and would be error-prone, I wonder if it could be templatized.
Hi,
The typical effect is just a warning and the flag is safely ignored by gcc, but I've only tested it with gcc 9 and the -fno-rtti and -fno-exceptions flags. I don't know how it behaves with other compilers and other flags. But I can imagine that some build parameters for C and C++ objects might differ and fail to link when the flags are set wrongly. E.g. I have an external library packaged with conan. This library is also distributed for embedded targets (compiled without RTTI), and when I link that library to my executable it can't find typeinfo for objects from the library, so I need to add -fno-rtti to my executable compilation to link it properly. Because I always try to achieve warning-free compilation, I added these flags manually in my cmake (for the whole project).
This fragment of cmake code comes from [conan/conans/client/generators/cmake_common.py:203](https://github.com/conan-io/conan/blob/74548ddf0a3eabf87cdc7f1e9e38a238ed16667a/conans/client/generators/cmake_common.py#L203). This file is included by every cmake generator, but the relevant part is used in the generate_targets_section method, which is used by CMakeGenerator and CMakeMultiGenerator, so I think it only affects the cmake and cmake-multi generators.
I thought about fixing this myself, but I have only a little experience with conan, so I can't predict all the effects of the changes.
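To make the templatizing idea mentioned above concrete, here is a minimal, hypothetical Python sketch; it is not Conan's real generator code, and names like `compile_options_property`, `uname` and the example `CONAN_PKG::zlib` target are illustrative only. It shows how the per-language, per-configuration `INTERFACE_COMPILE_OPTIONS` block could be emitted from one helper instead of being duplicated by hand:

```python
# Hypothetical sketch only (not Conan's real generator code): one helper that emits the
# per-language, per-configuration INTERFACE_COMPILE_OPTIONS block, wrapping C flags in
# $<COMPILE_LANGUAGE:C> and C++ flags in $<COMPILE_LANGUAGE:CXX>.
CONFIGS = [("Release", "RELEASE"), ("RelWithDebInfo", "RELWITHDEBINFO"),
           ("MinSizeRel", "MINSIZEREL"), ("Debug", "DEBUG")]


def compile_options_property(target_name, uname):
    """Return the set_property(...) snippet for one CONAN_PKG target."""
    lines = [
        "set_property(TARGET {t} PROPERTY INTERFACE_COMPILE_OPTIONS".format(t=target_name),
        "    $<$<COMPILE_LANGUAGE:C>:${{CONAN_C_FLAGS_{u}_LIST}}>".format(u=uname),
        "    $<$<COMPILE_LANGUAGE:CXX>:${{CONAN_CXX_FLAGS_{u}_LIST}}>".format(u=uname),
    ]
    for config, suffix in CONFIGS:
        lines.append("    $<$<CONFIG:{c}>:".format(c=config))
        lines.append("        $<$<COMPILE_LANGUAGE:C>:"
                     "${{CONAN_C_FLAGS_{u}_{s}_LIST}}>".format(u=uname, s=suffix))
        lines.append("        $<$<COMPILE_LANGUAGE:CXX>:"
                     "${{CONAN_CXX_FLAGS_{u}_{s}_LIST}}>>".format(u=uname, s=suffix))
    lines[-1] += ")"
    return "\n".join(lines)


print(compile_options_property("CONAN_PKG::zlib", "ZLIB"))
```

A helper like this could be called once per dependency by the cmake and cmake_multi generators, so the generator-expression layout only has to be maintained in one place.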
|
[
{
"body": "<!--\r\n Please don't forget to update the issue title.\r\n Include all applicable information to help us reproduce your problem.\r\n\r\n To help us debug your issue please explain:\r\n-->\r\n\r\n### Environment Details (include every applicable attribute)\r\n * Operating System+version: Ubuntu 20.04 in docker\r\n * Compiler+version: GCC 9.3 and GCC 9.2 cross for arm\r\n * Conan version: 1.28.0\r\n * Python version: 3.8.2\r\n\r\n### Steps to reproduce (Include if Applicable)\r\n 1. Make package with CXX specific flags, eg. -fno-exceptions, -fno-rtti\r\n 2. Use cmake generator with target aproach and link that package to your target.\r\n 3. Enable languages: C and C++ and add one .c and one .cpp file\r\n 4. Compile using cmake\r\n\r\n### Logs (Executed commands with output) (Include/Attach if Applicable)\r\n`[build] cc1: warning: command line option '-fno-rtti' is valid for C++/D/ObjC++ but not for C`\r\n\r\n### Possible solution:\r\nThere are lines for every target in conanbuildinfo.cmake:\r\n```\r\nset_property(TARGET CONAN_PKG::PACKAGE_NAME PROPERTY INTERFACE_COMPILE_OPTIONS ${CONAN_C_FLAGS_PACKAGE_NAME_LIST} ${CONAN_CXX_FLAGS_PACKAGE_NAME_LIST}\r\n $<$<CONFIG:Release>:${CONAN_C_FLAGS_PACKAGE_NAME_RELEASE_LIST} ${CONAN_CXX_FLAGS_PACKAGE_NAME_RELEASE_LIST}>\r\n $<$<CONFIG:RelWithDebInfo>:${CONAN_C_FLAGS_PACKAGE_NAME_RELWITHDEBINFO_LIST} ${CONAN_CXX_FLAGS_PACKAGE_NAME_RELWITHDEBINFO_LIST}>\r\n $<$<CONFIG:MinSizeRel>:${CONAN_C_FLAGS_PACKAGE_NAME_MINSIZEREL_LIST} ${CONAN_CXX_FLAGS_PACKAGE_NAME_MINSIZEREL_LIST}>\r\n $<$<CONFIG:Debug>:${CONAN_C_FLAGS_PACKAGE_NAME_DEBUG_LIST} ${CONAN_CXX_FLAGS_PACKAGE_NAME_DEBUG_LIST}>)\r\n```\r\n\r\nThose lines should be changed to use additinal generator expression: `<$<COMPILE_LANGUAGE:(CXX|C)>`, eg.\r\n\r\n```\r\nset_property(TARGET CONAN_PKG::PACKAGE_NAME PROPERTY INTERFACE_COMPILE_OPTIONS (\r\n $<$<COMPILE_LANGUAGE:C>:${CONAN_C_FLAGS_PACKAGE_NAME_LIST}> \r\n $<$<COMPILE_LANGUAGE:CXX>:${CONAN_CXX_FLAGS_PACKAGE_NAME_LIST}>\r\n $<$<CONFIG:Release>:\r\n $<$<COMPILE_LANGUAGE:C>:${CONAN_C_FLAGS_PACKAGE_NAME_RELEASE_LIST}> \r\n $<$<COMPILE_LANGUAGE:CXX>:${CONAN_CXX_FLAGS_PACKAGE_NAME_RELEASE_LIST}>>\r\n $<$<CONFIG:RelWithDebInfo>:\r\n $<$<COMPILE_LANGUAGE:C>:${CONAN_C_FLAGS_PACKAGE_NAME_RELWITHDEBINFO_LIST}> \r\n $<$<COMPILE_LANGUAGE:CXX>:${CONAN_CXX_FLAGS_PACKAGE_NAME_RELWITHDEBINFO_LIST}>>\r\n $<$<CONFIG:MinSizeRel>:\r\n $<$<COMPILE_LANGUAGE:C>:${CONAN_C_FLAGS_PACKAGE_NAME_MINSIZEREL_LIST}> \r\n $<$<COMPILE_LANGUAGE:CXX>:${CONAN_CXX_FLAGS_PACKAGE_NAME_MINSIZEREL_LIST}>>\r\n $<$<CONFIG:Debug>:\r\n $<$<COMPILE_LANGUAGE:C>:${CONAN_C_FLAGS_PACKAGE_NAME_DEBUG_LIST}> \r\n $<$<COMPILE_LANGUAGE:CXX>:${CONAN_CXX_FLAGS_PACKAGE_NAME_DEBUG_LIST}>>)\r\n```\r\n\r\nI haven't check it yet but I use this expression to set rtti flags in cmake project and it works correctly:\r\n```\r\n add_compile_options($<$<COMPILE_LANGUAGE:CXX>:-fno-exceptions>)\r\n add_compile_options($<$<COMPILE_LANGUAGE:CXX>:-fno-rtti>)\r\n```\r\n\r\n<!--\r\n Your log content should be related to the bug description, it can be:\r\n - Conan command output\r\n - Server output (Artifactory, conan_server)\r\n-->\r\n",
"number": 7499,
"title": "[bug] Cmake targets glue C and CXX flags"
}
] |
362e81ca1e4ef4d7b3aaca225c653a70e7c641ce
|
{
"head_commit": "dde5092810e731a0d3a7796b9e8d1ea3723bd464",
"head_commit_message": "fix cmake code and tests",
"patch_to_review": "diff --git a/conans/client/generators/cmake_common.py b/conans/client/generators/cmake_common.py\nindex 72c42d7f951..e95c6102b9b 100644\n--- a/conans/client/generators/cmake_common.py\n+++ b/conans/client/generators/cmake_common.py\n@@ -200,11 +200,20 @@ def cmake_global_vars(deps, build_type=\"\"):\n $<$<CONFIG:RelWithDebInfo>:${{CONAN_COMPILE_DEFINITIONS_{uname}_RELWITHDEBINFO}}>\n $<$<CONFIG:MinSizeRel>:${{CONAN_COMPILE_DEFINITIONS_{uname}_MINSIZEREL}}>\n $<$<CONFIG:Debug>:${{CONAN_COMPILE_DEFINITIONS_{uname}_DEBUG}}>)\n- set_property(TARGET {name} PROPERTY INTERFACE_COMPILE_OPTIONS ${{CONAN_C_FLAGS_{uname}_LIST}} ${{CONAN_CXX_FLAGS_{uname}_LIST}}\n- $<$<CONFIG:Release>:${{CONAN_C_FLAGS_{uname}_RELEASE_LIST}} ${{CONAN_CXX_FLAGS_{uname}_RELEASE_LIST}}>\n- $<$<CONFIG:RelWithDebInfo>:${{CONAN_C_FLAGS_{uname}_RELWITHDEBINFO_LIST}} ${{CONAN_CXX_FLAGS_{uname}_RELWITHDEBINFO_LIST}}>\n- $<$<CONFIG:MinSizeRel>:${{CONAN_C_FLAGS_{uname}_MINSIZEREL_LIST}} ${{CONAN_CXX_FLAGS_{uname}_MINSIZEREL_LIST}}>\n- $<$<CONFIG:Debug>:${{CONAN_C_FLAGS_{uname}_DEBUG_LIST}} ${{CONAN_CXX_FLAGS_{uname}_DEBUG_LIST}}>)\n+ set_property(TARGET {name} PROPERTY INTERFACE_COMPILE_OPTIONS $<$<COMPILE_LANGUAGE:C>:${{CONAN_C_FLAGS_{uname}_LIST}}>\n+ $<$<COMPILE_LANGUAGE:CXX>:${{CONAN_CXX_FLAGS_{uname}_LIST}}>\n+ $<$<CONFIG:Release>:\n+ $<$<COMPILE_LANGUAGE:C>:${{CONAN_C_FLAGS_{uname}_RELEASE_LIST}}>\n+ $<$<COMPILE_LANGUAGE:CXX>:${{CONAN_CXX_FLAGS_{uname}_RELEASE_LIST}}>>\n+ $<$<CONFIG:RelWithDebInfo>:\n+ $<$<COMPILE_LANGUAGE:C>:${{CONAN_C_FLAGS_{uname}_RELWITHDEBINFO_LIST}}>\n+ $<$<COMPILE_LANGUAGE:CXX>:${{CONAN_CXX_FLAGS_{uname}_RELWITHDEBINFO_LIST}}>>\n+ $<$<CONFIG:MinSizeRel>:\n+ $<$<COMPILE_LANGUAGE:C>:${{CONAN_C_FLAGS_{uname}_MINSIZEREL_LIST}}>\n+ $<$<COMPILE_LANGUAGE:CXX>:${{CONAN_CXX_FLAGS_{uname}_MINSIZEREL_LIST}}>>\n+ $<$<CONFIG:Debug>:\n+ $<$<COMPILE_LANGUAGE:C>:${{CONAN_C_FLAGS_{uname}_DEBUG_LIST}}>\n+ $<$<COMPILE_LANGUAGE:CXX>:${{CONAN_CXX_FLAGS_{uname}_DEBUG_LIST}}>>)\n \"\"\"\n \n \ndiff --git a/conans/client/generators/cmake_find_package.py b/conans/client/generators/cmake_find_package.py\nindex 350af11a40c..2f04a1b2adb 100644\n--- a/conans/client/generators/cmake_find_package.py\n+++ b/conans/client/generators/cmake_find_package.py\n@@ -42,7 +42,8 @@ class CMakeFindPackageGenerator(Generator):\n set_property(TARGET {name}::{name} PROPERTY INTERFACE_COMPILE_DEFINITIONS\n ${{{name}_COMPILE_DEFINITIONS}})\n set_property(TARGET {name}::{name} PROPERTY INTERFACE_COMPILE_OPTIONS\n- \"${{{name}_COMPILE_OPTIONS_LIST}}\")\n+ $<$<COMPILE_LANGUAGE:C>:${{{name}_COMPILE_OPTIONS_C}}>\n+ $<$<COMPILE_LANGUAGE:CXX>:${{{name}_COMPILE_OPTIONS_CXX}}>)\n {find_dependencies_block}\n endif()\n endif()\n@@ -96,7 +97,8 @@ class CMakeFindPackageGenerator(Generator):\n set({{ pkg_name }}_{{ comp_name }}_RES_DIRS {{ comp.res_paths }})\n set({{ pkg_name }}_{{ comp_name }}_DEFINITIONS {{ comp.defines }})\n set({{ pkg_name }}_{{ comp_name }}_COMPILE_DEFINITIONS {{ comp.compile_definitions }})\n- set({{ pkg_name }}_{{ comp_name }}_COMPILE_OPTIONS_LIST \"{{ comp.cxxflags_list }}\" \"{{ comp.cflags_list }}\")\n+ set({{ pkg_name }}_{{ comp_name }}_COMPILE_OPTIONS_C {{ comp.cflags_list }})\n+ set({{ pkg_name }}_{{ comp_name }}_COMPILE_OPTIONS_CPP {{ comp.cxxflags_list }})\n set({{ pkg_name }}_{{ comp_name }}_LIBS {{ comp.libs }})\n set({{ pkg_name }}_{{ comp_name }}_SYSTEM_LIBS {{ comp.system_libs }})\n set({{ pkg_name }}_{{ comp_name }}_FRAMEWORK_DIRS {{ comp.framework_paths }})\n@@ -181,7 +183,8 @@ class 
CMakeFindPackageGenerator(Generator):\n set_target_properties({{ pkg_name }}::{{ comp_name }} PROPERTIES INTERFACE_COMPILE_DEFINITIONS\n \"{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_DEFINITIONS}' }}\")\n set_target_properties({{ pkg_name }}::{{ comp_name }} PROPERTIES INTERFACE_COMPILE_OPTIONS\n- \"{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_LIST}' }}\")\n+ \"$<$<COMPILE_LANGUAGE:C>:{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_C}' }}>\n+ $<$<COMPILE_LANGUAGE:CXX>:{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_CXX}' }}>\")\n endif()\n endif()\n \n@@ -266,7 +269,7 @@ def _find_for_dep(self, pkg_name, pkg_findname, pkg_filename, cpp_info):\n pkg_public_deps_filenames = [self._get_filename(self.deps_build_info[public_dep])\n for public_dep in cpp_info.public_deps]\n pkg_public_deps_names = [self._get_name(self.deps_build_info[public_dep])\n- for public_dep in cpp_info.public_deps]\n+ for public_dep in cpp_info.public_deps]\n deps_names = \";\".join([\"{n}::{n}\".format(n=n) for n in pkg_public_deps_names])\n if cpp_info.components:\n components = self._get_components(pkg_name, pkg_findname, cpp_info)\ndiff --git a/conans/client/generators/cmake_find_package_common.py b/conans/client/generators/cmake_find_package_common.py\nindex c4af23b4aca..deb6836df4f 100644\n--- a/conans/client/generators/cmake_find_package_common.py\n+++ b/conans/client/generators/cmake_find_package_common.py\n@@ -15,6 +15,8 @@\n )\n set({name}_COMPILE_DEFINITIONS{build_type_suffix} {deps.compile_definitions})\n set({name}_COMPILE_OPTIONS{build_type_suffix}_LIST \"{deps.cxxflags_list}\" \"{deps.cflags_list}\")\n+set({name}_COMPILE_OPTIONS_C{build_type_suffix} \"{deps.cflags_list}\")\n+set({name}_COMPILE_OPTIONS_CXX{build_type_suffix} \"{deps.cxxflags_list}\")\n set({name}_LIBRARIES_TARGETS{build_type_suffix} \"\") # Will be filled later, if CMake 3\n set({name}_LIBRARIES{build_type_suffix} \"\") # Will be filled later\n set({name}_LIBS{build_type_suffix} \"\") # Same as {name}_LIBRARIES\ndiff --git a/conans/client/generators/cmake_find_package_multi.py b/conans/client/generators/cmake_find_package_multi.py\nindex 1576f558f0b..bb852eb7198 100644\n--- a/conans/client/generators/cmake_find_package_multi.py\n+++ b/conans/client/generators/cmake_find_package_multi.py\n@@ -64,10 +64,18 @@ class CMakeFindPackageMultiGenerator(CMakeFindPackageGenerator):\n $<$<CONFIG:Debug>:${{{name}_COMPILE_DEFINITIONS_DEBUG}}>)\n set_property(TARGET {name}::{name}\n PROPERTY INTERFACE_COMPILE_OPTIONS\n- $<$<CONFIG:Release>:${{{name}_COMPILE_OPTIONS_RELEASE_LIST}}>\n- $<$<CONFIG:RelWithDebInfo>:${{{name}_COMPILE_OPTIONS_RELWITHDEBINFO_LIST}}>\n- $<$<CONFIG:MinSizeRel>:${{{name}_COMPILE_OPTIONS_MINSIZEREL_LIST}}>\n- $<$<CONFIG:Debug>:${{{name}_COMPILE_OPTIONS_DEBUG_LIST}}>)\n+ $<$<CONFIG:Release>:\n+ $<$<COMPILE_LANGUAGE:C>:${{{name}_COMPILE_OPTIONS_C_RELEASE}}>\n+ $<$<COMPILE_LANGUAGE:CXX>:${{{name}_COMPILE_OPTIONS_CXX_RELEASE}}>>\n+ $<$<CONFIG:RelWithDebInfo>:\n+ $<$<COMPILE_LANGUAGE:C>:${{{name}_COMPILE_OPTIONS_C_RELWITHDEBINFO}}>\n+ $<$<COMPILE_LANGUAGE:CXX>:${{{name}_COMPILE_OPTIONS_CXX_RELWITHDEBINFO}}>>\n+ $<$<CONFIG:MinSizeRel>:\n+ $<$<COMPILE_LANGUAGE:C>:${{{name}_COMPILE_OPTIONS_C_MINSIZEREL}}>\n+ $<$<COMPILE_LANGUAGE:CXX>:${{{name}_COMPILE_OPTIONS_CXX_MINSIZEREL}}>>\n+ $<$<CONFIG:Debug>:\n+ $<$<COMPILE_LANGUAGE:C>:${{{name}_COMPILE_OPTIONS_C_DEBUG}}>\n+ $<$<COMPILE_LANGUAGE:CXX>:${{{name}_COMPILE_OPTIONS_CXX_DEBUG}}>>)\n \"\"\"\n \n # 
https://gitlab.kitware.com/cmake/cmake/blob/master/Modules/BasicConfigVersion-SameMajorVersion.cmake.in\n@@ -119,7 +127,8 @@ class CMakeFindPackageMultiGenerator(CMakeFindPackageGenerator):\n set({{ pkg_name }}_{{ comp_name }}_RES_DIRS_{{ build_type }} {{ comp.res_paths }})\n set({{ pkg_name }}_{{ comp_name }}_DEFINITIONS_{{ build_type }} {{ comp.defines }})\n set({{ pkg_name }}_{{ comp_name }}_COMPILE_DEFINITIONS_{{ build_type }} {{ comp.compile_definitions }})\n- set({{ pkg_name }}_{{ comp_name }}_COMPILE_OPTIONS_LIST_{{ build_type }} \"{{ comp.cxxflags_list }}\" \"{{ comp.cflags_list }}\")\n+ set({{ pkg_name }}_{{ comp_name }}_COMPILE_OPTIONS_C_{{ build_type }} \"{{ comp.cflags_list }}\")\n+ set({{ pkg_name }}_{{ comp_name }}_COMPILE_OPTIONS_CXX_{{ build_type }} \"{{ comp.cxxflags_list }}\")\n set({{ pkg_name }}_{{ comp_name }}_LIBS_{{ build_type }} {{ comp.libs }})\n set({{ pkg_name }}_{{ comp_name }}_SYSTEM_LIBS_{{ build_type }} {{ comp.system_libs }})\n set({{ pkg_name }}_{{ comp_name }}_FRAMEWORK_DIRS_{{ build_type }} {{ comp.framework_paths }})\n@@ -239,10 +248,18 @@ class CMakeFindPackageMultiGenerator(CMakeFindPackageGenerator):\n $<$<CONFIG:MinSizeRel>:{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_DEFINITIONS_MINSIZEREL}' }}>\n $<$<CONFIG:Debug>:{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_DEFINITIONS_DEBUG}' }}>)\n set_property(TARGET {{ pkg_name }}::{{ comp_name }} PROPERTY INTERFACE_COMPILE_OPTIONS\n- $<$<CONFIG:Release>:{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_LIST_RELEASE}' }}>\n- $<$<CONFIG:RelWithDebInfo>:{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_LIST_RELWITHDEBINFO}' }}>\n- $<$<CONFIG:MinSizeRel>:{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_LIST_MINSIZEREL}' }}>\n- $<$<CONFIG:Debug>:{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_LIST_DEBUG}' }}>)\n+ $<$<CONFIG:Release>:\n+ $<$<COMPILE_LANGUAGE:C>:{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_C_RELEASE}' }}>\n+ $<$<COMPILE_LANGUAGE:CXX>:{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_CXX_RELEASE}' }}>>\n+ $<$<CONFIG:RelWithDebInfo>:\n+ $<$<COMPILE_LANGUAGE:C>:{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_C_RELWITHDEBINFO}' }}>\n+ $<$<COMPILE_LANGUAGE:CXX>:{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_CXX_RELWITHDEBINFO}' }}>>\n+ $<$<CONFIG:MinSizeRel>:\n+ $<$<COMPILE_LANGUAGE:C>:{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_C_MINSIZEREL}' }}>\n+ $<$<COMPILE_LANGUAGE:CXX>:{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_CXX_MINSIZEREL}' }}>>\n+ $<$<CONFIG:Debug>:\n+ $<$<COMPILE_LANGUAGE:C>:{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_C_DEBUG}' }}>\n+ $<$<COMPILE_LANGUAGE:CXX>:{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_CXX_DEBUG}' }}>>)\n set({{ pkg_name }}_{{ comp_name }}_TARGET_PROPERTIES TRUE)\n \n {%- endfor %}\ndiff --git a/conans/test/functional/build_helpers/cmake_flags_test.py b/conans/test/functional/build_helpers/cmake_flags_test.py\nindex 8309af28cb2..da2e3c6e25f 100644\n--- a/conans/test/functional/build_helpers/cmake_flags_test.py\n+++ b/conans/test/functional/build_helpers/cmake_flags_test.py\n@@ -262,12 +262,18 @@ def transitive_targets_flags_test(self):\n self.assertNotIn(\"My\", cmake_cxx_flags)\n self.assertIn(\"CONAN_CXX_FLAGS=MyFlag1 MyFlag2 MyChatFlag1 MyChatFlag2\",\n client.out)\n- self.assertIn(\"HELLO_CXX_FLAGS=-load;C:\\some\\path;MyFlag1;MyFlag2;\"\n- \"$<$<CONFIG:Release>:;>;$<$<CONFIG:RelWithDebInfo>:;>;\"\n- \"$<$<CONFIG:MinSizeRel>:;>;$<$<CONFIG:Debug>:;>\", client.out)\n- self.assertIn(\"CHAT_CXX_FLAGS=MyChatFlag1;MyChatFlag2;\"\n- 
\"$<$<CONFIG:Release>:;>;$<$<CONFIG:RelWithDebInfo>:;>;\"\n- \"$<$<CONFIG:MinSizeRel>:;>;$<$<CONFIG:Debug>:;>\", client.out)\n+ self.assertIn(\"HELLO_CXX_FLAGS=$<$<COMPILE_LANGUAGE:C>:-load;C:\\some\\path>;$<$<COMPILE_LANGUAGE:CXX>:MyFlag1;MyFlag2>;\"\n+ \"$<$<CONFIG:Release>:;$<$<COMPILE_LANGUAGE:C>:>;$<$<COMPILE_LANGUAGE:CXX>:>>;\"\n+ \"$<$<CONFIG:RelWithDebInfo>:;$<$<COMPILE_LANGUAGE:C>:>;$<$<COMPILE_LANGUAGE:CXX>:>>;\"\n+ \"$<$<CONFIG:MinSizeRel>:;$<$<COMPILE_LANGUAGE:C>:>;$<$<COMPILE_LANGUAGE:CXX>:>>;\"\n+ \"$<$<CONFIG:Debug>:;$<$<COMPILE_LANGUAGE:C>:>;$<$<COMPILE_LANGUAGE:CXX>:>>\",\n+ client.out)\n+ self.assertIn(\"CHAT_CXX_FLAGS=$<$<COMPILE_LANGUAGE:C>:>;$<$<COMPILE_LANGUAGE:CXX>:MyChatFlag1;MyChatFlag2;>;\"\n+ \"$<$<CONFIG:Release>:;$<$<COMPILE_LANGUAGE:C>:>;$<$<COMPILE_LANGUAGE:CXX>:>>;\"\n+ \"$<$<CONFIG:RelWithDebInfo>:;$<$<COMPILE_LANGUAGE:C>:>;$<$<COMPILE_LANGUAGE:CXX>:>>;\"\n+ \"$<$<CONFIG:MinSizeRel>:;$<$<COMPILE_LANGUAGE:C>:>;$<$<COMPILE_LANGUAGE:CXX>:>>;\"\n+ \"$<$<CONFIG:Debug>:;$<$<COMPILE_LANGUAGE:C>:>;$<$<COMPILE_LANGUAGE:CXX>:>>\",\n+ client.out)\n self.assertIn('HELLO_DEFINES=MY_DEF=My\" \\string;MY_DEF2=My${} other \\string;', client.out)\n \n def cmake_test_needed_settings(self):\ndiff --git a/conans/test/functional/generators/cmake_find_package_test.py b/conans/test/functional/generators/cmake_find_package_test.py\nindex 1412c66fa00..beaa9d32714 100644\n--- a/conans/test/functional/generators/cmake_find_package_test.py\n+++ b/conans/test/functional/generators/cmake_find_package_test.py\n@@ -68,7 +68,8 @@ def build(self):\n \"$<$<STREQUAL:$<TARGET_PROPERTY:TYPE>,SHARED_LIBRARY>:shared_link_flag>;\"\n \"$<$<STREQUAL:$<TARGET_PROPERTY:TYPE>,MODULE_LIBRARY>:shared_link_flag>;\"\n \"$<$<STREQUAL:$<TARGET_PROPERTY:TYPE>,EXECUTABLE>:>\", client.out)\n- self.assertIn(\"Compile options: a_cxx_flag;a_flag\", client.out)\n+ self.assertIn(\"Compile options: $<$<COMPILE_LANGUAGE:C>:a_flag>;$<$<COMPILE_LANGUAGE:CXX>:a_cxx_flag>\",\n+ client.out)\n \n def cmake_lock_target_redefinition_test(self):\n client = TestClient()\n@@ -486,14 +487,6 @@ def build(self):\n client.out)\n \n def cpp_info_filename_test(self):\n- def add_to_conan_file(after, add_lines, spaces_to_indent):\n- indent = '\\n' + (' ' * spaces_to_indent)\n- replace = indent.join([after] + add_lines)\n- replace_in_file(os.path.join(client.current_folder, \"conanfile.py\"),\n- after,\n- replace,\n- output=client.out)\n-\n client = TestClient()\n client.run(\"new hello/1.0 -s\")\n indent = '\\n '\n@@ -562,10 +555,6 @@ def build(self):\n client.run(\"install .\")\n client.run(\"build .\")\n \n- print('~' * 120)\n- print(client.out)\n- print('~' * 120)\n-\n self.assertIn('Found MYHELLO2: 1.0 (found version \"1.0\")', client.out)\n self.assertIn('Found MYHELLO: 1.0 (found version \"1.0\")', client.out)\n self.assertIn(\"Target libs (hello2): \"\ndiff --git a/conans/test/functional/generators/cmake_test.py b/conans/test/functional/generators/cmake_test.py\nindex 4ca405247ec..b66547a00ce 100644\n--- a/conans/test/functional/generators/cmake_test.py\n+++ b/conans/test/functional/generators/cmake_test.py\n@@ -380,3 +380,51 @@ def build(self):\n \n client.run('create .')\n self.assertIn(\"POLICY CMP0054 IS OLD\", client.out)\n+\n+ def test_cmake_compile_options(self):\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+\n+ class ConanTest(ConanFile):\n+ name = \"flags\"\n+ version = \"1.0\"\n+ generators = \"cmake\"\n+ settings = \"os\", \"compiler\", \"arch\", \"build_type\"\n+\n+ def package_info(self):\n+ 
self.cpp_info.cflags = [\"-fno-asm\"]\n+ self.cpp_info.cxxflags = [\"-fno-exceptions\", \"-fno-rtti\"]\n+ \"\"\")\n+\n+ client = TestClient()\n+ client.save({\"conanfile.py\": conanfile})\n+ client.run(\"create conanfile.py\")\n+\n+ conanfile_consumer = textwrap.dedent(\"\"\"\n+ from conans import ConanFile, CMake\n+\n+ class ConanTest(ConanFile):\n+ name = \"consumer\"\n+ version = \"1.0\"\n+ generators = \"cmake\"\n+ settings = \"os\", \"compiler\", \"arch\", \"build_type\"\n+ requires = \"flags/1.0\"\n+ exports_sources = \"CMakeLists.txt\"\n+\n+ def build(self):\n+ cmake = CMake(self)\n+ cmake.configure()\n+ \"\"\")\n+ cmakelists = textwrap.dedent(\"\"\"\n+ cmake_minimum_required(VERSION 2.8)\n+ PROJECT(consumer C CXX)\n+ include(conanbuildinfo.cmake)\n+ CONAN_BASIC_SETUP(TARGETS)\n+ get_target_property(opts CONAN_PKG::flags INTERFACE_COMPILE_OPTIONS)\n+ message(\"${opts}\")\n+ \"\"\")\n+ client.save({\"conanfile.py\": conanfile_consumer,\n+ \"CMakeLists.txt\": cmakelists})\n+ client.run('create .')\n+ self.assertIn(\"$<$<COMPILE_LANGUAGE:C>:-fno-asm>;$<$<COMPILE_LANGUAGE:CXX>:-fno-exceptions;-fno-rtti>;\",\n+ client.out)\n"
}
|
[
{
"diff_hunk": "@@ -486,14 +487,6 @@ def build(self):\n client.out)\n \n def cpp_info_filename_test(self):\n- def add_to_conan_file(after, add_lines, spaces_to_indent):",
"line": null,
"original_line": 489,
"original_start_line": null,
"path": "conans/test/functional/generators/cmake_find_package_test.py",
"start_line": null,
"text": "@author:\nremoved code not used"
}
] |
5ac94165d1d387a6d3744a7a7f7734198de7e684
|
diff --git a/conans/client/generators/cmake_find_package.py b/conans/client/generators/cmake_find_package.py
index 4ba8b001d8a..800ecf5d527 100644
--- a/conans/client/generators/cmake_find_package.py
+++ b/conans/client/generators/cmake_find_package.py
@@ -97,7 +97,8 @@ class CMakeFindPackageGenerator(Generator):
set({{ pkg_name }}_{{ comp_name }}_RES_DIRS {{ comp.res_paths }})
set({{ pkg_name }}_{{ comp_name }}_DEFINITIONS {{ comp.defines }})
set({{ pkg_name }}_{{ comp_name }}_COMPILE_DEFINITIONS {{ comp.compile_definitions }})
- set({{ pkg_name }}_{{ comp_name }}_COMPILE_OPTIONS_LIST "{{ comp.cxxflags_list }}" "{{ comp.cflags_list }}")
+ set({{ pkg_name }}_{{ comp_name }}_COMPILE_OPTIONS_C "{{ comp.cflags_list }}")
+ set({{ pkg_name }}_{{ comp_name }}_COMPILE_OPTIONS_CXX "{{ comp.cxxflags_list }}")
set({{ pkg_name }}_{{ comp_name }}_LIBS {{ comp.libs }})
set({{ pkg_name }}_{{ comp_name }}_SYSTEM_LIBS {{ comp.system_libs }})
set({{ pkg_name }}_{{ comp_name }}_FRAMEWORK_DIRS {{ comp.framework_paths }})
@@ -182,7 +183,7 @@ class CMakeFindPackageGenerator(Generator):
set_target_properties({{ pkg_name }}::{{ comp_name }} PROPERTIES INTERFACE_COMPILE_DEFINITIONS
"{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_DEFINITIONS}' }}")
set_target_properties({{ pkg_name }}::{{ comp_name }} PROPERTIES INTERFACE_COMPILE_OPTIONS
- "{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_LIST}' }}")
+ "{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_C}' }};{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_CXX}' }}")
endif()
endif()
@@ -267,7 +268,7 @@ def _find_for_dep(self, pkg_name, pkg_findname, pkg_filename, cpp_info):
pkg_public_deps_filenames = [self._get_filename(self.deps_build_info[public_dep])
for public_dep in cpp_info.public_deps]
pkg_public_deps_names = [self._get_name(self.deps_build_info[public_dep])
- for public_dep in cpp_info.public_deps]
+ for public_dep in cpp_info.public_deps]
deps_names = ";".join(["{n}::{n}".format(n=n) for n in pkg_public_deps_names])
if cpp_info.components:
components = self._get_components(pkg_name, pkg_findname, cpp_info)
diff --git a/conans/client/generators/cmake_find_package_common.py b/conans/client/generators/cmake_find_package_common.py
index 6d257c30b74..f3a61ba13dc 100644
--- a/conans/client/generators/cmake_find_package_common.py
+++ b/conans/client/generators/cmake_find_package_common.py
@@ -15,6 +15,8 @@
)
set({name}_COMPILE_DEFINITIONS{build_type_suffix} {deps.compile_definitions})
set({name}_COMPILE_OPTIONS{build_type_suffix}_LIST "{deps.cxxflags_list}" "{deps.cflags_list}")
+set({name}_COMPILE_OPTIONS_C{build_type_suffix} "{deps.cflags_list}")
+set({name}_COMPILE_OPTIONS_CXX{build_type_suffix} "{deps.cxxflags_list}")
set({name}_LIBRARIES_TARGETS{build_type_suffix} "") # Will be filled later, if CMake 3
set({name}_LIBRARIES{build_type_suffix} "") # Will be filled later
set({name}_LIBS{build_type_suffix} "") # Same as {name}_LIBRARIES
diff --git a/conans/client/generators/cmake_find_package_multi.py b/conans/client/generators/cmake_find_package_multi.py
index 96b4ebd7937..ddf1610d602 100644
--- a/conans/client/generators/cmake_find_package_multi.py
+++ b/conans/client/generators/cmake_find_package_multi.py
@@ -119,7 +119,8 @@ class CMakeFindPackageMultiGenerator(CMakeFindPackageGenerator):
set({{ pkg_name }}_{{ comp_name }}_RES_DIRS_{{ build_type }} {{ comp.res_paths }})
set({{ pkg_name }}_{{ comp_name }}_DEFINITIONS_{{ build_type }} {{ comp.defines }})
set({{ pkg_name }}_{{ comp_name }}_COMPILE_DEFINITIONS_{{ build_type }} {{ comp.compile_definitions }})
- set({{ pkg_name }}_{{ comp_name }}_COMPILE_OPTIONS_LIST_{{ build_type }} "{{ comp.cxxflags_list }}" "{{ comp.cflags_list }}")
+ set({{ pkg_name }}_{{ comp_name }}_COMPILE_OPTIONS_C_{{ build_type }} "{{ comp.cflags_list }}")
+ set({{ pkg_name }}_{{ comp_name }}_COMPILE_OPTIONS_CXX_{{ build_type }} "{{ comp.cxxflags_list }}")
set({{ pkg_name }}_{{ comp_name }}_LIBS_{{ build_type }} {{ comp.libs }})
set({{ pkg_name }}_{{ comp_name }}_SYSTEM_LIBS_{{ build_type }} {{ comp.system_libs }})
set({{ pkg_name }}_{{ comp_name }}_FRAMEWORK_DIRS_{{ build_type }} {{ comp.framework_paths }})
@@ -239,10 +240,18 @@ class CMakeFindPackageMultiGenerator(CMakeFindPackageGenerator):
$<$<CONFIG:MinSizeRel>:{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_DEFINITIONS_MINSIZEREL}' }}>
$<$<CONFIG:Debug>:{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_DEFINITIONS_DEBUG}' }}>)
set_property(TARGET {{ pkg_name }}::{{ comp_name }} PROPERTY INTERFACE_COMPILE_OPTIONS
- $<$<CONFIG:Release>:{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_LIST_RELEASE}' }}>
- $<$<CONFIG:RelWithDebInfo>:{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_LIST_RELWITHDEBINFO}' }}>
- $<$<CONFIG:MinSizeRel>:{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_LIST_MINSIZEREL}' }}>
- $<$<CONFIG:Debug>:{{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_LIST_DEBUG}' }}>)
+ $<$<CONFIG:Release>:
+ {{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_C_RELEASE}' }}
+ {{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_CXX_RELEASE}' }}>
+ $<$<CONFIG:RelWithDebInfo>:
+ {{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_C_RELWITHDEBINFO}' }}
+ {{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_CXX_RELWITHDEBINFO}' }}>
+ $<$<CONFIG:MinSizeRel>:
+ {{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_C_MINSIZEREL}' }}
+ {{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_CXX_MINSIZEREL}' }}>
+ $<$<CONFIG:Debug>:
+ {{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_C_DEBUG}' }}
+ {{ '${'+pkg_name+'_'+comp_name+'_COMPILE_OPTIONS_CXX_DEBUG}' }}>)
set({{ pkg_name }}_{{ comp_name }}_TARGET_PROPERTIES TRUE)
{%- endfor %}
diff --git a/conans/test/functional/generators/cmake_test.py b/conans/test/functional/generators/cmake_test.py
index 4ca405247ec..3dcedc6a19d 100644
--- a/conans/test/functional/generators/cmake_test.py
+++ b/conans/test/functional/generators/cmake_test.py
@@ -380,3 +380,125 @@ def build(self):
client.run('create .')
self.assertIn("POLICY CMP0054 IS OLD", client.out)
+
+ def do_not_mix_cflags_cxxflags_test(self):
+ client = TestClient()
+
+ def run_test(consumer_generator, consumer_cmakelists, with_components=True):
+
+ def generate_files(upstream_cpp_info, consumer_generator, consumer_cmakelists):
+ upstream_conanfile = GenConanfile().with_name("upstream").with_version("1.0")\
+ .with_package_info(cpp_info=upstream_cpp_info, env_info={})
+ client.save({"conanfile.py": upstream_conanfile}, clean_first=True)
+ client.run("create .")
+ consumer_conanfile = textwrap.dedent("""
+ from conans import ConanFile, CMake
+
+ class Consumer(ConanFile):
+ name = "consumer"
+ version = "1.0"
+ settings = "os", "compiler", "arch", "build_type"
+ exports_sources = "CMakeLists.txt"
+ requires = "upstream/1.0"
+ generators = "{}"
+
+ def build(self):
+ cmake = CMake(self)
+ cmake.configure()
+ """)
+ client.save({"conanfile.py": consumer_conanfile.format(consumer_generator),
+ "CMakeLists.txt": consumer_cmakelists})
+ client.run("create .")
+
+ if consumer_generator in ["cmake_find_package", "cmake_find_package_multi"]:
+ if with_components:
+ cpp_info = {"components": {"comp": {"cflags": ["one", "two"],
+ "cxxflags": ["three", "four"]}}}
+ else:
+ cpp_info = {"cflags": ["one", "two"], "cxxflags": ["three", "four"]}
+ generate_files(cpp_info, consumer_generator, consumer_cmakelists)
+ self.assertIn("compile options: three;four;one;two", client.out)
+ self.assertIn("cflags: one;two", client.out)
+ self.assertIn("cxxflags: three;four", client.out)
+ if with_components:
+ self.assertIn("comp cflags: one;two", client.out)
+ self.assertIn("comp cxxflags: three;four", client.out)
+ if consumer_generator == "cmake_find_package":
+ self.assertIn("comp compile options: one;two;three;four", client.out)
+ else:
+ self.assertIn("$<$<CONFIG:Release>:;one;two;three;four>;"
+ "$<$<CONFIG:RelWithDebInfo>:;>;"
+ "$<$<CONFIG:MinSizeRel>:;>;"
+ "$<$<CONFIG:Debug>:;>", client.out)
+ else:
+ generate_files({"cflags": ["one", "two"], "cxxflags": ["three", "four"]},
+ consumer_generator, consumer_cmakelists)
+ self.assertIn("global cflags: one two", client.out)
+ self.assertIn("global cxxflags: three four", client.out)
+ self.assertIn("upstream cflags: one two", client.out)
+ self.assertIn("upstream cxxflags: three four", client.out)
+
+ # Test cmake generator
+ cmakelists = textwrap.dedent("""
+ cmake_minimum_required(VERSION 2.8)
+ set(CMAKE_CXX_COMPILER_WORKS 1)
+ project(consumer)
+ include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
+ message("global cflags: ${CONAN_C_FLAGS}")
+ message("global cxxflags: ${CONAN_CXX_FLAGS}")
+ message("upstream cflags: ${CONAN_C_FLAGS_UPSTREAM}")
+ message("upstream cxxflags: ${CONAN_CXX_FLAGS_UPSTREAM}")
+ """)
+ run_test("cmake", cmakelists)
+
+ # Test cmake_multi generator
+ cmakelists = textwrap.dedent("""
+ cmake_minimum_required(VERSION 2.8)
+ set(CMAKE_CXX_COMPILER_WORKS 1)
+ project(consumer)
+ include(${CMAKE_BINARY_DIR}/conanbuildinfo_multi.cmake)
+ message("global cflags: ${CONAN_C_FLAGS_RELEASE}")
+ message("global cxxflags: ${CONAN_CXX_FLAGS_RELEASE}")
+ message("upstream cflags: ${CONAN_C_FLAGS_UPSTREAM_RELEASE}")
+ message("upstream cxxflags: ${CONAN_CXX_FLAGS_UPSTREAM_RELEASE}")
+ """)
+ run_test("cmake_multi", cmakelists)
+
+ # Test cmake_find_package generator
+ cmakelists = textwrap.dedent("""
+ cmake_minimum_required(VERSION 2.8)
+ set(CMAKE_CXX_COMPILER_WORKS 1)
+ project(consumer)
+ find_package(upstream)
+ message("compile options: ${upstream_COMPILE_OPTIONS_LIST}")
+ message("cflags: ${upstream_COMPILE_OPTIONS_C}")
+ message("cxxflags: ${upstream_COMPILE_OPTIONS_CXX}")
+ message("comp cflags: ${upstream_comp_COMPILE_OPTIONS_C}")
+ message("comp cxxflags: ${upstream_comp_COMPILE_OPTIONS_CXX}")
+ get_target_property(tmp upstream::comp INTERFACE_COMPILE_OPTIONS)
+ message("comp compile options: ${tmp}")
+ """)
+ run_test("cmake_find_package", cmakelists)
+ print(client.out)
+
+ # Test cmake_find_package generator without components
+ run_test("cmake_find_package", cmakelists, with_components=False)
+
+ # Test cmake_find_package_multi generator
+ cmakelists = textwrap.dedent("""
+ cmake_minimum_required(VERSION 2.8)
+ set(CMAKE_CXX_COMPILER_WORKS 1)
+ project(consumer)
+ find_package(upstream)
+ message("compile options: ${upstream_COMPILE_OPTIONS_RELEASE_LIST}")
+ message("cflags: ${upstream_COMPILE_OPTIONS_C_RELEASE}")
+ message("cxxflags: ${upstream_COMPILE_OPTIONS_CXX_RELEASE}")
+ message("comp cflags: ${upstream_comp_COMPILE_OPTIONS_C_RELEASE}")
+ message("comp cxxflags: ${upstream_comp_COMPILE_OPTIONS_CXX_RELEASE}")
+ get_target_property(tmp upstream::comp INTERFACE_COMPILE_OPTIONS)
+ message("comp compile options: ${tmp}")
+ """)
+ run_test("cmake_find_package_multi", cmakelists)
+
+ # Test cmake_find_package_multi generator without components
+ run_test("cmake_find_package_multi", cmakelists, with_components=False)
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-7394@488d857
|
conan-io/conan
|
Python
| 7,394
|
Fix Conan V2 cli broken help
|
Changelog: Fix: Fixing `--help` for commands in proposal for command line v2.0.
Docs: omit
Closes: #7366
- [X] Refer to the issue that supports this Pull Request.
- [X] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.
- [X] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [X] I've followed the PEP8 style guides for Python code.
- [ ] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.
<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>
|
2020-07-21T12:21:53Z
|
[bug] Conan V2 cli broken help
```bash
$ conan --help # broken
$ conan help # works
$ conan help search # broken
$ conan search --help # works
```
I am fine with removing the ``conan help`` approach (the --help is more standard).
I would also name the groups like ``Misc`` or ``Consumer``, and let the framework add the `` commands`` part in the output.
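As background for why the fix (visible in the merged patch later in this record) routes help through each command's stored parser, here is a purely illustrative, generic argparse sketch; the class, command names and descriptions are hypothetical, not Conan's real CLI code:

```python
# Generic argparse illustration with hypothetical names (not Conan's actual CLI code).
# The point: a "help <command>" subcommand can only print useful output if it can reach
# the parser object of the command it describes.
import argparse


class Command(object):
    def __init__(self, name, doc):
        self.parser = argparse.ArgumentParser(prog="conan " + name, description=doc)


commands = {
    "search": Command("search", "Searches for package recipes whose name contains a pattern"),
    "install": Command("install", "Installs the requirements specified in a recipe"),
}


def run_help(args):
    if args and args[0] in commands:
        # 'conan help search' reuses search's own parser, same output as 'conan search --help'
        commands[args[0]].parser.print_help()
    else:
        # bare 'conan help' (or 'conan --help'): list the known commands
        for name, cmd in sorted(commands.items()):
            print("{:10} {}".format(name, cmd.parser.description))


run_help(["search"])
run_help([])
```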
|
[
{
"body": "```bash\r\n$ conan --help # broken\r\n$ conan help # works\r\n$ conan help search # broken\r\n$ conan search --help # works\r\n```\r\n\r\nI am fine with removing the ``conan help`` approach (the --help is more standard). \r\n\r\nI would also name the groups like ``Misc`` or ``Consumer``, and let the framework add the `` commands`` part in the output.\r\n\r\n",
"number": 7366,
"title": "[bug] Conan V2 cli broken help"
}
] |
0c63663d731b6800336b1ce2c6fe26e19a9a375d
|
{
"head_commit": "488d8573275828929fccea2cd865359611201a8d",
"head_commit_message": "dont test --help",
"patch_to_review": "diff --git a/conans/cli/cli.py b/conans/cli/cli.py\nindex b9a79b56302..abe850714aa 100644\n--- a/conans/cli/cli.py\n+++ b/conans/cli/cli.py\n@@ -81,13 +81,15 @@ def __init__(self, conan_api):\n self._commands = {}\n conan_commands_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"commands\")\n for module in pkgutil.iter_modules([conan_commands_path]):\n- self._add_command(\"conans.cli.commands.{}\".format(module.name), module.name)\n+ module_name = module[1]\n+ self._add_command(\"conans.cli.commands.{}\".format(module_name), module_name)\n if get_env(\"CONAN_USER_COMMANDS\", default=False):\n user_commands_path = os.path.join(self._conan_api.cache_folder, \"commands\")\n sys.path.append(user_commands_path)\n for module in pkgutil.iter_modules([user_commands_path]):\n- if module.name.startswith(\"cmd_\"):\n- self._add_command(module.name, module.name.replace(\"cmd_\", \"\"))\n+ module_name = module[1]\n+ if module_name.startswith(\"cmd_\"):\n+ self._add_command(module_name, module_name.replace(\"cmd_\", \"\"))\n \n def _add_command(self, import_path, method_name):\n try:\n@@ -131,7 +133,8 @@ def _print_similar(self, command):\n self._out.writeln(\"\")\n \n def help_message(self):\n- self.commands[\"help\"].method(self.conan_api, self.commands, self.groups)\n+ self.commands[\"help\"].method(conan_api=self.conan_api, parser=self.commands[\"help\"].parser,\n+ commands=self.commands, groups=self.groups)\n \n def run(self, *args):\n \"\"\" Entry point for executing commands, dispatcher to class\n@@ -164,7 +167,8 @@ def run(self, *args):\n self._print_similar(command_argument)\n raise ConanException(\"Unknown command %s\" % str(exc))\n \n- command.run(self.conan_api, args[0][1:], parser=self.commands[command_argument].parser,\n+ command.run(args[0][1:], conan_api=self.conan_api,\n+ parser=self.commands[command_argument].parser,\n commands=self.commands, groups=self.groups)\n \n return SUCCESS\ndiff --git a/conans/cli/command.py b/conans/cli/command.py\nindex b1940087f17..387f96676c2 100644\n--- a/conans/cli/command.py\n+++ b/conans/cli/command.py\n@@ -34,7 +34,7 @@ def __init__(self, method, group, formatters=None):\n self._parser.add_argument('-o', '--output', default=default_output, choices=formatters_list,\n action=OnceArgument, help=self._output_help_message)\n \n- def run(self, conan_api, *args, **kwargs):\n+ def run(self, *args, conan_api=None, **kwargs):\n try:\n info = self._method(*args, conan_api=conan_api, **kwargs)\n parser_args = self._parser.parse_args(*args)\ndiff --git a/conans/test/functional/command_v2/__init__.py b/conans/test/functional/command_v2/__init__.py\nnew file mode 100644\nindex 00000000000..e69de29bb2d\ndiff --git a/conans/test/functional/command_v2/help_test.py b/conans/test/functional/command_v2/help_test.py\nnew file mode 100644\nindex 00000000000..af26fe241ff\n--- /dev/null\n+++ b/conans/test/functional/command_v2/help_test.py\n@@ -0,0 +1,21 @@\n+import unittest\n+\n+from conans.client.tools import environment_append, save, six\n+from conans.test.utils.tools import TestClient\n+\n+\[email protected](six.PY2, \"v2.0: Only testing for Python 3\")\n+class CliHelpTest(unittest.TestCase):\n+\n+ def run(self, *args, **kwargs):\n+ with environment_append({\"CONAN_V2_CLI\": \"1\"}):\n+ super(CliHelpTest, self).run(*args, **kwargs)\n+\n+ def help_command_test(self):\n+ client = TestClient()\n+\n+ client.run(\"help\")\n+ self.assertIn(\"Shows help for a specific command\", client.out)\n+\n+ client.run(\"help search\")\n+ 
self.assertIn(\"Searches for package recipes whose name contain\", client.out)\ndiff --git a/conans/test/utils/tools.py b/conans/test/utils/tools.py\nindex cfaf582301c..e62aabc44aa 100644\n--- a/conans/test/utils/tools.py\n+++ b/conans/test/utils/tools.py\n@@ -27,7 +27,6 @@\n from conans import load\n from conans.client.cache.cache import ClientCache\n from conans.client.cache.remote_registry import Remotes\n-from conans.client.command import Command\n from conans.client.conan_api import Conan\n from conans.client.output import ConanOutput\n from conans.client.rest.file_uploader import IterableToFileAdapter\n@@ -830,7 +829,12 @@ def run(self, command_line, user_io=None, assert_error=False):\n \"\"\"\n conan = self.get_conan_api(user_io)\n self.api = conan\n- command = Command(conan)\n+ if os.getenv(\"CONAN_V2_CLI\"):\n+ from conans.cli.cli import Cli\n+ command = Cli(conan)\n+ else:\n+ from conans.client.command import Command\n+ command = Command(conan)\n args = shlex.split(command_line)\n current_dir = os.getcwd()\n os.chdir(self.current_folder)\n"
}
|
[
{
"diff_hunk": "@@ -34,7 +34,7 @@ def __init__(self, method, group, formatters=None):\n self._parser.add_argument('-o', '--output', default=default_output, choices=formatters_list,\n action=OnceArgument, help=self._output_help_message)\n \n- def run(self, conan_api, *args, **kwargs):\n+ def run(self, *args, conan_api=None, **kwargs):",
"line": null,
"original_line": 37,
"original_start_line": null,
"path": "conans/cli/command.py",
"start_line": null,
"text": "@user1:\nIs there any invocation to this method that will not pass the ``conan_api`` argument?"
}
] |
7dd15fe43434e68cd10e675064871a9843dd138c
|
diff --git a/conans/cli/cli.py b/conans/cli/cli.py
index b9a79b56302..ff10f0d7869 100644
--- a/conans/cli/cli.py
+++ b/conans/cli/cli.py
@@ -81,13 +81,14 @@ def __init__(self, conan_api):
self._commands = {}
conan_commands_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "commands")
for module in pkgutil.iter_modules([conan_commands_path]):
- self._add_command("conans.cli.commands.{}".format(module.name), module.name)
- if get_env("CONAN_USER_COMMANDS", default=False):
- user_commands_path = os.path.join(self._conan_api.cache_folder, "commands")
- sys.path.append(user_commands_path)
- for module in pkgutil.iter_modules([user_commands_path]):
- if module.name.startswith("cmd_"):
- self._add_command(module.name, module.name.replace("cmd_", ""))
+ module_name = module[1]
+ self._add_command("conans.cli.commands.{}".format(module_name), module_name)
+ user_commands_path = os.path.join(self._conan_api.cache_folder, "commands")
+ sys.path.append(user_commands_path)
+ for module in pkgutil.iter_modules([user_commands_path]):
+ module_name = module[1]
+ if module_name.startswith("cmd_"):
+ self._add_command(module_name, module_name.replace("cmd_", ""))
def _add_command(self, import_path, method_name):
try:
@@ -131,7 +132,8 @@ def _print_similar(self, command):
self._out.writeln("")
def help_message(self):
- self.commands["help"].method(self.conan_api, self.commands, self.groups)
+ self.commands["help"].method(conan_api=self.conan_api, parser=self.commands["help"].parser,
+ commands=self.commands, groups=self.groups)
def run(self, *args):
""" Entry point for executing commands, dispatcher to class
@@ -164,7 +166,8 @@ def run(self, *args):
self._print_similar(command_argument)
raise ConanException("Unknown command %s" % str(exc))
- command.run(self.conan_api, args[0][1:], parser=self.commands[command_argument].parser,
+ command.run(args[0][1:], conan_api=self.conan_api,
+ parser=self.commands[command_argument].parser,
commands=self.commands, groups=self.groups)
return SUCCESS
diff --git a/conans/cli/command.py b/conans/cli/command.py
index b1940087f17..1477d9d834d 100644
--- a/conans/cli/command.py
+++ b/conans/cli/command.py
@@ -34,7 +34,7 @@ def __init__(self, method, group, formatters=None):
self._parser.add_argument('-o', '--output', default=default_output, choices=formatters_list,
action=OnceArgument, help=self._output_help_message)
- def run(self, conan_api, *args, **kwargs):
+ def run(self, *args, conan_api, **kwargs):
try:
info = self._method(*args, conan_api=conan_api, **kwargs)
parser_args = self._parser.parse_args(*args)
diff --git a/conans/test/functional/command_v2/__init__.py b/conans/test/functional/command_v2/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/conans/test/functional/command_v2/help_test.py b/conans/test/functional/command_v2/help_test.py
new file mode 100644
index 00000000000..af26fe241ff
--- /dev/null
+++ b/conans/test/functional/command_v2/help_test.py
@@ -0,0 +1,21 @@
+import unittest
+
+from conans.client.tools import environment_append, save, six
+from conans.test.utils.tools import TestClient
+
+
[email protected](six.PY2, "v2.0: Only testing for Python 3")
+class CliHelpTest(unittest.TestCase):
+
+ def run(self, *args, **kwargs):
+ with environment_append({"CONAN_V2_CLI": "1"}):
+ super(CliHelpTest, self).run(*args, **kwargs)
+
+ def help_command_test(self):
+ client = TestClient()
+
+ client.run("help")
+ self.assertIn("Shows help for a specific command", client.out)
+
+ client.run("help search")
+ self.assertIn("Searches for package recipes whose name contain", client.out)
diff --git a/conans/test/utils/tools.py b/conans/test/utils/tools.py
index cfaf582301c..e62aabc44aa 100644
--- a/conans/test/utils/tools.py
+++ b/conans/test/utils/tools.py
@@ -27,7 +27,6 @@
from conans import load
from conans.client.cache.cache import ClientCache
from conans.client.cache.remote_registry import Remotes
-from conans.client.command import Command
from conans.client.conan_api import Conan
from conans.client.output import ConanOutput
from conans.client.rest.file_uploader import IterableToFileAdapter
@@ -830,7 +829,12 @@ def run(self, command_line, user_io=None, assert_error=False):
"""
conan = self.get_conan_api(user_io)
self.api = conan
- command = Command(conan)
+ if os.getenv("CONAN_V2_CLI"):
+ from conans.cli.cli import Cli
+ command = Cli(conan)
+ else:
+ from conans.client.command import Command
+ command = Command(conan)
args = shlex.split(command_line)
current_dir = os.getcwd()
os.chdir(self.current_folder)
|
{
"difficulty": "low",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
|
conan-io__conan-7380@fd9532d
|
conan-io/conan
|
Python
| 7,380
|
fixing missing download of conan_sources.tgz for export_sources() method
|
Changelog: BugFix: Fix missing download of ``conan_sources.tgz`` created using ``export_sources()`` method.
Docs: Omit
Fix https://github.com/conan-io/conan/issues/7377
|
2020-07-17T19:09:32Z
|
[bug] conan install only considers exports_sources
### Environment Details (include every applicable attribute)
* Conan version: 1.26.0+
### Steps to reproduce (Include if Applicable)
Put a `dummy.txt` next to the following `conanfile.py` in some directory.
# conanfile.py
from conans import ConanFile
from os.path import isfile


class ExportSourcesTestConan(ConanFile):
    name = "export-sources-test"
    version = "1.0"
    settings = None
    #exports_sources = "dummy.txt",

    def export_sources(self):
        self.copy("dummy.txt")

    def build(self):
        if not isfile("dummy.txt"):
            raise RuntimeError("Oh no!")
Create and upload the package
$ conan create . user/testing
$ conan upload -r <remote> export-sources-test/1.0@user/testing
Remove the package from the local cache and install it from your remote:
$ conan remove export-sources-test*
$ conan install export-sources-test/1.0@user/testing --build missing
If the `conanfile.py` uses `export_sources()` the build step will raise the exception. If `exports_sources` is used, the build step succeeds.
### Cause
The function `complete_recipe_sources()` in `client/source.py` only tests whether `conanfile.exports_sources` is defined. If only `export_sources()` is defined the sources won't be downloaded and the source directory stays empty.
### Workaround
Defining both `exports_sources` and `export_sources()` triggers the download of the sources. But it would be nice to use `export_sources()` completely independently of `exports_sources`.
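To make the cause above concrete, here is a minimal sketch (not the actual Conan code; the helper name `recipe_declares_sources` is made up) of the check that `complete_recipe_sources()` would need, looking at the `export_sources()` method as well as the `exports_sources` attribute:
```python
# Sketch only: mirrors the condition described in the "Cause" section.
# `conanfile` is the loaded recipe object passed to complete_recipe_sources().
def recipe_declares_sources(conanfile):
    # True if the recipe uses either the attribute form or the method form
    return conanfile.exports_sources is not None or hasattr(conanfile, "export_sources")
```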
|
Thanks very much for your complete report and detailed investigation. Indeed a bug that needs to be fixed.
You were just one ``if`` from a Pull Request, next time don't hesitate to submit it, we can help with the testing if necessary :)
I have contributed the fix in https://github.com/conan-io/conan/pull/7380, planned for next release (Conan 1.28)
You're welcome. I'm amazed at how quick you are with feedback to new issues. Great work!
No problem, thanks to you for reporting! Let's keep the issue open it will be automatically closed when the PR is merged to develop.
|
[
{
"body": "### Environment Details (include every applicable attribute)\r\n * Conan version: 1.26.0+\r\n\r\n### Steps to reproduce (Include if Applicable)\r\n\r\nPut a `dummy.txt` next to the following `conanfile.py` in some directory.\r\n\r\n # conanfile.py\r\n from conans import ConanFile\r\n from os.path import isfile\r\n\r\n\r\n class ExportSourcesTestConan(ConanFile):\r\n name = \"export-sources-test\"\r\n version = \"1.0\"\r\n settings = None\r\n #exports_sources = \"dummy.txt\",\r\n\r\n def export_sources(self):\r\n self.copy(\"dummy.txt\")\r\n\r\n def build(self):\r\n if not isfile(\"dummy.txt\"):\r\n raise RuntimeError(\"Oh no!\")\r\n\r\nCreate and upload the package\r\n\r\n $ conan create . user/testing\r\n $ conan upload -r <remote> export-sources-test/1.0@user/testing\r\n\r\nRemove the package from the local cache and install it from your remote:\r\n\r\n $ conan remove export-sources-test*\r\n $ conan install export-sources-test/1.0@user/testing --build missing\r\n\r\nIf the `conanfile.py` uses `export_sources()` the build step will raise the exception. If `exports_sources` is used, the build step succeeds.\r\n\r\n### Cause\r\n\r\nThe function `complete_recipe_sources()` in `client/source.py` only tests whether `conanfile.exports_sources` is defined. If only `export_sources()` is defined the sources won't be downloaded and the source directory stays empty.\r\n\r\n### Workaround\r\n\r\nDefining both `exports_sources` and `export_sources()` triggers the download of the sources. But it would be nice to use `export_sources()` completely independent from `exports_sources`.",
"number": 7377,
"title": "[bug] conan install only considers exports_sources"
}
] |
f563439d18dacf4e7310c08c07386f95070eccfc
|
{
"head_commit": "fd9532dcfe67996d3226e1ee1987bc8829e1f6da",
"head_commit_message": "fire CI",
"patch_to_review": "diff --git a/conans/client/source.py b/conans/client/source.py\nindex ab4e3b5fd53..603d10e8c3d 100644\n--- a/conans/client/source.py\n+++ b/conans/client/source.py\n@@ -25,7 +25,7 @@ def complete_recipe_sources(remote_manager, cache, conanfile, ref, remotes):\n if os.path.exists(sources_folder):\n return None\n \n- if conanfile.exports_sources is None:\n+ if conanfile.exports_sources is None and not hasattr(conanfile, \"export_sources\"):\n mkdir(sources_folder)\n return None\n \ndiff --git a/conans/test/functional/command/export/exports_method_test.py b/conans/test/functional/command/export/exports_method_test.py\nindex fd216e629a0..6357b6de8cd 100644\n--- a/conans/test/functional/command/export/exports_method_test.py\n+++ b/conans/test/functional/command/export/exports_method_test.py\n@@ -209,3 +209,26 @@ def exports_sources(self):\n client.run(\"export . pkg/0.1@\", assert_error=True)\n self.assertIn(\"ERROR: conanfile 'exports_sources' shouldn't be a method, \"\n \"use 'export_sources()' instead\", client.out)\n+\n+ def test_exports_sources_upload_error(self):\n+ # https://github.com/conan-io/conan/issues/7377\n+ client = TestClient(default_server_user=True)\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile, load\n+\n+ class MethodConan(ConanFile):\n+ def export_sources(self):\n+ self.copy(\"*\")\n+ def build(self):\n+ self.output.info(\"CONTENT: %s\" % load(\"myfile.txt\"))\n+ \"\"\")\n+ client.save({\"conanfile.py\": conanfile,\n+ \"myfile.txt\": \"mycontent\"})\n+ client.run(\"export . pkg/0.1@\")\n+ self.assertIn(\"pkg/0.1 export_sources() method: Copied 1 '.txt' file: myfile.txt\",\n+ client.out)\n+ client.run(\"upload pkg/0.1@\")\n+ client.run(\"remove * -f\")\n+ client.run(\"install pkg/0.1@ --build\")\n+ self.assertIn(\"Downloading conan_sources.tgz\", client.out)\n+ self.assertIn(\"pkg/0.1: CONTENT: mycontent\", client.out)\n"
}
|
[
{
"diff_hunk": "@@ -209,3 +209,26 @@ def exports_sources(self):\n client.run(\"export . pkg/0.1@\", assert_error=True)\n self.assertIn(\"ERROR: conanfile 'exports_sources' shouldn't be a method, \"\n \"use 'export_sources()' instead\", client.out)\n+\n+ def test_exports_sources_upload_error(self):\n+ # https://github.com/conan-io/conan/issues/7377\n+ client = TestClient(default_server_user=True)\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile, load\n+\n+ class MethodConan(ConanFile):\n+ def export_sources(self):\n+ self.copy(\"*\")\n+ def build(self):\n+ self.output.info(\"CONTENT: %s\" % load(\"myfile.txt\"))",
"line": null,
"original_line": 223,
"original_start_line": null,
"path": "conans/test/functional/command/export/exports_method_test.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n self.output.info(\"CONTENT: %s\" % tools.load(\"myfile.txt\"))\r\n```"
},
{
"diff_hunk": "@@ -209,3 +209,26 @@ def exports_sources(self):\n client.run(\"export . pkg/0.1@\", assert_error=True)\n self.assertIn(\"ERROR: conanfile 'exports_sources' shouldn't be a method, \"\n \"use 'export_sources()' instead\", client.out)\n+\n+ def test_exports_sources_upload_error(self):\n+ # https://github.com/conan-io/conan/issues/7377\n+ client = TestClient(default_server_user=True)\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile, load",
"line": null,
"original_line": 217,
"original_start_line": null,
"path": "conans/test/functional/command/export/exports_method_test.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n from conans import ConanFile, tools\r\n```"
}
] |
e54bc4fa2c1e67ce9ceee61cde6dc0908b17ea46
|
diff --git a/conans/client/source.py b/conans/client/source.py
index ab4e3b5fd53..603d10e8c3d 100644
--- a/conans/client/source.py
+++ b/conans/client/source.py
@@ -25,7 +25,7 @@ def complete_recipe_sources(remote_manager, cache, conanfile, ref, remotes):
if os.path.exists(sources_folder):
return None
- if conanfile.exports_sources is None:
+ if conanfile.exports_sources is None and not hasattr(conanfile, "export_sources"):
mkdir(sources_folder)
return None
diff --git a/conans/test/functional/command/export/exports_method_test.py b/conans/test/functional/command/export/exports_method_test.py
index fd216e629a0..674a293e53d 100644
--- a/conans/test/functional/command/export/exports_method_test.py
+++ b/conans/test/functional/command/export/exports_method_test.py
@@ -209,3 +209,26 @@ def exports_sources(self):
client.run("export . pkg/0.1@", assert_error=True)
self.assertIn("ERROR: conanfile 'exports_sources' shouldn't be a method, "
"use 'export_sources()' instead", client.out)
+
+ def test_exports_sources_upload_error(self):
+ # https://github.com/conan-io/conan/issues/7377
+ client = TestClient(default_server_user=True)
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile, tools
+
+ class MethodConan(ConanFile):
+ def export_sources(self):
+ self.copy("*")
+ def build(self):
+ self.output.info("CONTENT: %s" % tools.load("myfile.txt"))
+ """)
+ client.save({"conanfile.py": conanfile,
+ "myfile.txt": "mycontent"})
+ client.run("export . pkg/0.1@")
+ self.assertIn("pkg/0.1 export_sources() method: Copied 1 '.txt' file: myfile.txt",
+ client.out)
+ client.run("upload pkg/0.1@")
+ client.run("remove * -f")
+ client.run("install pkg/0.1@ --build")
+ self.assertIn("Downloading conan_sources.tgz", client.out)
+ self.assertIn("pkg/0.1: CONTENT: mycontent", client.out)
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-7338@c96b388
|
conan-io/conan
|
Python
| 7,338
|
Do not fail with 'conan remove -r remote -p' if there are no packages in the remote
|
Changelog: Bugfix: Do not fail for `conan remove -r remote -p` when there are no packages in the remote.
Docs: omit
Check whether there are any packages before trying to remove the package folder on the server; if there are none, skip the request. This prevents a 404 error.
close https://github.com/conan-io/conan/issues/7332
#REVISIONS: 1
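As a rough sketch of the approach described above (the function and the `rest_client` helpers are illustrative stand-ins, not the exact upstream code), the removal is only attempted when the recipe actually has binary packages:
```python
# Sketch: skip the DELETE when there is nothing to remove, so the server is
# never asked for a non-existent "package" folder.
def remove_all_packages(rest_client, ref):
    if rest_client.search_packages(ref, query=None):  # assumed search helper
        rest_client.delete_all_packages(ref)          # assumed DELETE helper
    # otherwise: no binary packages, succeed silently
```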
|
2020-07-10T15:28:00Z
|
[bug] conan remove fails when using -p without package id together with -r
When I do `conan remove -f -p -r <remote> <existing_reference>` with a reference for which there is only a recipe but no binary package on the remote Conan fails with the following error message:
```
ERROR: {
  "errors" : [ {
    "status" : 404,
    "message" : "Couldn't find 'user/pkg/0.1.0/develop/d8cc7d7e1317dcbc71d65614521acffd/package'"
  } ]
}. [Remote: my-remote]
```
I would expect it to succeed even if there is nothing to do.
|
It seems that the `-p` option without parameters is broken even when there is a binary package on the remote. I still get the same error.
It totally makes sense. Just to confirm, can you report the server you are using? Artifactory? Which version? Is there any special configuration like virtual repositories?
Thanks!
We're using Artifactory 6.17.0, no virtual repositories, revisions enabled.
I guess when no package ID is specified Conan would have to use the Artifactory API to list the available packages and then iterate on that list to delete them individually. Probably that functionality is currently missing.
Here are some traces
```
{"_action": "COMMAND", "name": "remove", "parameters": {"builds": null, "force": true, "outdated": false, "packages": [], "pattern": "EDP_Assign/1.8.0@sick/develop", "query": null, "remote_name": "edp-conan-local", "src": false}, "time": 1594288541.797875}
{"_action": "REST_API_CALL", "duration": 0.055852413177490234, "headers": {"User-Agent": "Conan/1.27.0 (Python 3.7.2) python-requests/2.21.0", "X-Client-Anonymous-Id": "**********", "X-Client-Id": ""}, "method": "GET", "time": 1594288541.8786561, "url": "https://myartifactory/artifactory/api/conan/edp-conan-local/v1/ping"}
{"_action": "REST_API_CALL", "duration": 0.00894618034362793, "headers": {"User-Agent": "Conan/1.27.0 (Python 3.7.2) python-requests/2.21.0", "X-Client-Anonymous-Id": "**********", "X-Client-Id": ""}, "method": "GET", "time": 1594288541.894613, "url": "https://myartifactory/artifactory/api/conan/edp-conan-local/v2/users/check_credentials"}
{"_action": "REST_API_CALL", "duration": 0.011967897415161133, "headers": {"User-Agent": "Conan/1.27.0 (Python 3.7.2) python-requests/2.21.0", "X-Client-Anonymous-Id": "**********", "X-Client-Id": ""}, "method": "GET", "time": 1594288541.9145598, "url": "https://myartifactory/artifactory/api/conan/edp-conan-local/v2/conans/EDP_Assign/1.8.0/sick/develop/revisions"}
{"_action": "REST_API_CALL", "duration": 0.009973287582397461, "headers": {"User-Agent": "Conan/1.27.0 (Python 3.7.2) python-requests/2.21.0", "X-Client-Anonymous-Id": "**********", "X-Client-Id": ""}, "method": "DELETE", "time": 1594288541.932512, "url": "https://myartifactory/artifactory/api/conan/edp-conan-local/v2/conans/EDP_Assign/1.8.0/sick/develop/revisions/235391/packages"}
{"_action": "EXCEPTION", "class": "NotFoundException", "message": "{\n \"errors\" : [ {\n \"status\" : 404,\n \"message\" : \"Couldn't find 'sick/EDP_Assign/1.8.0/develop/235391/package'\"\n } ]\n}. [Remote: edp-conan-local]", "time": 1594288541.9424858}
```
There's this function in `conans/server/store/server_store.py` which just unconditionally assumes that there must be a `package` folder:
```python
def remove_all_packages(self, ref):
    assert ref.revision is not None, "BUG: server store needs RREV remove_all_packages"
    assert isinstance(ref, ConanFileReference)
    packages_folder = self.packages(ref)
    self._storage_adapter.delete_folder(packages_folder)
```
Thanks for the investigation! You are one step away from creating the PR 😉
Hm not sure, that looks like some kind of server side implementation to me because the only storage adapter that I can find is a disk storage adapter. I guess that's not part of the callstack on the client side. Trying to look into this some more but probably I could use some help ;)
Unless that's actually the implementation on Artifactory side. Then a fix would require us to update Artifactory which is not what I want to hear...
The Conan API with Artifactory is quite straightforward, as you can see from your logs, Artifactory returns an error if the resource you are requesting to DELETE doesn't exist
```
{"_action": "REST_API_CALL", "duration": 0.009973287582397461, "headers": {"User-Agent": "Conan/1.27.0 (Python 3.7.2) python-requests/2.21.0", "X-Client-Anonymous-Id": "**********", "X-Client-Id": ""}, "method": "DELETE", "time": 1594288541.932512, "url": "https://myartifactory/artifactory/api/conan/edp-conan-local/v2/conans/EDP_Assign/1.8.0/sick/develop/revisions/235391/packages"}
```
This is a design decision in Artifactory that will affect every other package manager and I'm sure it is something we have _to live with_, so we need to implement the **logic on the client side** (it will take us just one release, it should be really easy):
a) my first approach would be to check if the message/error returned by Artifactory is clear about what has happened; Conan can capture that concrete error and ignore it safely.
b) if that is not possible, Conan can list the packages in the server and, if it is empty, do not call the `remove_all_packages` function.
c) (Maybe looking at the sources, a `try/catch-pass` around `remove_all_packages` is enough... )
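A minimal sketch of options (a)/(c) above, assuming a `requests`-style response object and the existing package search endpoint (names and structure are illustrative, not the final fix):
```python
# Sketch: tolerate the 404 only when the recipe really has no binary packages;
# anything else keeps failing as before.
def delete_all_packages_tolerant(requester, router, ref, search_packages):
    response = requester.delete(router.remove_all_packages(ref))
    if response.status_code == 404 and not search_packages(ref, query=None):
        return  # nothing to remove, treat as success
    if response.status_code != 200:
        raise RuntimeError(response.text)
```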
|
[
{
"body": "When I do `conan remove -f -p -r <remote> <existing_reference>` with a reference for which there is only a recipe but no binary package on the remote Conan fails with the following error message:\r\n\r\n```\r\nERROR: {\r\n \"errors\" : [ {\r\n \"status\" : 404,\r\n \"message\" : \"Couldn't find 'user/pkg/0.1.0/develop/d8cc7d7e1317dcbc71d65614521acffd/package'\"\r\n } ]\r\n}. [Remote: my-remote]\r\n```\r\n\r\nI would expect it to succeed even if there is nothing to do.",
"number": 7332,
"title": "[bug] conan remove fails when using -p without package id together with -r"
}
] |
1ed9b250706c24b222aafcb598fbffc7355f691c
|
{
"head_commit": "c96b388c82912f028e9f2a516854dd05311d52ff",
"head_commit_message": "if not conaninfo, search doesn't work",
"patch_to_review": "diff --git a/conans/client/rest/rest_client_v1.py b/conans/client/rest/rest_client_v1.py\nindex ad977f59994..5b4d5c9c6da 100644\n--- a/conans/client/rest/rest_client_v1.py\n+++ b/conans/client/rest/rest_client_v1.py\n@@ -1,6 +1,7 @@\n import os\n import time\n import traceback\n+from collections import namedtuple\n \n from six.moves.urllib.parse import parse_qs, urljoin, urlparse, urlsplit\n \n@@ -306,10 +307,13 @@ def _remove_conanfile_files(self, ref, files):\n @handle_return_deserializer()\n def remove_packages(self, ref, package_ids=None):\n \"\"\" Remove any packages specified by package_ids\"\"\"\n- self.check_credentials()\n- payload = {\"package_ids\": package_ids}\n- url = self.router.remove_packages(ref)\n- return self._post_json(url, payload)\n+ pcks = self.search_packages(ref, query=None)\n+ if pcks:\n+ self.check_credentials()\n+ payload = {\"package_ids\": package_ids}\n+ url = self.router.remove_packages(ref)\n+ return self._post_json(url, payload)\n+ return namedtuple(\"_\", ['status_code', 'content'])(200, b'')\n \n @handle_return_deserializer()\n def remove_conanfile(self, ref):\ndiff --git a/conans/client/rest/rest_client_v2.py b/conans/client/rest/rest_client_v2.py\nindex af997117057..f0082fb52a0 100644\n--- a/conans/client/rest/rest_client_v2.py\n+++ b/conans/client/rest/rest_client_v2.py\n@@ -246,13 +246,16 @@ def remove_packages(self, ref, package_ids=None):\n for ref in refs:\n assert ref.revision is not None, \"remove_packages needs RREV\"\n if not package_ids:\n- url = self.router.remove_all_packages(ref)\n- response = self.requester.delete(url, auth=self.auth, headers=self.custom_headers,\n- verify=self.verify_ssl)\n- if response.status_code != 200: # Error message is text\n- # To be able to access ret.text (ret.content are bytes)\n- response.charset = \"utf-8\"\n- raise get_exception_from_error(response.status_code)(response.text)\n+ # Check there are packages to remove\n+ package_search_url = self.router.search_packages(ref)\n+ if self.get_json(package_search_url):\n+ url = self.router.remove_all_packages(ref)\n+ response = self.requester.delete(url, auth=self.auth, verify=self.verify_ssl,\n+ headers=self.custom_headers)\n+ if response.status_code != 200: # Error message is text\n+ # To be able to access ret.text (ret.content are bytes)\n+ response.charset = \"utf-8\"\n+ raise get_exception_from_error(response.status_code)(response.text)\n else:\n for pid in package_ids:\n pref = PackageReference(ref, pid)\ndiff --git a/conans/test/functional/command/remote_test.py b/conans/test/functional/command/remote_test.py\nindex ece905ef876..9dcf06d6645 100644\n--- a/conans/test/functional/command/remote_test.py\n+++ b/conans/test/functional/command/remote_test.py\n@@ -3,9 +3,10 @@\n import unittest\n from collections import OrderedDict\n \n+from conans.model.ref import ConanFileReference\n+from conans.test.utils.genconanfile import GenConanfile\n from conans.test.utils.tools import NO_SETTINGS_PACKAGE_ID, TestClient, TestServer\n from conans.util.files import load\n-from conans.model.ref import ConanFileReference\n \n \n class RemoteTest(unittest.TestCase):\n@@ -21,12 +22,7 @@ def setUp(self):\n self.client = TestClient(servers=self.servers, users=self.users)\n \n def test_removed_references(self):\n- conanfile = \"\"\"\n-from conans import ConanFile\n-class HelloConan(ConanFile):\n- pass\n-\"\"\"\n- self.client.save({\"conanfile.py\": conanfile})\n+ self.client.save({\"conanfile.py\": GenConanfile()})\n self.client.run(\"create . 
lib/1.0@lasote/channel\")\n self.client.run('upload \"*\" -c -r remote1')\n self.client.run('upload \"*\" -c -r remote2')\n@@ -86,7 +82,7 @@ class HelloConan(ConanFile):\n \n def list_raw_test(self):\n self.client.run(\"remote list --raw\")\n- output = re.sub(r\"http:\\/\\/fake.+\\.com\", \"http://fake.com\", str(self.client.out))\n+ output = re.sub(r\"http://fake.+.com\", \"http://fake.com\", str(self.client.out))\n self.assertIn(\"remote0 http://fake.com True\", output)\n self.assertIn(\"remote1 http://fake.com True\", output)\n self.assertIn(\"remote2 http://fake.com True\", output)\n@@ -342,12 +338,10 @@ def invalid_remote_disable_test(self):\n client = TestClient()\n \n client.run(\"remote disable invalid_remote\", assert_error=True)\n- self.assertIn(\"ERROR: Remote 'invalid_remote' not found in remotes\",\n- client.out)\n+ self.assertIn(\"ERROR: Remote 'invalid_remote' not found in remotes\", client.out)\n \n client.run(\"remote enable invalid_remote\", assert_error=True)\n- self.assertIn(\"ERROR: Remote 'invalid_remote' not found in remotes\",\n- client.out)\n+ self.assertIn(\"ERROR: Remote 'invalid_remote' not found in remotes\", client.out)\n \n client.run(\"remote disable invalid_wildcard_*\")\n \n@@ -390,12 +384,10 @@ def duplicated_error_tests(self):\n self.client.run(\"remote list\")\n url = str(self.client.out).split()[1]\n self.client.run(\"remote add newname %s\" % url, assert_error=True)\n- self.assertIn(\"Remote 'remote0' already exists with same URL\",\n- self.client.out)\n+ self.assertIn(\"Remote 'remote0' already exists with same URL\", self.client.out)\n \n self.client.run(\"remote update remote1 %s\" % url, assert_error=True)\n- self.assertIn(\"Remote 'remote0' already exists with same URL\",\n- self.client.out)\n+ self.assertIn(\"Remote 'remote0' already exists with same URL\", self.client.out)\n \n def basic_refs_test(self):\n self.client.run(\"remote add_ref Hello/0.1@user/testing remote0\")\n@@ -468,9 +460,7 @@ def test_metadata_editable_packages(self):\n \"\"\"\n Check that 'conan remote' commands work with editable packages\n \"\"\"\n- self.client.save({\"conanfile.py\": \"\"\"from conans import ConanFile\n-class Conan(ConanFile):\n- pass\"\"\"})\n+ self.client.save({\"conanfile.py\": GenConanfile()})\n self.client.run(\"create . pkg/1.1@lasote/stable\")\n self.client.run(\"upload pkg/1.1@lasote/stable --all -c --remote remote1\")\n self.client.run(\"remove -f pkg/1.1@lasote/stable\")\n@@ -503,3 +493,9 @@ class Conan(ConanFile):\n self.client.run(\"remote list\")\n self.assertNotIn(\"remote1\", self.client.out)\n self.assertNotIn(\"remote0\", self.client.out)\n+\n+ def test_remove_package_empty(self):\n+ self.client.save({\"conanfile.py\": GenConanfile(\"name\", \"version\")})\n+ self.client.run(\"export . 
name/version@lasote/stable\")\n+ self.client.run(\"upload name/version@lasote/stable --remote remote1\")\n+ self.client.run(\"remove -f -p -r remote1 name/version@lasote/stable\")\ndiff --git a/conans/test/functional/remote/rest_api_test.py b/conans/test/functional/remote/rest_api_test.py\nindex ae024354c3e..15d7afe5397 100644\n--- a/conans/test/functional/remote/rest_api_test.py\n+++ b/conans/test/functional/remote/rest_api_test.py\n@@ -7,13 +7,13 @@\n from nose.plugins.attrib import attr\n \n from conans import DEFAULT_REVISION_V1\n-from conans.client.userio import UserIO\n-from conans.client.remote_manager import Remote\n from conans.client.conf import ConanClientConfigParser\n+from conans.client.remote_manager import Remote\n from conans.client.rest.auth_manager import ConanApiAuthManager\n from conans.client.rest.conan_requester import ConanRequester\n from conans.client.rest.rest_client import RestApiClientFactory\n from conans.client.rest.rest_client_v1 import complete_url\n+from conans.client.userio import UserIO\n from conans.model.info import ConanInfo\n from conans.model.manifest import FileTreeManifest\n from conans.model.ref import ConanFileReference, PackageReference\n@@ -231,11 +231,14 @@ def remove_packages_test(self):\n for sha in [\"1\", \"2\", \"3\", \"4\", \"5\"]:\n # Upload an package\n pref = PackageReference(ref, sha, DEFAULT_REVISION_V1)\n- self._upload_package(pref)\n+ self._upload_package(pref, {CONANINFO: \"\"})\n folder = self.server.server_store.package(pref)\n self.assertTrue(os.path.exists(folder))\n folders[sha] = folder\n \n+ data = self.api.search_packages(ref, None)\n+ self.assertEqual(len(data), 5)\n+\n self.api.remove_packages(ref, [\"1\"])\n self.assertTrue(os.path.exists(self.server.server_store.base_folder(ref)))\n self.assertFalse(os.path.exists(folders[\"1\"]))\n"
}
|
[
{
"diff_hunk": "@@ -246,13 +246,16 @@ def remove_packages(self, ref, package_ids=None):\n for ref in refs:\n assert ref.revision is not None, \"remove_packages needs RREV\"\n if not package_ids:\n- url = self.router.remove_all_packages(ref)\n- response = self.requester.delete(url, auth=self.auth, headers=self.custom_headers,\n- verify=self.verify_ssl)\n- if response.status_code != 200: # Error message is text\n- # To be able to access ret.text (ret.content are bytes)\n- response.charset = \"utf-8\"\n- raise get_exception_from_error(response.status_code)(response.text)\n+ # Check there are packages to remove\n+ package_search_url = self.router.search_packages(ref)",
"line": null,
"original_line": 250,
"original_start_line": null,
"path": "conans/client/rest/rest_client_v2.py",
"start_line": null,
"text": "@user1:\nThe ``search_packages`` is quite slow, tbh, I am not very excited about this approach. I would say:\r\n\r\n- This seems something to be fixed in the server side. I think it is a bug and can be fixed, it only belongs to the Conan API\r\n- In the meantime, I would try to workaround it in the client side with a try - except 404 and only checking the ``search_packages()`` in that case. But annotating clearly it is a temporary workaround."
},
{
"diff_hunk": "@@ -306,10 +307,13 @@ def _remove_conanfile_files(self, ref, files):\n @handle_return_deserializer()\n def remove_packages(self, ref, package_ids=None):\n \"\"\" Remove any packages specified by package_ids\"\"\"\n- self.check_credentials()\n- payload = {\"package_ids\": package_ids}\n- url = self.router.remove_packages(ref)\n- return self._post_json(url, payload)\n+ pcks = self.search_packages(ref, query=None)",
"line": null,
"original_line": 310,
"original_start_line": null,
"path": "conans/client/rest/rest_client_v1.py",
"start_line": null,
"text": "@user1:\nI would only do the search if ``package_ids is None``, in other cases it should still raise.\r\n\r\nActually, I'd prefer capturing the raise, checking if package_ids is None, and re-raising if not, but I guess it is not easy to differentiate the 404 that the ``packages`` folder is missing or the 404 that the recipe is missing."
}
] |
270cbde8eaae18be2d048c2c309c0e860385787f
|
diff --git a/conans/client/rest/rest_client_v1.py b/conans/client/rest/rest_client_v1.py
index ad977f59994..96fdbd43c84 100644
--- a/conans/client/rest/rest_client_v1.py
+++ b/conans/client/rest/rest_client_v1.py
@@ -1,6 +1,7 @@
import os
import time
import traceback
+from collections import namedtuple
from six.moves.urllib.parse import parse_qs, urljoin, urlparse, urlsplit
@@ -304,12 +305,21 @@ def _remove_conanfile_files(self, ref, files):
return self._post_json(url, payload)
@handle_return_deserializer()
- def remove_packages(self, ref, package_ids=None):
+ def remove_packages(self, ref, package_ids):
""" Remove any packages specified by package_ids"""
self.check_credentials()
payload = {"package_ids": package_ids}
url = self.router.remove_packages(ref)
- return self._post_json(url, payload)
+ ret = self._post_json(url, payload)
+ if not package_ids and ret.status_code == 404:
+ # Double check if it is a 404 because there are no packages
+ try:
+ if not self.search_packages(ref, query=None):
+ return namedtuple("_", ['status_code', 'content'])(200, b'')
+ except Exception as e:
+ logger.warning("Unexpected error searching {} packages"
+ " in remote {}: {}".format(ref, self.remote_url, e))
+ return ret
@handle_return_deserializer()
def remove_conanfile(self, ref):
diff --git a/conans/client/rest/rest_client_v2.py b/conans/client/rest/rest_client_v2.py
index af997117057..dd53f7b209f 100644
--- a/conans/client/rest/rest_client_v2.py
+++ b/conans/client/rest/rest_client_v2.py
@@ -232,7 +232,7 @@ def _remove_conanfile_files(self, ref, files):
# V2 === revisions, do not remove files, it will create a new revision if the files changed
return
- def remove_packages(self, ref, package_ids=None):
+ def remove_packages(self, ref, package_ids):
""" Remove any packages specified by package_ids"""
self.check_credentials()
@@ -247,8 +247,17 @@ def remove_packages(self, ref, package_ids=None):
assert ref.revision is not None, "remove_packages needs RREV"
if not package_ids:
url = self.router.remove_all_packages(ref)
- response = self.requester.delete(url, auth=self.auth, headers=self.custom_headers,
- verify=self.verify_ssl)
+ response = self.requester.delete(url, auth=self.auth, verify=self.verify_ssl,
+ headers=self.custom_headers)
+ if response.status_code == 404:
+ # Double check if it is a 404 because there are no packages
+ try:
+ package_search_url = self.router.search_packages(ref)
+ if not self.get_json(package_search_url):
+ return
+ except Exception as e:
+ logger.warning("Unexpected error searching {} packages"
+ " in remote {}: {}".format(ref, self.remote_url, e))
if response.status_code != 200: # Error message is text
# To be able to access ret.text (ret.content are bytes)
response.charset = "utf-8"
diff --git a/conans/test/functional/command/remote_test.py b/conans/test/functional/command/remote_test.py
index ece905ef876..9dcf06d6645 100644
--- a/conans/test/functional/command/remote_test.py
+++ b/conans/test/functional/command/remote_test.py
@@ -3,9 +3,10 @@
import unittest
from collections import OrderedDict
+from conans.model.ref import ConanFileReference
+from conans.test.utils.genconanfile import GenConanfile
from conans.test.utils.tools import NO_SETTINGS_PACKAGE_ID, TestClient, TestServer
from conans.util.files import load
-from conans.model.ref import ConanFileReference
class RemoteTest(unittest.TestCase):
@@ -21,12 +22,7 @@ def setUp(self):
self.client = TestClient(servers=self.servers, users=self.users)
def test_removed_references(self):
- conanfile = """
-from conans import ConanFile
-class HelloConan(ConanFile):
- pass
-"""
- self.client.save({"conanfile.py": conanfile})
+ self.client.save({"conanfile.py": GenConanfile()})
self.client.run("create . lib/1.0@lasote/channel")
self.client.run('upload "*" -c -r remote1')
self.client.run('upload "*" -c -r remote2')
@@ -86,7 +82,7 @@ class HelloConan(ConanFile):
def list_raw_test(self):
self.client.run("remote list --raw")
- output = re.sub(r"http:\/\/fake.+\.com", "http://fake.com", str(self.client.out))
+ output = re.sub(r"http://fake.+.com", "http://fake.com", str(self.client.out))
self.assertIn("remote0 http://fake.com True", output)
self.assertIn("remote1 http://fake.com True", output)
self.assertIn("remote2 http://fake.com True", output)
@@ -342,12 +338,10 @@ def invalid_remote_disable_test(self):
client = TestClient()
client.run("remote disable invalid_remote", assert_error=True)
- self.assertIn("ERROR: Remote 'invalid_remote' not found in remotes",
- client.out)
+ self.assertIn("ERROR: Remote 'invalid_remote' not found in remotes", client.out)
client.run("remote enable invalid_remote", assert_error=True)
- self.assertIn("ERROR: Remote 'invalid_remote' not found in remotes",
- client.out)
+ self.assertIn("ERROR: Remote 'invalid_remote' not found in remotes", client.out)
client.run("remote disable invalid_wildcard_*")
@@ -390,12 +384,10 @@ def duplicated_error_tests(self):
self.client.run("remote list")
url = str(self.client.out).split()[1]
self.client.run("remote add newname %s" % url, assert_error=True)
- self.assertIn("Remote 'remote0' already exists with same URL",
- self.client.out)
+ self.assertIn("Remote 'remote0' already exists with same URL", self.client.out)
self.client.run("remote update remote1 %s" % url, assert_error=True)
- self.assertIn("Remote 'remote0' already exists with same URL",
- self.client.out)
+ self.assertIn("Remote 'remote0' already exists with same URL", self.client.out)
def basic_refs_test(self):
self.client.run("remote add_ref Hello/0.1@user/testing remote0")
@@ -468,9 +460,7 @@ def test_metadata_editable_packages(self):
"""
Check that 'conan remote' commands work with editable packages
"""
- self.client.save({"conanfile.py": """from conans import ConanFile
-class Conan(ConanFile):
- pass"""})
+ self.client.save({"conanfile.py": GenConanfile()})
self.client.run("create . pkg/1.1@lasote/stable")
self.client.run("upload pkg/1.1@lasote/stable --all -c --remote remote1")
self.client.run("remove -f pkg/1.1@lasote/stable")
@@ -503,3 +493,9 @@ class Conan(ConanFile):
self.client.run("remote list")
self.assertNotIn("remote1", self.client.out)
self.assertNotIn("remote0", self.client.out)
+
+ def test_remove_package_empty(self):
+ self.client.save({"conanfile.py": GenConanfile("name", "version")})
+ self.client.run("export . name/version@lasote/stable")
+ self.client.run("upload name/version@lasote/stable --remote remote1")
+ self.client.run("remove -f -p -r remote1 name/version@lasote/stable")
diff --git a/conans/test/functional/remote/rest_api_test.py b/conans/test/functional/remote/rest_api_test.py
index ae024354c3e..15d7afe5397 100644
--- a/conans/test/functional/remote/rest_api_test.py
+++ b/conans/test/functional/remote/rest_api_test.py
@@ -7,13 +7,13 @@
from nose.plugins.attrib import attr
from conans import DEFAULT_REVISION_V1
-from conans.client.userio import UserIO
-from conans.client.remote_manager import Remote
from conans.client.conf import ConanClientConfigParser
+from conans.client.remote_manager import Remote
from conans.client.rest.auth_manager import ConanApiAuthManager
from conans.client.rest.conan_requester import ConanRequester
from conans.client.rest.rest_client import RestApiClientFactory
from conans.client.rest.rest_client_v1 import complete_url
+from conans.client.userio import UserIO
from conans.model.info import ConanInfo
from conans.model.manifest import FileTreeManifest
from conans.model.ref import ConanFileReference, PackageReference
@@ -231,11 +231,14 @@ def remove_packages_test(self):
for sha in ["1", "2", "3", "4", "5"]:
# Upload an package
pref = PackageReference(ref, sha, DEFAULT_REVISION_V1)
- self._upload_package(pref)
+ self._upload_package(pref, {CONANINFO: ""})
folder = self.server.server_store.package(pref)
self.assertTrue(os.path.exists(folder))
folders[sha] = folder
+ data = self.api.search_packages(ref, None)
+ self.assertEqual(len(data), 5)
+
self.api.remove_packages(ref, ["1"])
self.assertTrue(os.path.exists(self.server.server_store.base_folder(ref)))
self.assertFalse(os.path.exists(folders["1"]))
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-7337@521baaa
|
conan-io/conan
|
Python
| 7,337
|
[feat] Recipes declare what they provide (and this can create a conflict)
|
Changelog: Feature: Add `provides` attribute to `ConanFile`: recipes can declare what they provide and Conan will fail if several recipes provide the same functionality (ODR violation).
Docs: https://github.com/conan-io/docs/pull/1786
If nothing is provided explicitly, the `provides` attribute takes one single value equal to the reference name. Two recipes providing the same functionality **in the same context** (public closure) will produce an error.
Q: Error or warning?
Q: Transitive private requirements, is it an ODR violation?
close https://github.com/conan-io/conan/issues/7252
|
2020-07-10T11:11:02Z
|
[feature] Declare what a recipe 'provides'
Add a `provides` attribute to the ConanFile class; it will list other package names that define the same functionality. Some examples:
* `libjpeg`, `libjpeg-turbo`, `mozjpeg` are different implementations of the same functionality and break the ODR principle.
* deprecated recipes like `cpp-taskflow` that takes the new name `taskflow`
* frameworks and individual libraries like a monolith `boost` and a modular one
All of them will introduce duplicated functionality in the graph (probably linking problems) that we want to avoid.
Recipes `mozjpeg` and `libjpeg-turbo` will contain a `provides = "libjpeg"` attribute and Conan will raise if both are present in the graph (any alternative and the _master_ one).
The workaround for this situation is to replace the requirement by modifying one recipe in the middle of the graph to change its requirements, but eventually this will remove all the alternative implementations from the graph. **The actual solution would be to create a "proxy/virtual" recipe that, based on an option, will choose one of the implementations**, and all the recipes should require that proxy recipe instead of the actual jpeg implementation. Conan's built-in functionality ensures that there is only one instance of the proxy recipe and therefore only one implementation of the actual functionality.
We should provide a POC together with the feature implementation to check it actually works.
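For illustration, two alternative recipes declaring the same capability could look like this (versions are made up and each class would live in its own `conanfile.py`); with the attribute described above, having both in the same graph would be reported as a conflict:
```python
from conans import ConanFile


# conanfile.py of libjpeg-turbo (illustrative)
class LibjpegTurboConan(ConanFile):
    name = "libjpeg-turbo"
    version = "2.0.5"
    provides = "libjpeg"


# conanfile.py of mozjpeg (illustrative)
class MozjpegConan(ConanFile):
    name = "mozjpeg"
    version = "4.0.0"
    provides = "libjpeg"
```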
|
debian documentation: https://www.debian.org/doc/debian-policy/ch-relationships.html#virtual-packages-provides
more cases:
- [LibreSSL](https://www.libressl.org/), [BoringSSL](https://boringssl.googlesource.com/boringssl/) and [OpenSSL](https://www.openssl.org/)
- [libav](https://libav.org/) and [ffmpeg](https://ffmpeg.org/)
- [MariaDB client](https://downloads.mariadb.org/client-native/) and [MySQL client](https://dev.mysql.com/downloads/c-api/)
Reading some comments, I wonder if the value for the `provides` should be the name of the _master_ recipe or the name of the "virtual/proxy" one. With the name of the "virtual/proxy", Conan can show a meaningful message with the requirement you should use to fix the conflict.
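A rough sketch of the "proxy/virtual" recipe idea mentioned above, choosing one implementation through an option (the recipe name, versions and option values are illustrative assumptions, not a defined API):
```python
from conans import ConanFile


class JpegProxyConan(ConanFile):
    name = "jpeg"  # the single name consumers would require
    version = "1.0"
    options = {"implementation": ["libjpeg", "libjpeg-turbo", "mozjpeg"]}
    default_options = {"implementation": "libjpeg"}

    def requirements(self):
        # Exactly one real implementation ends up in the dependency graph
        refs = {"libjpeg": "libjpeg/9d",
                "libjpeg-turbo": "libjpeg-turbo/2.0.5",
                "mozjpeg": "mozjpeg/4.0.0"}
        self.requires(refs[str(self.options.implementation)])
```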
|
[
{
"body": "Add a `provides` attribute to the ConanFile class, it will list other package names that define the same functionality. Some examples:\r\n * `libjpeg`, `libjpeg-turbo`, `mozjpeg` are different implementations of the same functionality and break the ODR principle. \r\n * deprecated recipes like `cpp-taskflow` that takes the new name `taskflow`\r\n * frameworks and individual libraries like a monolith `boost` and a modular one\r\n\r\nAll of them will introduce duplicated functionality in the graph (probably linking problems) that we want to avoid.\r\n\r\nRecipes `mozjpeg` and `libjpeg-turbo` will contain a `provides = \"libjpeg\"` attribute and Conan will raise if both are present in the graph (any alternative and the _master_ one).\r\n\r\nThe workaround for this situation is to replace the requirement modifying one recipe in the middle to change the requirements, but eventually it will remove all the alternative implementations from the graph. **The actual solution would be to create a \"proxy/virtual\" recipe that, based on an option, will choose one of the implementations**, and all the recipes should require that proxy recipe instead of the actual jpeg implementation. Conan built-in functionality ensures that there is only one instance of the proxy recipe and there will be only one implementation of the actual functionality.\r\n\r\nWe should provide a POC together with the feature implementation to check it actually works.",
"number": 7252,
"title": "[feature] Declare what a recipe 'provides'"
}
] |
693a458c2ff87e2182aa56f25e77743580363a74
|
{
"head_commit": "521baaafa15a89e59f2c501f61c55a4392bbfeb2",
"head_commit_message": "textwrap.indent not available in py27",
"patch_to_review": "diff --git a/conans/client/graph/graph_manager.py b/conans/client/graph/graph_manager.py\nindex f880e0ea9ee..bbc8c83e759 100644\n--- a/conans/client/graph/graph_manager.py\n+++ b/conans/client/graph/graph_manager.py\n@@ -1,6 +1,6 @@\n import fnmatch\n import os\n-from collections import OrderedDict\n+from collections import OrderedDict, defaultdict\n \n from conans.client.conanfile.configure import run_configure_method\n from conans.client.generators.text import TXTGenerator\n@@ -114,8 +114,14 @@ def load_graph(self, reference, create_reference, graph_info, build_mode, check_\n \"\"\" main entry point to compute a full dependency graph\n \"\"\"\n root_node = self._load_root_node(reference, create_reference, graph_info)\n- return self._resolve_graph(root_node, graph_info, build_mode, check_updates, update, remotes,\n- recorder, apply_build_requires=apply_build_requires)\n+ deps_graph = self._resolve_graph(root_node, graph_info, build_mode, check_updates, update,\n+ remotes, recorder,\n+ apply_build_requires=apply_build_requires)\n+\n+ # Run some validations once the graph is built\n+ self._validate_graph_provides(deps_graph)\n+\n+ return deps_graph\n \n def _load_root_node(self, reference, create_reference, graph_info):\n \"\"\" creates the first, root node of the graph, loading or creating a conanfile\n@@ -294,15 +300,15 @@ def _recurse_build_requires(self, graph, builder, check_updates,\n continue\n # Packages with PACKAGE_ID_UNKNOWN might be built in the future, need build requires\n if (node.binary not in (BINARY_BUILD, BINARY_EDITABLE, BINARY_UNKNOWN)\n- and node.recipe != RECIPE_CONSUMER):\n+ and node.recipe != RECIPE_CONSUMER):\n continue\n package_build_requires = self._get_recipe_build_requires(node.conanfile, default_context)\n str_ref = str(node.ref)\n new_profile_build_requires = []\n for pattern, build_requires in profile_build_requires.items():\n if ((node.recipe == RECIPE_CONSUMER and pattern == \"&\") or\n- (node.recipe != RECIPE_CONSUMER and pattern == \"&!\") or\n- fnmatch.fnmatch(str_ref, pattern)):\n+ (node.recipe != RECIPE_CONSUMER and pattern == \"&!\") or\n+ fnmatch.fnmatch(str_ref, pattern)):\n for build_require in build_requires:\n br_key = (build_require.name, default_context)\n if br_key in package_build_requires: # Override defined\n@@ -363,6 +369,28 @@ def _load_graph(self, root_node, check_updates, update, build_mode, remotes,\n \n return graph\n \n+ @staticmethod\n+ def _validate_graph_provides(deps_graph):\n+ # Check that two different nodes are not providing the same (ODR violation)\n+ for node in deps_graph.nodes:\n+ provides = defaultdict(list)\n+ if node.conanfile.provides is not None: # consumer conanfile doesn't initialize\n+ for it in node.conanfile.provides:\n+ provides[it].append(node)\n+\n+ for item in filter(lambda u: u.context == CONTEXT_HOST, node.public_closure):\n+ for it in item.conanfile.provides:\n+ provides[it].append(item)\n+\n+ # Check (and report) if any functionality is provided by several different recipes\n+ conflicts = [it for it in provides.keys() if len(provides[it]) > 1]\n+ if conflicts:\n+ msg_lines = [\"At least two recipes provides the same functionality:\"]\n+ for it in conflicts:\n+ nodes_str = \"', '\".join([n.conanfile.display_name for n in provides[it]])\n+ msg_lines.append(\" - '{}' provided by '{}'\".format(it, nodes_str))\n+ raise ConanException('\\n'.join(msg_lines))\n+\n \n def load_deps_info(current_path, conanfile, required):\n def get_forbidden_access_object(field_name):\ndiff --git 
a/conans/model/conan_file.py b/conans/model/conan_file.py\nindex 2e72cf948c1..72c00f21037 100644\n--- a/conans/model/conan_file.py\n+++ b/conans/model/conan_file.py\n@@ -18,6 +18,7 @@\n from conans.util.conan_v2_mode import CONAN_V2_MODE_ENVVAR\n from conans.util.conan_v2_mode import conan_v2_behavior\n from conans.util.env_reader import get_env\n+from conans.util.misc import make_tuple\n \n \n def create_options(conanfile):\n@@ -129,6 +130,8 @@ class ConanFile(object):\n options = None\n default_options = None\n \n+ provides = None\n+\n def __init__(self, output, runner, display_name=\"\", user=None, channel=None):\n # an output stream (writeln, info, warn error)\n self.output = ScopedOutput(display_name, output)\n@@ -170,6 +173,9 @@ def initialize(self, settings, env):\n # user specified env variables\n self._conan_env_values = env.copy() # user specified -e\n \n+ # Recipe provides its own name if nothing else is defined\n+ self.provides = make_tuple(self.provides or self.name)\n+\n @property\n def env(self):\n \"\"\"Apply the self.deps_env_info into a copy of self._conan_env_values (will prioritize the\ndiff --git a/conans/test/functional/graph_lock/graph_lock_test.py b/conans/test/functional/graph_lock/graph_lock_test.py\nindex 01c0078ea45..ad8cd94afec 100644\n--- a/conans/test/functional/graph_lock/graph_lock_test.py\n+++ b/conans/test/functional/graph_lock/graph_lock_test.py\n@@ -368,11 +368,11 @@ def export_pkg_test(self):\n class GraphLockBuildRequireVersionRangeTest(GraphLockVersionRangeTest):\n consumer = GenConanfile().with_name(\"PkgB\").with_version(\"0.1\")\\\n .with_build_require_plain(\"PkgA/[>=0.1]@user/channel\")\n- pkg_b_revision = \"b6f49e5ba6dd3d64af09a2f288e71330\"\n+ pkg_b_revision = \"b7338f650cc61f8e0ad0285cd1b77b92\"\n pkg_b_id = \"5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\"\n- pkg_b_package_revision = \"#33a5634bbd9ec26b369d3900d91ea9a0\"\n- modified_pkg_b_revision = \"62a38c702f14cb9de952bb22b40d6ecc\"\n- modified_pkg_b_package_revision = \"#b7850e289326d594fbc10088d55f5259\"\n+ pkg_b_package_revision = \"#a903a10014693fc381c6bc8e6c91bf4a\"\n+ modified_pkg_b_revision = \"d4adbdf5c7426cfa4cdca63bb96984b6\"\n+ modified_pkg_b_package_revision = \"#fe05c10f59b44a0ae5de0269c5a4ca3e\"\n \n \n class GraphLockVersionRangeInfoTest(GraphLockVersionRangeTest):\n@@ -603,7 +603,7 @@ def build_order_build_requires_test(self):\n \":5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\", ca[1])\n level1 = jsonbo[1]\n cb = level1[0]\n- self.assertEqual(\"CB/1.0@user/channel#29352c82c9c6b7d1be85524ef607f77f\"\n+ self.assertEqual(\"CB/1.0@user/channel#5a90ed235bbda74b863e4045d41e5703\"\n \":5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\", cb[1])\n \n def consumer_build_order_test(self):\n@@ -705,7 +705,7 @@ def test_not_locked_build_requires(self):\n \n # Building the graphlock we get the message\n client.run(\"graph lock variant.py\")\n- fmpe = \"ffmpeg/1.0#5522e93e2abfbd455e6211fe4d0531a2:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\"\n+ fmpe = \"ffmpeg/1.0#917812a86b6c17f109d7537f3c5102e4:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\"\n font = \"fontconfig/1.0#f3367e0e7d170aa12abccb175fee5f97:\"\\\n \"5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\"\n harf = \"harfbuzz/1.0#3172f5e84120f235f75f8dd90fdef84f:\"\\\n@@ -761,7 +761,7 @@ def test_build_requires_should_be_locked(self):\n \n # Building the graphlock we get the message\n client.run(\"graph lock variant.py --build\")\n- fmpe = \"ffmpeg/1.0#5522e93e2abfbd455e6211fe4d0531a2:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\"\n+ fmpe = 
\"ffmpeg/1.0#917812a86b6c17f109d7537f3c5102e4:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\"\n font = \"fontconfig/1.0#f3367e0e7d170aa12abccb175fee5f97:\"\\\n \"5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\"\n harf = \"harfbuzz/1.0#3172f5e84120f235f75f8dd90fdef84f:\"\\\ndiff --git a/conans/test/functional/recipe_provides/__init__.py b/conans/test/functional/recipe_provides/__init__.py\nnew file mode 100644\nindex 00000000000..e69de29bb2d\ndiff --git a/conans/test/functional/recipe_provides/test_build_requires_conflicts.py b/conans/test/functional/recipe_provides/test_build_requires_conflicts.py\nnew file mode 100644\nindex 00000000000..612559cfddc\n--- /dev/null\n+++ b/conans/test/functional/recipe_provides/test_build_requires_conflicts.py\n@@ -0,0 +1,81 @@\n+import unittest\n+\n+from parameterized import parameterized\n+\n+from conans.test.utils.tools import TestClient, GenConanfile\n+\n+\n+class BuildRequiresTestCase(unittest.TestCase):\n+\n+ @parameterized.expand([(True,), (False,)])\n+ def test_build_require_lib(self, use_single_profile):\n+ t = TestClient()\n+ t.save({'br_lib.py': GenConanfile(\"br_lib\", \"v1\").with_provides(\"libjpeg\"),\n+ 'br.py': GenConanfile(\"br\", \"v1\").with_require_plain(\"br_lib/v1\"),\n+ 'app.py': GenConanfile(\"app\", \"v1\").with_build_require_plain(\"br/v1\")\n+ .with_provides(\"libjpeg\")})\n+ t.run(\"create br_lib.py\")\n+ t.run(\"create br.py\")\n+ if use_single_profile:\n+ t.run(\"install app.py\", assert_error=True)\n+ self.assertIn(\" - 'libjpeg' provided by 'app.py (app/v1)', 'br_lib/v1'\", t.out)\n+ else:\n+ t.run(\"install app.py --profile:host=default --profile:build=default\")\n+\n+ @parameterized.expand([(True,), (False,)])\n+ def test_build_require_host(self, use_single_profile):\n+ t = TestClient()\n+ t.save({'br_lib.py': GenConanfile(\"br_lib\", \"v1\").with_provides(\"libjpeg\"),\n+ 'br.py': GenConanfile(\"br\", \"v1\").with_require_plain(\"br_lib/v1\"),\n+ 'app.py': GenConanfile(\"app\", \"v1\").with_build_require_plain(\"br/v1\",\n+ force_host_context=True)\n+ .with_provides(\"libjpeg\")})\n+ t.run(\"create br_lib.py\")\n+ t.run(\"create br.py\")\n+ if use_single_profile:\n+ t.run(\"install app.py\", assert_error=True)\n+ else:\n+ t.run(\"install app.py --profile:host=default --profile:build=default\", assert_error=True)\n+ self.assertIn(\" - 'libjpeg' provided by 'app.py (app/v1)', 'br_lib/v1'\", t.out)\n+\n+ @parameterized.expand([(True,), (False,)])\n+ def test_build_require_host_transitive(self, use_single_profile):\n+ t = TestClient()\n+ t.save({'br.py': GenConanfile(\"br\", \"v1\").with_provides(\"libjpeg\"),\n+ 'lib.py': GenConanfile(\"lib\", \"v1\").with_build_require_plain(\"br/v1\",\n+ force_host_context=True),\n+ 'app.py': GenConanfile(\"app\", \"v1\").with_require_plain(\"lib/v1\")\n+ .with_provides(\"libjpeg\")})\n+ t.run(\"export br.py\")\n+ t.run(\"export lib.py\")\n+ if use_single_profile:\n+ t.run(\"install app.py --build\")\n+ else:\n+ t.run(\"install app.py --profile:host=default --profile:build=default --build\")\n+\n+ @parameterized.expand([(True,), (False,)])\n+ def test_build_require_branches(self, use_single_profile):\n+ t = TestClient()\n+ t.save({'br_lhs.py': GenConanfile(\"br_lhs\", \"v1\").with_provides(\"libjpeg\"),\n+ 'br_rhs.py': GenConanfile(\"br_rhs\", \"v1\").with_provides(\"libjpeg\"),\n+ 'app.py': GenConanfile(\"app\", \"v1\").with_build_require_plain(\"br_lhs/v1\")\n+ .with_build_require_plain(\"br_rhs/v1\")})\n+ t.run(\"create br_lhs.py\")\n+ t.run(\"create br_rhs.py\")\n+ if use_single_profile:\n+ 
t.run(\"install app.py\", assert_error=True)\n+ self.assertIn(\" - 'libjpeg' provided by 'br_lhs/v1', 'br_rhs/v1'\", t.out)\n+ else:\n+ t.run(\"install app.py --profile:host=default --profile:build=default\")\n+\n+ def test_build_require_of_build_require(self):\n+ # Only makes sense for two profiles\n+ t = TestClient()\n+ t.save({'br_nested.py': GenConanfile(\"br_nested\", \"v1\").with_provides(\"libjpeg\"),\n+ 'br.py': GenConanfile(\"br\", \"v1\").with_provides(\"libjpeg\")\n+ .with_build_require_plain(\"br_nested/v1\"),\n+ 'app.py': GenConanfile(\"app\", \"v1\").with_provides(\"libjpeg\")\n+ .with_build_require_plain(\"br/v1\")})\n+ t.run(\"export br_nested.py\")\n+ t.run(\"export br.py\")\n+ t.run(\"install app.py --profile:host=default --profile:build=default --build\")\ndiff --git a/conans/test/functional/recipe_provides/test_requires_conflicts.py b/conans/test/functional/recipe_provides/test_requires_conflicts.py\nnew file mode 100644\nindex 00000000000..224f4c5bf54\n--- /dev/null\n+++ b/conans/test/functional/recipe_provides/test_requires_conflicts.py\n@@ -0,0 +1,58 @@\n+import unittest\n+import textwrap\n+import unittest\n+\n+from jinja2 import Template\n+\n+from conans.test.utils.tools import TestClient, GenConanfile\n+\n+\n+class RequiresConflictsTestCase(unittest.TestCase):\n+ header_only = Template(textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+\n+ class Recipe(ConanFile):\n+ requires = '{{ requires|join(\"', '\") }}'\n+ def package_info(self):\n+ self.info.header_only()\n+ \"\"\"))\n+\n+ def test_conflict_requirement(self):\n+ t = TestClient()\n+ t.save({'requires.py': GenConanfile(\"req\", \"v1\").with_provides(\"libjpeg\"),\n+ 'app.py': GenConanfile().with_provides(\"libjpeg\")\n+ .with_require_plain(\"req/v1\")})\n+ t.run(\"export requires.py\")\n+ t.run(\"install app.py app/version@\", assert_error=True)\n+ self.assertIn(\" - 'libjpeg' provided by 'app.py (app/version)', 'req/v1'\", t.out)\n+\n+ def test_conflict_transitive(self):\n+ t = TestClient()\n+ t.save({'top.py': GenConanfile(\"top\", \"v1\").with_provides(\"libjpeg\"),\n+ 'middle.py': self.header_only.render(requires=['top/v1', ]),\n+ 'app.py': GenConanfile().with_provides(\"libjpeg\")\n+ .with_require_plain(\"middle/v1\")})\n+ t.run(\"export top.py\")\n+ t.run(\"export middle.py middle/v1@\")\n+ t.run(\"install app.py app/version@\", assert_error=True)\n+ self.assertIn(\" - 'libjpeg' provided by 'app.py (app/version)', 'top/v1'\", t.out)\n+\n+ def test_conflict_branches(self):\n+ t = TestClient()\n+ t.save({'lhs.py': GenConanfile(\"lhs\", \"v1\").with_provides(\"libjpeg\"),\n+ 'rhs.py': GenConanfile(\"rhs\", \"v1\").with_provides(\"libjpeg\"),\n+ 'app.py': GenConanfile().with_require_plain(\"lhs/v1\").with_require_plain(\"rhs/v1\")})\n+ t.run(\"export lhs.py\")\n+ t.run(\"export rhs.py\")\n+ t.run(\"install app.py app/version@\", assert_error=True)\n+ self.assertIn(\" - 'libjpeg' provided by 'lhs/v1', 'rhs/v1'\", t.out)\n+\n+ def test_conflict_branches_txt(self):\n+ t = TestClient()\n+ t.save({'lhs.py': GenConanfile(\"lhs\", \"v1\").with_provides(\"libjpeg\"),\n+ 'rhs.py': GenConanfile(\"rhs\", \"v1\").with_provides(\"libjpeg\"),\n+ 'conanfile.txt': \"[requires]\\nlhs/v1\\nrhs/v1\"})\n+ t.run(\"export lhs.py\")\n+ t.run(\"export rhs.py\")\n+ t.run(\"install conanfile.txt\", assert_error=True)\n+ self.assertIn(\" - 'libjpeg' provided by 'lhs/v1', 'rhs/v1'\", t.out)\ndiff --git a/conans/test/functional/recipe_provides/test_requires_private.py 
b/conans/test/functional/recipe_provides/test_requires_private.py\nnew file mode 100644\nindex 00000000000..f20919ffec8\n--- /dev/null\n+++ b/conans/test/functional/recipe_provides/test_requires_private.py\n@@ -0,0 +1,27 @@\n+import unittest\n+\n+from conans.test.utils.tools import TestClient, GenConanfile\n+\n+\n+class RequiresPrivateTestCase(unittest.TestCase):\n+\n+ def test_conflict_branches_private(self):\n+ t = TestClient()\n+ t.save({'lhs.py': GenConanfile(\"lhs\", \"v1\").with_provides(\"libjpeg\"),\n+ 'rhs.py': GenConanfile(\"rhs\", \"v1\").with_provides(\"libjpeg\"),\n+ 'app.py': GenConanfile().with_require_plain(\"lhs/v1\", private=True)\n+ .with_require_plain(\"rhs/v1\", private=True)})\n+ t.run(\"export lhs.py\")\n+ t.run(\"export rhs.py\")\n+ t.run(\"install app.py app/version@\", assert_error=True)\n+ self.assertIn(\" - 'libjpeg' provided by 'lhs/v1', 'rhs/v1'\", t.out)\n+\n+ def test_conflict_transitive(self):\n+ t = TestClient()\n+ t.save({'top.py': GenConanfile(\"top\", \"v1\").with_provides(\"libjpeg\"),\n+ 'middle.py': GenConanfile(\"middle\", \"v1\").with_require_plain(\"top/v1\", private=True),\n+ 'app.py': GenConanfile().with_provides(\"libjpeg\")\n+ .with_require_plain(\"middle/v1\", private=True)})\n+ t.run(\"export top.py\")\n+ t.run(\"export middle.py middle/v1@\")\n+ t.run(\"install app.py app/version@ --build=missing\")\ndiff --git a/conans/test/unittests/util/misc/__init__.py b/conans/test/unittests/util/misc/__init__.py\nnew file mode 100644\nindex 00000000000..e69de29bb2d\ndiff --git a/conans/test/unittests/util/misc/test_make_tuple.py b/conans/test/unittests/util/misc/test_make_tuple.py\nnew file mode 100644\nindex 00000000000..2d29c5c9112\n--- /dev/null\n+++ b/conans/test/unittests/util/misc/test_make_tuple.py\n@@ -0,0 +1,22 @@\n+import unittest\n+\n+from conans.util.misc import make_tuple\n+\n+\n+class MakeTupleTestCase(unittest.TestCase):\n+ def test_corner_cases(self):\n+ self.assertIsNone(make_tuple(None))\n+ self.assertTupleEqual(make_tuple(\"one\"), (\"one\",))\n+\n+ def test_iterable(self):\n+ self.assertTupleEqual(make_tuple([1, 2, 3]), (1, 2, 3))\n+ self.assertTupleEqual(make_tuple((\"one\", \"two\")), (\"one\", \"two\"))\n+ self.assertTupleEqual(make_tuple({1: \"a\", 2: \"b\", 3: \"c\"}.keys()), (1, 2, 3))\n+ self.assertTupleEqual(make_tuple({1: \"a\", 2: \"b\", 3: \"c\"}.values()), (\"a\", \"b\", \"c\"))\n+\n+ def test_generator(self):\n+ def items():\n+ for i in [1, 2, 3]:\n+ yield i\n+\n+ self.assertTupleEqual(make_tuple(items()), (1, 2, 3))\ndiff --git a/conans/test/utils/genconanfile.py b/conans/test/utils/genconanfile.py\nindex b7315fc3693..9df9794df69 100644\n--- a/conans/test/utils/genconanfile.py\n+++ b/conans/test/utils/genconanfile.py\n@@ -20,6 +20,7 @@ def __init__(self, name=None, version=None):\n self._options = {}\n self._generators = []\n self._default_options = {}\n+ self._provides = []\n self._package_files = {}\n self._package_files_env = {}\n self._package_files_link = {}\n@@ -41,6 +42,10 @@ def with_version(self, version):\n self._version = version\n return self\n \n+ def with_provides(self, provides):\n+ self._provides.append(provides)\n+ return self\n+\n def with_revision_mode(self, revision_mode):\n self._revision_mode = revision_mode\n return self\n@@ -67,11 +72,11 @@ def with_requirement_plain(self, ref_str, private=False, override=False):\n self._requirements.append((ref_str, private, override))\n return self\n \n- def with_build_require(self, ref):\n- return self.with_build_require_plain(ref.full_str())\n+ def 
with_build_require(self, ref, force_host_context=False):\n+ return self.with_build_require_plain(ref.full_str(), force_host_context=force_host_context)\n \n- def with_build_require_plain(self, ref_str):\n- self._build_requires.append(ref_str)\n+ def with_build_require_plain(self, ref_str, force_host_context=False):\n+ self._build_requires.append((ref_str, force_host_context))\n return self\n \n def with_import(self, i):\n@@ -137,6 +142,13 @@ def _version_line(self):\n return \"\"\n return \"version = '{}'\".format(self._version)\n \n+ @property\n+ def _provides_line(self):\n+ if not self._provides:\n+ return \"\"\n+ line = \", \".join('\"{}\"'.format(provide) for provide in self._provides)\n+ return \"provides = {}\".format(line)\n+\n @property\n def _scm_line(self):\n if not self._scm:\n@@ -182,12 +194,15 @@ def _default_options_line(self):\n return tmp\n \n @property\n- def _build_requires_line(self):\n+ def _build_requirements_method(self):\n if not self._build_requires:\n return \"\"\n- line = \", \".join(['\"{}\"'.format(r) for r in self._build_requires])\n- tmp = \"build_requires = %s\" % line\n- return tmp\n+\n+ lines = []\n+ for ref, force_host_context in self._build_requires:\n+ force_host = \", force_host_context=True\" if force_host_context else \"\"\n+ lines.append(' self.build_requires(\"{}\"{})'.format(ref, force_host))\n+ return \"def build_requirements(self):\\n{}\\n\".format(\"\\n\".join(lines))\n \n @property\n def _requires_line(self):\n@@ -304,14 +319,16 @@ def __repr__(self):\n ret.append(\" {}\".format(self._name_line))\n if self._version_line:\n ret.append(\" {}\".format(self._version_line))\n+ if self._provides_line:\n+ ret.append(\" {}\".format(self._provides_line))\n if self._generators_line:\n ret.append(\" {}\".format(self._generators_line))\n if self._requires_line:\n ret.append(\" {}\".format(self._requires_line))\n if self._requirements_method:\n ret.append(\" {}\".format(self._requirements_method))\n- if self._build_requires_line:\n- ret.append(\" {}\".format(self._build_requires_line))\n+ if self._build_requirements_method:\n+ ret.append(\" {}\".format(self._build_requirements_method))\n if self._scm:\n ret.append(\" {}\".format(self._scm_line))\n if self._revision_mode_line:\ndiff --git a/conans/util/misc.py b/conans/util/misc.py\nnew file mode 100644\nindex 00000000000..1b3b3bc54b8\n--- /dev/null\n+++ b/conans/util/misc.py\n@@ -0,0 +1,18 @@\n+import six\n+\n+\n+def make_tuple(value):\n+ \"\"\" Converts the value into a tuple if the value is an iterable with the following exceptions:\n+ * a `None` value will return `None`\n+ * a string value will return a tuple with the string as the unique member\n+ \"\"\"\n+ if value is None:\n+ return None\n+\n+ if isinstance(value, six.string_types):\n+ return value,\n+\n+ if isinstance(value, six.moves.collections_abc.Iterable):\n+ return tuple(value)\n+ else:\n+ return value,\n"
}
|
[
{
"diff_hunk": "@@ -363,6 +369,28 @@ def _load_graph(self, root_node, check_updates, update, build_mode, remotes,\n \n return graph\n \n+ @staticmethod\n+ def _validate_graph_provides(deps_graph):\n+ # Check that two different nodes are not providing the same (ODR violation)\n+ for node in deps_graph.nodes:\n+ provides = defaultdict(list)\n+ if node.conanfile.provides is not None: # consumer conanfile doesn't initialize\n+ for it in node.conanfile.provides:\n+ provides[it].append(node)\n+\n+ for item in filter(lambda u: u.context == CONTEXT_HOST, node.public_closure):\n+ for it in item.conanfile.provides:\n+ provides[it].append(item)\n+\n+ # Check (and report) if any functionality is provided by several different recipes\n+ conflicts = [it for it in provides.keys() if len(provides[it]) > 1]",
"line": null,
"original_line": 386,
"original_start_line": null,
"path": "conans/client/graph/graph_manager.py",
"start_line": null,
"text": "@user1:\n```python\r\nconflicts = [it for it, nodes in provides.items() if len(nodes) > 1]\r\n```"
}
] |
1859b38f7d27ecdd72ed88a7bcabce57e1c71b4f
|
diff --git a/conans/client/conanfile/configure.py b/conans/client/conanfile/configure.py
index 510a935a804..074e524177c 100644
--- a/conans/client/conanfile/configure.py
+++ b/conans/client/conanfile/configure.py
@@ -2,6 +2,7 @@
from conans.model.conan_file import get_env_context_manager
from conans.util.conan_v2_mode import conan_v2_behavior, CONAN_V2_MODE_ENVVAR
from conans.util.env_reader import get_env
+from conans.util.misc import make_tuple
def run_configure_method(conanfile, down_options, down_ref, ref):
@@ -30,6 +31,8 @@ def run_configure_method(conanfile, down_options, down_ref, ref):
conanfile.settings.validate() # All has to be ok!
conanfile.options.validate()
+ # Recipe provides its own name if nothing else is defined
+ conanfile.provides = make_tuple(conanfile.provides or conanfile.name)
_validate_fpic(conanfile)
diff --git a/conans/client/graph/graph_manager.py b/conans/client/graph/graph_manager.py
index f880e0ea9ee..c80ed32e4e2 100644
--- a/conans/client/graph/graph_manager.py
+++ b/conans/client/graph/graph_manager.py
@@ -1,6 +1,6 @@
import fnmatch
import os
-from collections import OrderedDict
+from collections import OrderedDict, defaultdict
from conans.client.conanfile.configure import run_configure_method
from conans.client.generators.text import TXTGenerator
@@ -114,8 +114,14 @@ def load_graph(self, reference, create_reference, graph_info, build_mode, check_
""" main entry point to compute a full dependency graph
"""
root_node = self._load_root_node(reference, create_reference, graph_info)
- return self._resolve_graph(root_node, graph_info, build_mode, check_updates, update, remotes,
- recorder, apply_build_requires=apply_build_requires)
+ deps_graph = self._resolve_graph(root_node, graph_info, build_mode, check_updates, update,
+ remotes, recorder,
+ apply_build_requires=apply_build_requires)
+
+ # Run some validations once the graph is built
+ self._validate_graph_provides(deps_graph)
+
+ return deps_graph
def _load_root_node(self, reference, create_reference, graph_info):
""" creates the first, root node of the graph, loading or creating a conanfile
@@ -294,15 +300,15 @@ def _recurse_build_requires(self, graph, builder, check_updates,
continue
# Packages with PACKAGE_ID_UNKNOWN might be built in the future, need build requires
if (node.binary not in (BINARY_BUILD, BINARY_EDITABLE, BINARY_UNKNOWN)
- and node.recipe != RECIPE_CONSUMER):
+ and node.recipe != RECIPE_CONSUMER):
continue
package_build_requires = self._get_recipe_build_requires(node.conanfile, default_context)
str_ref = str(node.ref)
new_profile_build_requires = []
for pattern, build_requires in profile_build_requires.items():
if ((node.recipe == RECIPE_CONSUMER and pattern == "&") or
- (node.recipe != RECIPE_CONSUMER and pattern == "&!") or
- fnmatch.fnmatch(str_ref, pattern)):
+ (node.recipe != RECIPE_CONSUMER and pattern == "&!") or
+ fnmatch.fnmatch(str_ref, pattern)):
for build_require in build_requires:
br_key = (build_require.name, default_context)
if br_key in package_build_requires: # Override defined
@@ -363,6 +369,28 @@ def _load_graph(self, root_node, check_updates, update, build_mode, remotes,
return graph
+ @staticmethod
+ def _validate_graph_provides(deps_graph):
+ # Check that two different nodes are not providing the same (ODR violation)
+ for node in deps_graph.nodes:
+ provides = defaultdict(list)
+ if node.conanfile.provides is not None: # consumer conanfile doesn't initialize
+ for it in node.conanfile.provides:
+ provides[it].append(node)
+
+ for item in filter(lambda u: u.context == CONTEXT_HOST, node.public_closure):
+ for it in item.conanfile.provides:
+ provides[it].append(item)
+
+ # Check (and report) if any functionality is provided by several different recipes
+ conflicts = [it for it, nodes in provides.items() if len(nodes) > 1]
+ if conflicts:
+ msg_lines = ["At least two recipes provides the same functionality:"]
+ for it in conflicts:
+ nodes_str = "', '".join([n.conanfile.display_name for n in provides[it]])
+ msg_lines.append(" - '{}' provided by '{}'".format(it, nodes_str))
+ raise ConanException('\n'.join(msg_lines))
+
def load_deps_info(current_path, conanfile, required):
def get_forbidden_access_object(field_name):
diff --git a/conans/model/conan_file.py b/conans/model/conan_file.py
index 2e72cf948c1..8a0b272e102 100644
--- a/conans/model/conan_file.py
+++ b/conans/model/conan_file.py
@@ -1,8 +1,8 @@
import os
from contextlib import contextmanager
-from six import string_types
import six
+from six import string_types
from conans.client import tools
from conans.client.output import ScopedOutput
@@ -129,6 +129,8 @@ class ConanFile(object):
options = None
default_options = None
+ provides = None
+
def __init__(self, output, runner, display_name="", user=None, channel=None):
# an output stream (writeln, info, warn error)
self.output = ScopedOutput(display_name, output)
diff --git a/conans/test/functional/provides/__init__.py b/conans/test/functional/provides/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/conans/test/functional/provides/test_build_requires_conflicts.py b/conans/test/functional/provides/test_build_requires_conflicts.py
new file mode 100644
index 00000000000..d0800b9e21f
--- /dev/null
+++ b/conans/test/functional/provides/test_build_requires_conflicts.py
@@ -0,0 +1,81 @@
+import unittest
+
+from parameterized import parameterized
+
+from conans.test.utils.tools import TestClient, GenConanfile
+
+
+class BuildRequiresTestCase(unittest.TestCase):
+
+ @parameterized.expand([(True,), (False,)])
+ def test_build_require_lib(self, use_single_profile):
+ t = TestClient()
+ t.save({'br_lib.py': GenConanfile("br_lib", "v1").with_provides("libjpeg"),
+ 'br.py': GenConanfile("br", "v1").with_require_plain("br_lib/v1"),
+ 'app.py': GenConanfile("app", "v1").with_build_require_plain("br/v1")
+ .with_provides("libjpeg")})
+ t.run("create br_lib.py")
+ t.run("create br.py")
+ if use_single_profile:
+ t.run("install app.py", assert_error=True)
+ self.assertIn(" - 'libjpeg' provided by 'app.py (app/v1)', 'br_lib/v1'", t.out)
+ else:
+ t.run("install app.py --profile:host=default --profile:build=default")
+
+ @parameterized.expand([(True,), (False,)])
+ def test_build_require_host(self, use_single_profile):
+ t = TestClient()
+ t.save({'br_lib.py': GenConanfile("br_lib", "v1").with_provides("libjpeg"),
+ 'br.py': GenConanfile("br", "v1").with_require_plain("br_lib/v1"),
+ 'app.py': GenConanfile("app", "v1").with_build_requirement_plain("br/v1",
+ force_host_context=True)
+ .with_provides("libjpeg")})
+ t.run("create br_lib.py")
+ t.run("create br.py")
+ if use_single_profile:
+ t.run("install app.py", assert_error=True)
+ else:
+ t.run("install app.py --profile:host=default --profile:build=default", assert_error=True)
+ self.assertIn(" - 'libjpeg' provided by 'app.py (app/v1)', 'br_lib/v1'", t.out)
+
+ @parameterized.expand([(True,), (False,)])
+ def test_build_require_host_transitive(self, use_single_profile):
+ t = TestClient()
+ t.save({'br.py': GenConanfile("br", "v1").with_provides("libjpeg"),
+ 'lib.py': GenConanfile("lib", "v1").with_build_requirement_plain("br/v1",
+ force_host_context=True),
+ 'app.py': GenConanfile("app", "v1").with_require_plain("lib/v1")
+ .with_provides("libjpeg")})
+ t.run("export br.py")
+ t.run("export lib.py")
+ if use_single_profile:
+ t.run("install app.py --build")
+ else:
+ t.run("install app.py --profile:host=default --profile:build=default --build")
+
+ @parameterized.expand([(True,), (False,)])
+ def test_build_require_branches(self, use_single_profile):
+ t = TestClient()
+ t.save({'br_lhs.py': GenConanfile("br_lhs", "v1").with_provides("libjpeg"),
+ 'br_rhs.py': GenConanfile("br_rhs", "v1").with_provides("libjpeg"),
+ 'app.py': GenConanfile("app", "v1").with_build_require_plain("br_lhs/v1")
+ .with_build_require_plain("br_rhs/v1")})
+ t.run("create br_lhs.py")
+ t.run("create br_rhs.py")
+ if use_single_profile:
+ t.run("install app.py", assert_error=True)
+ self.assertIn(" - 'libjpeg' provided by 'br_lhs/v1', 'br_rhs/v1'", t.out)
+ else:
+ t.run("install app.py --profile:host=default --profile:build=default")
+
+ def test_build_require_of_build_require(self):
+ # Only makes sense for two profiles
+ t = TestClient()
+ t.save({'br_nested.py': GenConanfile("br_nested", "v1").with_provides("libjpeg"),
+ 'br.py': GenConanfile("br", "v1").with_provides("libjpeg")
+ .with_build_require_plain("br_nested/v1"),
+ 'app.py': GenConanfile("app", "v1").with_provides("libjpeg")
+ .with_build_require_plain("br/v1")})
+ t.run("export br_nested.py")
+ t.run("export br.py")
+ t.run("install app.py --profile:host=default --profile:build=default --build")
diff --git a/conans/test/functional/provides/test_conditional_provides.py b/conans/test/functional/provides/test_conditional_provides.py
new file mode 100644
index 00000000000..417bf36c7f4
--- /dev/null
+++ b/conans/test/functional/provides/test_conditional_provides.py
@@ -0,0 +1,31 @@
+import textwrap
+import unittest
+
+from conans.test.utils.tools import TestClient, GenConanfile
+
+
+class ConditionalProvidesTestCase(unittest.TestCase):
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+
+ class Recipe(ConanFile):
+ requires = 'req/v1'
+ options = {'conflict': [True, False]}
+ default_options = {'conflict': False}
+
+ def configure(self):
+ if self.options.conflict:
+ self.provides = 'libjpeg'
+
+ def package_info(self):
+ self.info.header_only()
+ """)
+
+ def test_conflict_requirement(self):
+ t = TestClient()
+ t.save({'requires.py': GenConanfile("req", "v1").with_provides("libjpeg"),
+ 'app.py': self.conanfile})
+ t.run("create requires.py")
+ t.run("install app.py app/version@")
+ t.run("install app.py app/version@ -o app:conflict=True", assert_error=True)
+ self.assertIn(" - 'libjpeg' provided by 'app.py (app/version)', 'req/v1'", t.out)
diff --git a/conans/test/functional/provides/test_requires_conflicts.py b/conans/test/functional/provides/test_requires_conflicts.py
new file mode 100644
index 00000000000..224f4c5bf54
--- /dev/null
+++ b/conans/test/functional/provides/test_requires_conflicts.py
@@ -0,0 +1,58 @@
+import unittest
+import textwrap
+import unittest
+
+from jinja2 import Template
+
+from conans.test.utils.tools import TestClient, GenConanfile
+
+
+class RequiresConflictsTestCase(unittest.TestCase):
+ header_only = Template(textwrap.dedent("""
+ from conans import ConanFile
+
+ class Recipe(ConanFile):
+ requires = '{{ requires|join("', '") }}'
+ def package_info(self):
+ self.info.header_only()
+ """))
+
+ def test_conflict_requirement(self):
+ t = TestClient()
+ t.save({'requires.py': GenConanfile("req", "v1").with_provides("libjpeg"),
+ 'app.py': GenConanfile().with_provides("libjpeg")
+ .with_require_plain("req/v1")})
+ t.run("export requires.py")
+ t.run("install app.py app/version@", assert_error=True)
+ self.assertIn(" - 'libjpeg' provided by 'app.py (app/version)', 'req/v1'", t.out)
+
+ def test_conflict_transitive(self):
+ t = TestClient()
+ t.save({'top.py': GenConanfile("top", "v1").with_provides("libjpeg"),
+ 'middle.py': self.header_only.render(requires=['top/v1', ]),
+ 'app.py': GenConanfile().with_provides("libjpeg")
+ .with_require_plain("middle/v1")})
+ t.run("export top.py")
+ t.run("export middle.py middle/v1@")
+ t.run("install app.py app/version@", assert_error=True)
+ self.assertIn(" - 'libjpeg' provided by 'app.py (app/version)', 'top/v1'", t.out)
+
+ def test_conflict_branches(self):
+ t = TestClient()
+ t.save({'lhs.py': GenConanfile("lhs", "v1").with_provides("libjpeg"),
+ 'rhs.py': GenConanfile("rhs", "v1").with_provides("libjpeg"),
+ 'app.py': GenConanfile().with_require_plain("lhs/v1").with_require_plain("rhs/v1")})
+ t.run("export lhs.py")
+ t.run("export rhs.py")
+ t.run("install app.py app/version@", assert_error=True)
+ self.assertIn(" - 'libjpeg' provided by 'lhs/v1', 'rhs/v1'", t.out)
+
+ def test_conflict_branches_txt(self):
+ t = TestClient()
+ t.save({'lhs.py': GenConanfile("lhs", "v1").with_provides("libjpeg"),
+ 'rhs.py': GenConanfile("rhs", "v1").with_provides("libjpeg"),
+ 'conanfile.txt': "[requires]\nlhs/v1\nrhs/v1"})
+ t.run("export lhs.py")
+ t.run("export rhs.py")
+ t.run("install conanfile.txt", assert_error=True)
+ self.assertIn(" - 'libjpeg' provided by 'lhs/v1', 'rhs/v1'", t.out)
diff --git a/conans/test/functional/provides/test_requires_private.py b/conans/test/functional/provides/test_requires_private.py
new file mode 100644
index 00000000000..f20919ffec8
--- /dev/null
+++ b/conans/test/functional/provides/test_requires_private.py
@@ -0,0 +1,27 @@
+import unittest
+
+from conans.test.utils.tools import TestClient, GenConanfile
+
+
+class RequiresPrivateTestCase(unittest.TestCase):
+
+ def test_conflict_branches_private(self):
+ t = TestClient()
+ t.save({'lhs.py': GenConanfile("lhs", "v1").with_provides("libjpeg"),
+ 'rhs.py': GenConanfile("rhs", "v1").with_provides("libjpeg"),
+ 'app.py': GenConanfile().with_require_plain("lhs/v1", private=True)
+ .with_require_plain("rhs/v1", private=True)})
+ t.run("export lhs.py")
+ t.run("export rhs.py")
+ t.run("install app.py app/version@", assert_error=True)
+ self.assertIn(" - 'libjpeg' provided by 'lhs/v1', 'rhs/v1'", t.out)
+
+ def test_conflict_transitive(self):
+ t = TestClient()
+ t.save({'top.py': GenConanfile("top", "v1").with_provides("libjpeg"),
+ 'middle.py': GenConanfile("middle", "v1").with_require_plain("top/v1", private=True),
+ 'app.py': GenConanfile().with_provides("libjpeg")
+ .with_require_plain("middle/v1", private=True)})
+ t.run("export top.py")
+ t.run("export middle.py middle/v1@")
+ t.run("install app.py app/version@ --build=missing")
diff --git a/conans/test/unittests/util/misc/__init__.py b/conans/test/unittests/util/misc/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/conans/test/unittests/util/misc/test_make_tuple.py b/conans/test/unittests/util/misc/test_make_tuple.py
new file mode 100644
index 00000000000..2d29c5c9112
--- /dev/null
+++ b/conans/test/unittests/util/misc/test_make_tuple.py
@@ -0,0 +1,22 @@
+import unittest
+
+from conans.util.misc import make_tuple
+
+
+class MakeTupleTestCase(unittest.TestCase):
+ def test_corner_cases(self):
+ self.assertIsNone(make_tuple(None))
+ self.assertTupleEqual(make_tuple("one"), ("one",))
+
+ def test_iterable(self):
+ self.assertTupleEqual(make_tuple([1, 2, 3]), (1, 2, 3))
+ self.assertTupleEqual(make_tuple(("one", "two")), ("one", "two"))
+ self.assertTupleEqual(make_tuple({1: "a", 2: "b", 3: "c"}.keys()), (1, 2, 3))
+ self.assertTupleEqual(make_tuple({1: "a", 2: "b", 3: "c"}.values()), ("a", "b", "c"))
+
+ def test_generator(self):
+ def items():
+ for i in [1, 2, 3]:
+ yield i
+
+ self.assertTupleEqual(make_tuple(items()), (1, 2, 3))
diff --git a/conans/test/utils/genconanfile.py b/conans/test/utils/genconanfile.py
index b7315fc3693..00aaaa29c68 100644
--- a/conans/test/utils/genconanfile.py
+++ b/conans/test/utils/genconanfile.py
@@ -20,6 +20,7 @@ def __init__(self, name=None, version=None):
self._options = {}
self._generators = []
self._default_options = {}
+ self._provides = []
self._package_files = {}
self._package_files_env = {}
self._package_files_link = {}
@@ -28,6 +29,7 @@ def __init__(self, name=None, version=None):
self._requires = []
self._requirements = []
self._build_requires = []
+ self._build_requirements = []
self._revision_mode = None
self._package_info = {}
self._package_id_lines = []
@@ -41,6 +43,10 @@ def with_version(self, version):
self._version = version
return self
+ def with_provides(self, provides):
+ self._provides.append(provides)
+ return self
+
def with_revision_mode(self, revision_mode):
self._revision_mode = revision_mode
return self
@@ -74,6 +80,13 @@ def with_build_require_plain(self, ref_str):
self._build_requires.append(ref_str)
return self
+ def with_build_requirement(self, ref, force_host_context=False):
+ return self.with_build_requirement_plain(ref.full_str(),force_host_context=force_host_context)
+
+ def with_build_requirement_plain(self, ref_str, force_host_context=False):
+ self._build_requirements.append((ref_str, force_host_context))
+ return self
+
def with_import(self, i):
if i not in self._imports:
self._imports.append(i)
@@ -137,6 +150,13 @@ def _version_line(self):
return ""
return "version = '{}'".format(self._version)
+ @property
+ def _provides_line(self):
+ if not self._provides:
+ return ""
+ line = ", ".join('"{}"'.format(provide) for provide in self._provides)
+ return "provides = {}".format(line)
+
@property
def _scm_line(self):
if not self._scm:
@@ -181,6 +201,17 @@ def _default_options_line(self):
tmp = "default_options = {%s}" % line
return tmp
+ @property
+ def _build_requirements_method(self):
+ if not self._build_requirements:
+ return ""
+
+ lines = []
+ for ref, force_host_context in self._build_requirements:
+ force_host = ", force_host_context=True" if force_host_context else ""
+ lines.append(' self.build_requires("{}"{})'.format(ref, force_host))
+ return "def build_requirements(self):\n{}\n".format("\n".join(lines))
+
@property
def _build_requires_line(self):
if not self._build_requires:
@@ -304,14 +335,18 @@ def __repr__(self):
ret.append(" {}".format(self._name_line))
if self._version_line:
ret.append(" {}".format(self._version_line))
+ if self._provides_line:
+ ret.append(" {}".format(self._provides_line))
if self._generators_line:
ret.append(" {}".format(self._generators_line))
if self._requires_line:
ret.append(" {}".format(self._requires_line))
- if self._requirements_method:
- ret.append(" {}".format(self._requirements_method))
if self._build_requires_line:
ret.append(" {}".format(self._build_requires_line))
+ if self._requirements_method:
+ ret.append(" {}".format(self._requirements_method))
+ if self._build_requirements_method:
+ ret.append(" {}".format(self._build_requirements_method))
if self._scm:
ret.append(" {}".format(self._scm_line))
if self._revision_mode_line:
diff --git a/conans/util/misc.py b/conans/util/misc.py
new file mode 100644
index 00000000000..1b3b3bc54b8
--- /dev/null
+++ b/conans/util/misc.py
@@ -0,0 +1,18 @@
+import six
+
+
+def make_tuple(value):
+ """ Converts the value into a tuple if the value is an iterable with the following exceptions:
+ * a `None` value will return `None`
+ * a string value will return a tuple with the string as the unique member
+ """
+ if value is None:
+ return None
+
+ if isinstance(value, six.string_types):
+ return value,
+
+ if isinstance(value, six.moves.collections_abc.Iterable):
+ return tuple(value)
+ else:
+ return value,
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
}
|
conan-io__conan-7303@aafe4de
|
conan-io/conan
|
Python
| 7,303
|
implementing __contains__ for option in self.info.options
|
Changelog: Fix: Implement missing ``__contains__`` method, so checking ``if "myoption" in self.info.options`` is possible in ``package_id()``.
Docs: Omit
Fix https://github.com/conan-io/conan/issues/7299
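
A minimal sketch of the check this change makes possible (the recipe below is illustrative only; `myoption` mirrors the changelog wording and is not a real option of any package):

```python
from conans import ConanFile

class Pkg(ConanFile):
    name = "pkg"
    version = "0.1"
    options = {"shared": [True, False]}
    default_options = {"shared": False}

    def package_id(self):
        # With __contains__ implemented, both membership tests terminate;
        # before this fix, testing a name that is not a defined option
        # could hang the lookup.
        if "shared" in self.info.options:
            self.info.shared_library_package_id()
        if "myoption" not in self.info.options:
            self.output.info("myoption is not defined here")
```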
|
2020-07-02T22:44:15Z
|
[bug] Infinite recursion when checking for an option in the package_id method
Thank you for all the great work in conan! I was working on a base class that would invoke `shared_library_package_id` conditionally if "shared" was in the options. (A try-catch would work for this case, but I was working through some ideas.) I found that the `in` lookup was causing infinite stack recursion.
```python
def package_id(self):
if "not_an_option" in self.info.options:
self.info.shared_library_package_id()
```
### Environment Details (include every applicable attribute)
* Operating System+version: Ubuntu 16.04.5
* Conan version: Conan version 1.27.0
* Python version: Python 3.5.2
Also tested in the docker:
* Python version: Python 3.7.5
* Conan versions: 1.0.0, 1.10.0, 1.27.0
* Docker container: `conanio/gcc5`
### Steps to reproduce (Include if Applicable)
1. Create a new conanfile:
```bash
conan new test/1.0@user/testing
```
1. Append a `package_id` method which looks for an option that doesn't exist:
```bash
printf " def package_id(self):\n 'not_an_option' in self.info.options" >> conanfile.py
```
1. Attempt to create a package:
```bash
conan create . user/testing
```
**Expected:** Builds
**Actual:** 100% CPU and eventually 100% RAM
```
top - 22:12:09 up 6:26, 0 users, load average: 0.89, 0.55, 0.41
Tasks: 4 total, 2 running, 2 sleeping, 0 stopped, 0 zombie
%Cpu(s): 13.2 us, 0.8 sy, 0.0 ni, 86.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 16281688 total, 1775688 free, 10938668 used, 3567332 buff/cache
KiB Swap: 33248252 total, 30537368 free, 2710884 used. 4498560 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
630 conan 20 0 1727356 1.554g 9544 R 99.7 10.0 0:11.36 conan
1 conan 20 0 18272 3408 2896 S 0.0 0.0 0:00.31 bash
313 conan 20 0 18272 3372 2864 S 0.0 0.0 0:00.28 bash
629 conan 20 0 36640 3156 2732 R 0.0 0.0 0:00.00 top
```
### Logs (Executed commands with output) (Include/Attach if Applicable)
When running in pdb, I added a breakpoint at `conans/model/options.py:216` (line number for version 1.27.0), which led to an infinite loop there:
```
(Pdb) n
--Return--
> /home/rdesmond/.virtualenvs/conan-1.26.0/lib/python3.5/site-packages/conans/model/options.py(216)__getitem__()-><conans.model...x7f5c503ff5c0>
-> return self._reqs_options.setdefault(item, PackageOptionValues())
(Pdb) n
--Call--
> /home/rdesmond/.virtualenvs/conan-1.26.0/lib/python3.5/site-packages/conans/model/options.py(215)__getitem__()
-> def __getitem__(self, item):
(Pdb) n
> /home/rdesmond/.virtualenvs/conan-1.26.0/lib/python3.5/site-packages/conans/model/options.py(216)__getitem__()
-> return self._reqs_options.setdefault(item, PackageOptionValues())
(Pdb) n
--Return--
> /home/rdesmond/.virtualenvs/conan-1.26.0/lib/python3.5/site-packages/conans/model/options.py(216)__getitem__()-><conans.model...x7f5c503ffda0>
-> return self._reqs_options.setdefault(item, PackageOptionValues())
```
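
For reference, a tiny Conan-independent sketch of the mechanism behind this trace: when a class defines `__getitem__` but neither `__contains__` nor `__iter__`, Python's `in` operator falls back to probing integer indices 0, 1, 2, ... and only stops on `IndexError`, so a `__getitem__` that always succeeds never lets the membership test finish.

```python
class AlwaysSucceeds(object):
    # Stand-in for the options container above: __getitem__ never raises
    # IndexError, it just keeps handing back fresh objects, much like the
    # setdefault(...) call in the trace.
    def __getitem__(self, item):
        return object()

# "not_an_option" in AlwaysSucceeds()  # would probe 0, 1, 2, ... forever
```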
|
Thanks for the detailed report! I have provided the necessary ``__contains__`` method so this check is possible: https://github.com/conan-io/conan/pull/7303
In any case, I would also suggest reconsidering the check. It sounds unusual to base the logic on the existence of an option rather than on its value. Which option are you checking? Is it ``shared``? Maybe something that could be built into ``shared_library_package_id()``?
Whoa, that was really fast, thank you! I mirrored it after the check in `shared_library_package_id` itself (why doesn't its check `if "shared" not in dep_options` break too?), but with the goal of a base class having similar logic. Specifically, this was a base class for boost, which has some header-only components and some with a compiled library.
Boost has no requirements, so this probably shouldn't be there anyhow, but the logic was something along the lines of "if it's header-only the requirements never matter; if it has a shared option they should follow the `shared_library_package_id`'s logic."
Either way, I was confused when my computer nearly fell over from standard python, so the fix is quite welcome!
```python
class ConanInfo(object):
...
def shared_library_package_id(self):
if self.full_options.shared:
for dep_name in self.requires.pkg_names:
dep_options = self.full_options[dep_name]
if "shared" not in dep_options or not self.full_options[dep_name].shared:
self.requires[dep_name].package_revision_mode()
...
```
Good point. This is because internally, it is checking against ``self.info.full_options``, and the attribute check there doesn't raise if the attribute (in this case the ``shared`` option) is not defined. The same happens if you directly test ``self.info.options.whatever``: it will not fail if it does not exist. I have added those cases to the tests in the PR too.
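
As an aside, a rough sketch of the base-class idea described earlier in this thread. The class name and the ``header_only`` option are invented here; only ``self.info.header_only()`` and ``self.info.shared_library_package_id()`` are existing Conan calls, and the real boost recipe is not shown in this thread.

```python
from conans import ConanFile

class BoostLikeBase(ConanFile):
    # Hypothetical options, for illustration only.
    options = {"header_only": [True, False], "shared": [True, False]}
    default_options = {"header_only": False, "shared": False}

    def package_id(self):
        if self.options.header_only:
            # Header-only: the requirements never matter for the package id.
            self.info.header_only()
        elif "shared" in self.info.options:
            # Compiled: follow shared_library_package_id's logic.
            self.info.shared_library_package_id()
```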
|
[
{
"body": "Thank you for all the great work in conan! I was working on a base class that would invoke `shared_library_package_id` conditionally if \"shared\" was in the options. (A try-catch would work for this case, but I was working through some ideas.) I found that the `in` lookup was causing infinite stack recursion.\r\n\r\n```python\r\ndef package_id(self):\r\n if \"not_an_option\" in self.info.options:\r\n self.info.shared_library_package_id()\r\n```\r\n\r\n### Environment Details (include every applicable attribute)\r\n * Operating System+version: Ubuntu 16.04.5\r\n * Conan version: Conan version 1.27.0\r\n * Python version: Python 3.5.2\r\n\r\nAlso tested in the docker:\r\n * Python version: Python 3.7.5\r\n * Conan versions: 1.0.0, 1.10.0, 1.27.0\r\n * Docker container: `conanio/gcc5`\r\n\r\n### Steps to reproduce (Include if Applicable)\r\n\r\n1. Create a new conanfile:\r\n ```bash\r\n conan new test/1.0@user/testing\r\n ```\r\n1. Append a `package_id` method which looks for an option that doesn't exist:\r\n ```bash\r\n printf \" def package_id(self):\\n 'not_an_option' in self.info.options\" >> conanfile.py\r\n ```\r\n1. Attempt to create a package:\r\n ```bash\r\n conan create . user/testing\r\n ```\r\n\r\n**Expected:** Builds \r\n**Actual:** 100% CPU and eventually 100% RAM\r\n\r\n```\r\ntop - 22:12:09 up 6:26, 0 users, load average: 0.89, 0.55, 0.41\r\nTasks: 4 total, 2 running, 2 sleeping, 0 stopped, 0 zombie\r\n%Cpu(s): 13.2 us, 0.8 sy, 0.0 ni, 86.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\r\nKiB Mem : 16281688 total, 1775688 free, 10938668 used, 3567332 buff/cache\r\nKiB Swap: 33248252 total, 30537368 free, 2710884 used. 4498560 avail Mem \r\n\r\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND \r\n 630 conan 20 0 1727356 1.554g 9544 R 99.7 10.0 0:11.36 conan \r\n 1 conan 20 0 18272 3408 2896 S 0.0 0.0 0:00.31 bash \r\n 313 conan 20 0 18272 3372 2864 S 0.0 0.0 0:00.28 bash \r\n 629 conan 20 0 36640 3156 2732 R 0.0 0.0 0:00.00 top \r\n```\r\n\r\n\r\n### Logs (Executed commands with output) (Include/Attach if Applicable)\r\n\r\nWhen running in pdb, I added a breakpoint at `conans/model/options.py:216` (line number for version 1.27.0), which led to an infinite loop there:\r\n\r\n```\r\n(Pdb) n\r\n--Return--\r\n> /home/rdesmond/.virtualenvs/conan-1.26.0/lib/python3.5/site-packages/conans/model/options.py(216)__getitem__()-><conans.model...x7f5c503ff5c0>\r\n-> return self._reqs_options.setdefault(item, PackageOptionValues())\r\n(Pdb) n\r\n--Call--\r\n> /home/rdesmond/.virtualenvs/conan-1.26.0/lib/python3.5/site-packages/conans/model/options.py(215)__getitem__()\r\n-> def __getitem__(self, item):\r\n(Pdb) n\r\n> /home/rdesmond/.virtualenvs/conan-1.26.0/lib/python3.5/site-packages/conans/model/options.py(216)__getitem__()\r\n-> return self._reqs_options.setdefault(item, PackageOptionValues())\r\n(Pdb) n\r\n--Return--\r\n> /home/rdesmond/.virtualenvs/conan-1.26.0/lib/python3.5/site-packages/conans/model/options.py(216)__getitem__()-><conans.model...x7f5c503ffda0>\r\n-> return self._reqs_options.setdefault(item, PackageOptionValues())\r\n```",
"number": 7299,
"title": "[bug] Infinite recursion when checking for an option in the package_id method"
}
] |
1ed9b250706c24b222aafcb598fbffc7355f691c
|
{
"head_commit": "aafe4decd4e6d317bed08e9e062cd653918b1918",
"head_commit_message": "fix tests",
"patch_to_review": "diff --git a/conans/model/info.py b/conans/model/info.py\nindex 59bfaf5b186..e1e0b6a2a6a 100644\n--- a/conans/model/info.py\n+++ b/conans/model/info.py\n@@ -617,10 +617,10 @@ def default_std_non_matching(self):\n self.settings.compiler.cppstd = self.full_settings.compiler.cppstd\n \n def shared_library_package_id(self):\n- if self.full_options.shared:\n+ if \"shared\" in self.full_options and self.full_options.shared:\n for dep_name in self.requires.pkg_names:\n dep_options = self.full_options[dep_name]\n- if \"shared\" not in dep_options or not self.full_options[dep_name].shared:\n+ if \"shared\" not in dep_options or not dep_options.shared:\n self.requires[dep_name].package_revision_mode()\n \n def parent_compatible(self, *_, **kwargs):\ndiff --git a/conans/model/options.py b/conans/model/options.py\nindex 403e7ecccca..d58702a6736 100644\n--- a/conans/model/options.py\n+++ b/conans/model/options.py\n@@ -22,8 +22,8 @@ def option_not_exist_msg(option_name, existing_options):\n \"\"\" Someone is referencing an option that is not available in the current package\n options\n \"\"\"\n- result = [\"'options.%s' doesn't exist\" % option_name]\n- result.append(\"Possible options are %s\" % existing_options or \"none\")\n+ result = [\"option '%s' doesn't exist\" % option_name,\n+ \"Possible options are %s\" % existing_options or \"none\"]\n return \"\\n\".join(result)\n \n \n@@ -59,6 +59,7 @@ class PackageOptionValues(object):\n def __init__(self):\n self._dict = {} # {option_name: PackageOptionValue}\n self._modified = {}\n+ self._freeze = False\n \n def __bool__(self):\n return bool(self._dict)\n@@ -71,7 +72,7 @@ def __nonzero__(self):\n \n def __getattr__(self, attr):\n if attr not in self._dict:\n- return None\n+ raise ConanException(option_not_exist_msg(attr, list(self._dict.keys())))\n return self._dict[attr]\n \n def __delattr__(self, attr):\n@@ -212,6 +213,9 @@ def descope_options(self, name):\n def clear_unscoped_options(self):\n self._package_values.clear()\n \n+ def __contains__(self, item):\n+ return item in self._package_values\n+\n def __getitem__(self, item):\n return self._reqs_options.setdefault(item, PackageOptionValues())\n \n@@ -283,16 +287,14 @@ def loads(text):\n \n @property\n def sha(self):\n- result = []\n- result.append(self._package_values.sha)\n+ result = [self._package_values.sha]\n for key in sorted(list(self._reqs_options.keys())):\n result.append(self._reqs_options[key].sha)\n return sha1('\\n'.join(result).encode())\n \n def serialize(self):\n- ret = {}\n- ret[\"options\"] = self._package_values.serialize()\n- ret[\"req_options\"] = {}\n+ ret = {\"options\": self._package_values.serialize(),\n+ \"req_options\": {}}\n for name, values in self._reqs_options.items():\n ret[\"req_options\"][name] = values.serialize()\n return ret\ndiff --git a/conans/test/functional/command/info/info_options_test.py b/conans/test/functional/command/info/info_options_test.py\nindex 4d6c23d7d2f..0d3c49dc303 100644\n--- a/conans/test/functional/command/info/info_options_test.py\n+++ b/conans/test/functional/command/info/info_options_test.py\n@@ -6,8 +6,7 @@\n class InfoOptionsTest(unittest.TestCase):\n \n def info_options_test(self):\n- \"\"\" packages with dash\n- \"\"\"\n+ # packages with dash\n client = TestClient()\n client.run('new My-Package/1.3@myuser/testing -t')\n # assert they are correct at least\n@@ -23,9 +22,9 @@ def info_options_test(self):\n \n # errors\n client.run(\"info . 
-o shared2=True\", assert_error=True)\n- self.assertIn(\"'options.shared2' doesn't exist\", client.out)\n+ self.assertIn(\"option 'shared2' doesn't exist\", client.out)\n client.run(\"info . -o My-Package:shared2=True\", assert_error=True)\n- self.assertIn(\"'options.shared2' doesn't exist\", client.out)\n+ self.assertIn(\"option 'shared2' doesn't exist\", client.out)\n \n def info_wrong_options_test(self):\n # https://github.com/conan-io/conan/issues/2202\ndiff --git a/conans/test/functional/options/options_test.py b/conans/test/functional/options/options_test.py\nindex 1716f2878db..4b163262376 100644\n--- a/conans/test/functional/options/options_test.py\n+++ b/conans/test/functional/options/options_test.py\n@@ -15,11 +15,7 @@ class Pkg(ConanFile):\n def configure(self):\n self.output.info(\"BUILD SHARED: %s\" % self.options.shared)\n \"\"\"\n- test = \"\"\"from conans import ConanFile\n-class Pkg(ConanFile):\n- def test(self):\n- pass\n-\"\"\"\n+ test = GenConanfile().with_test(\"pass\")\n client.save({\"conanfile.py\": conanfile})\n client.run(\"create . Pkg/0.1@user/testing -o *:shared=1\")\n self.assertIn(\"Pkg/0.1@user/testing: BUILD SHARED: 1\", client.out)\n@@ -33,37 +29,29 @@ def test(self):\n client.run(\"create . Pkg/0.1@user/testing -o Pkg:shared=2\")\n self.assertIn(\"Pkg/0.1@user/testing: BUILD SHARED: 2\", client.out)\n client.run(\"create . Pkg/0.1@user/testing -o shared=1\", assert_error=True)\n- self.assertIn(\"'options.shared' doesn't exist\", client.out)\n+ self.assertIn(\"option 'shared' doesn't exist\", client.out)\n \n- conanfile = \"\"\"from conans import ConanFile\n-class Pkg(ConanFile):\n- pass\n-\"\"\"\n- client.save({\"conanfile.py\": conanfile}, clean_first=True)\n+ client.save({\"conanfile.py\": GenConanfile()}, clean_first=True)\n client.run(\"create . Pkg/0.1@user/testing -o *:shared=True\")\n self.assertIn(\"Pkg/0.1@user/testing: Calling build()\", client.out)\n client.run(\"create . Pkg/0.1@user/testing -o shared=False\", assert_error=True)\n- self.assertIn(\"'options.shared' doesn't exist\", client.out)\n+ self.assertIn(\"option 'shared' doesn't exist\", client.out)\n # With test_package\n client.save({\"conanfile.py\": conanfile,\n \"test_package/conanfile.py\": test})\n- client.run(\"create . Pkg/0.1@user/testing -o *:shared=True\")\n- self.assertIn(\"Pkg/0.1@user/testing: Calling build()\", client.out)\n- self.assertIn(\"Pkg/0.1@user/testing (test package): Calling build()\", client.out)\n+ client.run(\"create . Pkg/0.1@user/testing -o *:shared=True\", assert_error=True)\n+ self.assertIn(\"ERROR: Pkg/0.1@user/testing: 'True' is not a valid 'options.shared' value\",\n+ client.out)\n \n def general_scope_priorities_test(self):\n client = TestClient()\n- conanfile = \"\"\"from conans import ConanFile\n-class Pkg(ConanFile):\n- options = {\"shared\": [\"1\", \"2\", \"3\"]}\n- def configure(self):\n- self.output.info(\"BUILD SHARED: %s\" % self.options.shared)\n-\"\"\"\n- test = \"\"\"from conans import ConanFile\n-class Pkg(ConanFile):\n- def test(self):\n- pass\n-\"\"\"\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ class Pkg(ConanFile):\n+ options = {\"shared\": [\"1\", \"2\", \"3\"]}\n+ def configure(self):\n+ self.output.info(\"BUILD SHARED: %s\" % self.options.shared)\n+ \"\"\")\n client.save({\"conanfile.py\": conanfile})\n # Consumer has priority\n client.run(\"create . 
Pkg/0.1@user/testing -o *:shared=1 -o shared=2\")\n@@ -73,7 +61,7 @@ def test(self):\n self.assertIn(\"Pkg/0.1@user/testing: BUILD SHARED: 3\", client.out)\n # With test_package\n client.save({\"conanfile.py\": conanfile,\n- \"test_package/conanfile.py\": test})\n+ \"test_package/conanfile.py\": GenConanfile().with_test(\"pass\")})\n # Sorted (longest, alphabetical) patterns, have priority\n client.run(\"create . Pkg/0.1@user/testing -o *:shared=1 -o Pkg:shared=2\")\n self.assertIn(\"Pkg/0.1@user/testing: BUILD SHARED: 2\", client.out)\n@@ -205,32 +193,34 @@ def configure(self):\n def general_scope_options_test(self):\n # https://github.com/conan-io/conan/issues/2538\n client = TestClient()\n- conanfile_libA = \"\"\"from conans import ConanFile\n-class LibA(ConanFile):\n- options = {\"shared\": [True, False]}\n+ conanfile_liba = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ class LibA(ConanFile):\n+ options = {\"shared\": [True, False]}\n \n- def configure(self):\n- self.output.info(\"shared=%s\" % self.options.shared)\n- \"\"\"\n- client.save({\"conanfile.py\": conanfile_libA})\n+ def configure(self):\n+ self.output.info(\"shared=%s\" % self.options.shared)\n+ \"\"\")\n+ client.save({\"conanfile.py\": conanfile_liba})\n client.run(\"create . libA/0.1@danimtb/testing -o *:shared=True\")\n self.assertIn(\"libA/0.1@danimtb/testing: shared=True\", client.out)\n \n- conanfile_libB = \"\"\"from conans import ConanFile\n-class LibB(ConanFile):\n- options = {\"shared\": [True, False]}\n- requires = \"libA/0.1@danimtb/testing\"\n+ conanfile_libb = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ class LibB(ConanFile):\n+ options = {\"shared\": [True, False]}\n+ requires = \"libA/0.1@danimtb/testing\"\n \n- def configure(self):\n- self.options[\"*\"].shared = self.options.shared\n- self.output.info(\"shared=%s\" % self.options.shared)\n- \"\"\"\n+ def configure(self):\n+ self.options[\"*\"].shared = self.options.shared\n+ self.output.info(\"shared=%s\" % self.options.shared)\n+ \"\"\")\n \n for without_configure_line in [True, False]:\n if without_configure_line:\n- conanfile = conanfile_libB.replace(\n- \" self.options[\\\"*\\\"].shared = self.options.shared\", \"\")\n-\n+ conanfile = conanfile_libb.replace(\"self.options[\", \"#\")\n+ else:\n+ conanfile = conanfile_libb\n client.save({\"conanfile.py\": conanfile})\n \n # Test info\n@@ -336,3 +326,16 @@ def package_id(self):\n client.run(\"create . pkg/0.1@user/testing %s\" % options)\n self.assertIn(\"liba/0.1@user/testing:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Cache\",\n client.out)\n+\n+ def missing_shared_option_package_id_test(self):\n+ client = TestClient()\n+\n+ consumer = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ class Pkg(ConanFile):\n+ def package_id(self):\n+ self.info.shared_library_package_id()\n+ \"\"\")\n+ client.save({\"conanfile.py\": consumer})\n+ client.run(\"create . 
pkg/0.1@user/testing\")\n+ self.assertIn(\"pkg/0.1@user/testing: Created package \", client.out)\ndiff --git a/conans/test/functional/package_id/package_id_test.py b/conans/test/functional/package_id/package_id_test.py\nindex 94580f7c188..b3f9d162837 100644\n--- a/conans/test/functional/package_id/package_id_test.py\n+++ b/conans/test/functional/package_id/package_id_test.py\n@@ -1,3 +1,4 @@\n+import textwrap\n import unittest\n \n from conans.test.utils.tools import NO_SETTINGS_PACKAGE_ID, TestClient, TestServer\n@@ -78,3 +79,40 @@ def package(self):\n client.run(\"install test/0.1@danimtb/testing\")\n client.run(\"search test/0.1@danimtb/testing\")\n self.assertIn(\"compiler.version: kk=kk\", client.out)\n+\n+ def option_in_test(self):\n+ # https://github.com/conan-io/conan/issues/7299\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+\n+ class TestConan(ConanFile):\n+ options = {\"fpic\": [True, False]}\n+ default_options = {\"fpic\": True}\n+ def package_id(self):\n+ if \"fpic\" in self.options:\n+ self.output.info(\"fpic is an option!!!\")\n+ if \"fpic\" in self.info.options:\n+ self.output.info(\"fpic is an info.option!!!\")\n+ if \"other\" not in self.options:\n+ self.output.info(\"other is not an option!!!\")\n+ if \"other\" not in self.info.options:\n+ self.output.info(\"other is not an info.option!!!\")\n+ try:\n+ self.options.whatever\n+ except Exception as e:\n+ self.output.error(\"OPTIONS: %s\" % e)\n+ try:\n+ self.info.options.whatever\n+ except Exception as e:\n+ self.output.error(\"INFO: %s\" % e)\n+\n+ \"\"\")\n+ client = TestClient()\n+ client.save({\"conanfile.py\": conanfile})\n+ client.run(\"create . Pkg/0.1@user/testing\")\n+ self.assertIn(\"fpic is an option!!!\", client.out)\n+ self.assertIn(\"fpic is an info.option!!!\", client.out)\n+ self.assertIn(\"other is not an option!!!\", client.out)\n+ self.assertIn(\"other is not an info.option!!!\", client.out)\n+ self.assertIn(\"ERROR: OPTIONS: option 'whatever' doesn't exist\", client.out)\n+ self.assertIn(\"ERROR: INFO: option 'whatever' doesn't exist\", client.out)\ndiff --git a/conans/test/functional/settings/remove_subsetting_test.py b/conans/test/functional/settings/remove_subsetting_test.py\nindex d89f08d0cec..40bbe38c142 100644\n--- a/conans/test/functional/settings/remove_subsetting_test.py\n+++ b/conans/test/functional/settings/remove_subsetting_test.py\n@@ -27,7 +27,7 @@ def build(self):\n client.current_folder = build_folder\n client.run(\"install ..\")\n client.run(\"build ..\", assert_error=True)\n- self.assertIn(\"ConanException: 'options.opt2' doesn't exist\", client.out)\n+ self.assertIn(\"ConanException: option 'opt2' doesn't exist\", client.out)\n self.assertIn(\"Possible options are ['opt1']\", client.out)\n \n def remove_setting_test(self):\n"
}
|
[
{
"diff_hunk": "@@ -33,37 +29,29 @@ def test(self):\n client.run(\"create . Pkg/0.1@user/testing -o Pkg:shared=2\")\n self.assertIn(\"Pkg/0.1@user/testing: BUILD SHARED: 2\", client.out)\n client.run(\"create . Pkg/0.1@user/testing -o shared=1\", assert_error=True)\n- self.assertIn(\"'options.shared' doesn't exist\", client.out)\n+ self.assertIn(\"option 'shared' doesn't exist\", client.out)\n \n- conanfile = \"\"\"from conans import ConanFile\n-class Pkg(ConanFile):\n- pass\n-\"\"\"\n- client.save({\"conanfile.py\": conanfile}, clean_first=True)\n+ client.save({\"conanfile.py\": GenConanfile()}, clean_first=True)\n client.run(\"create . Pkg/0.1@user/testing -o *:shared=True\")\n self.assertIn(\"Pkg/0.1@user/testing: Calling build()\", client.out)\n client.run(\"create . Pkg/0.1@user/testing -o shared=False\", assert_error=True)\n- self.assertIn(\"'options.shared' doesn't exist\", client.out)\n+ self.assertIn(\"option 'shared' doesn't exist\", client.out)\n # With test_package\n client.save({\"conanfile.py\": conanfile,\n \"test_package/conanfile.py\": test})\n- client.run(\"create . Pkg/0.1@user/testing -o *:shared=True\")\n- self.assertIn(\"Pkg/0.1@user/testing: Calling build()\", client.out)\n- self.assertIn(\"Pkg/0.1@user/testing (test package): Calling build()\", client.out)\n+ client.run(\"create . Pkg/0.1@user/testing -o *:shared=True\", assert_error=True)\n+ self.assertIn(\"ERROR: Pkg/0.1@user/testing: 'True' is not a valid 'options.shared' value\",\n+ client.out)",
"line": null,
"original_line": 44,
"original_start_line": null,
"path": "conans/test/functional/options/options_test.py",
"start_line": null,
"text": "@user2:\nThis test is checking that `shared` in `conan create . Pkg/0.1@user1/testing -o shared=False` refers to a specific package and raises because the option doesn't exist; but with `conan create . Pkg/0.1@user1/testing -o *:shared=True` it doesn't raise because it is not specific to any package.\r\n\r\n\n\n@author:\nIt tries to assign a ``True`` value to an option that only has ``[1, 2]`` as possible values. I find it reasonable to fail, isn't it?\n\n@user2:\noh! I see, you have changed the `conanfile` itself. Before it was:\r\n\r\n```\r\nconanfile = \"\"\"from conans import ConanFile\r\nclass Pkg(ConanFile):\r\n pass\r\n\"\"\"\r\n```\r\n\r\nnow it is a different conanfile. I would keep the same behavior in this test, it is just about writing a `GenConanfile()` \n\n@author:\nNevermind, I think there is an error, let me check it.\n\n@author:\nYeah, an annoying bug because of using the same test() to test 2 different things. I have splitted the test in 2."
}
] |
9b4b513d562fc3e437c70a6f589e87f8592885c7
|
diff --git a/conans/model/info.py b/conans/model/info.py
index 59bfaf5b186..e1e0b6a2a6a 100644
--- a/conans/model/info.py
+++ b/conans/model/info.py
@@ -617,10 +617,10 @@ def default_std_non_matching(self):
self.settings.compiler.cppstd = self.full_settings.compiler.cppstd
def shared_library_package_id(self):
- if self.full_options.shared:
+ if "shared" in self.full_options and self.full_options.shared:
for dep_name in self.requires.pkg_names:
dep_options = self.full_options[dep_name]
- if "shared" not in dep_options or not self.full_options[dep_name].shared:
+ if "shared" not in dep_options or not dep_options.shared:
self.requires[dep_name].package_revision_mode()
def parent_compatible(self, *_, **kwargs):
diff --git a/conans/model/options.py b/conans/model/options.py
index 403e7ecccca..d58702a6736 100644
--- a/conans/model/options.py
+++ b/conans/model/options.py
@@ -22,8 +22,8 @@ def option_not_exist_msg(option_name, existing_options):
""" Someone is referencing an option that is not available in the current package
options
"""
- result = ["'options.%s' doesn't exist" % option_name]
- result.append("Possible options are %s" % existing_options or "none")
+ result = ["option '%s' doesn't exist" % option_name,
+ "Possible options are %s" % existing_options or "none"]
return "\n".join(result)
@@ -59,6 +59,7 @@ class PackageOptionValues(object):
def __init__(self):
self._dict = {} # {option_name: PackageOptionValue}
self._modified = {}
+ self._freeze = False
def __bool__(self):
return bool(self._dict)
@@ -71,7 +72,7 @@ def __nonzero__(self):
def __getattr__(self, attr):
if attr not in self._dict:
- return None
+ raise ConanException(option_not_exist_msg(attr, list(self._dict.keys())))
return self._dict[attr]
def __delattr__(self, attr):
@@ -212,6 +213,9 @@ def descope_options(self, name):
def clear_unscoped_options(self):
self._package_values.clear()
+ def __contains__(self, item):
+ return item in self._package_values
+
def __getitem__(self, item):
return self._reqs_options.setdefault(item, PackageOptionValues())
@@ -283,16 +287,14 @@ def loads(text):
@property
def sha(self):
- result = []
- result.append(self._package_values.sha)
+ result = [self._package_values.sha]
for key in sorted(list(self._reqs_options.keys())):
result.append(self._reqs_options[key].sha)
return sha1('\n'.join(result).encode())
def serialize(self):
- ret = {}
- ret["options"] = self._package_values.serialize()
- ret["req_options"] = {}
+ ret = {"options": self._package_values.serialize(),
+ "req_options": {}}
for name, values in self._reqs_options.items():
ret["req_options"][name] = values.serialize()
return ret
diff --git a/conans/test/functional/command/info/info_options_test.py b/conans/test/functional/command/info/info_options_test.py
index 4d6c23d7d2f..0d3c49dc303 100644
--- a/conans/test/functional/command/info/info_options_test.py
+++ b/conans/test/functional/command/info/info_options_test.py
@@ -6,8 +6,7 @@
class InfoOptionsTest(unittest.TestCase):
def info_options_test(self):
- """ packages with dash
- """
+ # packages with dash
client = TestClient()
client.run('new My-Package/1.3@myuser/testing -t')
# assert they are correct at least
@@ -23,9 +22,9 @@ def info_options_test(self):
# errors
client.run("info . -o shared2=True", assert_error=True)
- self.assertIn("'options.shared2' doesn't exist", client.out)
+ self.assertIn("option 'shared2' doesn't exist", client.out)
client.run("info . -o My-Package:shared2=True", assert_error=True)
- self.assertIn("'options.shared2' doesn't exist", client.out)
+ self.assertIn("option 'shared2' doesn't exist", client.out)
def info_wrong_options_test(self):
# https://github.com/conan-io/conan/issues/2202
diff --git a/conans/test/functional/options/options_test.py b/conans/test/functional/options/options_test.py
index 1716f2878db..8dd46db184f 100644
--- a/conans/test/functional/options/options_test.py
+++ b/conans/test/functional/options/options_test.py
@@ -9,17 +9,14 @@ class OptionsTest(unittest.TestCase):
def general_scope_options_test_package_test(self):
client = TestClient()
- conanfile = """from conans import ConanFile
-class Pkg(ConanFile):
- options = {"shared": ["1", "2"]}
- def configure(self):
- self.output.info("BUILD SHARED: %s" % self.options.shared)
-"""
- test = """from conans import ConanFile
-class Pkg(ConanFile):
- def test(self):
- pass
-"""
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+ class Pkg(ConanFile):
+ options = {"shared": ["1", "2"]}
+ def configure(self):
+ self.output.info("BUILD SHARED: %s" % self.options.shared)
+ """)
+ test = GenConanfile().with_test("pass")
client.save({"conanfile.py": conanfile})
client.run("create . Pkg/0.1@user/testing -o *:shared=1")
self.assertIn("Pkg/0.1@user/testing: BUILD SHARED: 1", client.out)
@@ -33,37 +30,32 @@ def test(self):
client.run("create . Pkg/0.1@user/testing -o Pkg:shared=2")
self.assertIn("Pkg/0.1@user/testing: BUILD SHARED: 2", client.out)
client.run("create . Pkg/0.1@user/testing -o shared=1", assert_error=True)
- self.assertIn("'options.shared' doesn't exist", client.out)
+ self.assertIn("option 'shared' doesn't exist", client.out)
- conanfile = """from conans import ConanFile
-class Pkg(ConanFile):
- pass
-"""
- client.save({"conanfile.py": conanfile}, clean_first=True)
+ def general_scope_options_test_package_notdefined_test(self):
+ client = TestClient()
+ conanfile = GenConanfile()
+ client.save({"conanfile.py": conanfile})
client.run("create . Pkg/0.1@user/testing -o *:shared=True")
self.assertIn("Pkg/0.1@user/testing: Calling build()", client.out)
client.run("create . Pkg/0.1@user/testing -o shared=False", assert_error=True)
- self.assertIn("'options.shared' doesn't exist", client.out)
+ self.assertIn("option 'shared' doesn't exist", client.out)
# With test_package
client.save({"conanfile.py": conanfile,
- "test_package/conanfile.py": test})
+ "test_package/conanfile.py": GenConanfile().with_test("pass")})
client.run("create . Pkg/0.1@user/testing -o *:shared=True")
self.assertIn("Pkg/0.1@user/testing: Calling build()", client.out)
self.assertIn("Pkg/0.1@user/testing (test package): Calling build()", client.out)
def general_scope_priorities_test(self):
client = TestClient()
- conanfile = """from conans import ConanFile
-class Pkg(ConanFile):
- options = {"shared": ["1", "2", "3"]}
- def configure(self):
- self.output.info("BUILD SHARED: %s" % self.options.shared)
-"""
- test = """from conans import ConanFile
-class Pkg(ConanFile):
- def test(self):
- pass
-"""
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+ class Pkg(ConanFile):
+ options = {"shared": ["1", "2", "3"]}
+ def configure(self):
+ self.output.info("BUILD SHARED: %s" % self.options.shared)
+ """)
client.save({"conanfile.py": conanfile})
# Consumer has priority
client.run("create . Pkg/0.1@user/testing -o *:shared=1 -o shared=2")
@@ -73,7 +65,7 @@ def test(self):
self.assertIn("Pkg/0.1@user/testing: BUILD SHARED: 3", client.out)
# With test_package
client.save({"conanfile.py": conanfile,
- "test_package/conanfile.py": test})
+ "test_package/conanfile.py": GenConanfile().with_test("pass")})
# Sorted (longest, alphabetical) patterns, have priority
client.run("create . Pkg/0.1@user/testing -o *:shared=1 -o Pkg:shared=2")
self.assertIn("Pkg/0.1@user/testing: BUILD SHARED: 2", client.out)
@@ -205,32 +197,34 @@ def configure(self):
def general_scope_options_test(self):
# https://github.com/conan-io/conan/issues/2538
client = TestClient()
- conanfile_libA = """from conans import ConanFile
-class LibA(ConanFile):
- options = {"shared": [True, False]}
+ conanfile_liba = textwrap.dedent("""
+ from conans import ConanFile
+ class LibA(ConanFile):
+ options = {"shared": [True, False]}
- def configure(self):
- self.output.info("shared=%s" % self.options.shared)
- """
- client.save({"conanfile.py": conanfile_libA})
+ def configure(self):
+ self.output.info("shared=%s" % self.options.shared)
+ """)
+ client.save({"conanfile.py": conanfile_liba})
client.run("create . libA/0.1@danimtb/testing -o *:shared=True")
self.assertIn("libA/0.1@danimtb/testing: shared=True", client.out)
- conanfile_libB = """from conans import ConanFile
-class LibB(ConanFile):
- options = {"shared": [True, False]}
- requires = "libA/0.1@danimtb/testing"
+ conanfile_libb = textwrap.dedent("""
+ from conans import ConanFile
+ class LibB(ConanFile):
+ options = {"shared": [True, False]}
+ requires = "libA/0.1@danimtb/testing"
- def configure(self):
- self.options["*"].shared = self.options.shared
- self.output.info("shared=%s" % self.options.shared)
- """
+ def configure(self):
+ self.options["*"].shared = self.options.shared
+ self.output.info("shared=%s" % self.options.shared)
+ """)
for without_configure_line in [True, False]:
if without_configure_line:
- conanfile = conanfile_libB.replace(
- " self.options[\"*\"].shared = self.options.shared", "")
-
+ conanfile = conanfile_libb.replace("self.options[", "#")
+ else:
+ conanfile = conanfile_libb
client.save({"conanfile.py": conanfile})
# Test info
@@ -336,3 +330,16 @@ def package_id(self):
client.run("create . pkg/0.1@user/testing %s" % options)
self.assertIn("liba/0.1@user/testing:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Cache",
client.out)
+
+ def missing_shared_option_package_id_test(self):
+ client = TestClient()
+
+ consumer = textwrap.dedent("""
+ from conans import ConanFile
+ class Pkg(ConanFile):
+ def package_id(self):
+ self.info.shared_library_package_id()
+ """)
+ client.save({"conanfile.py": consumer})
+ client.run("create . pkg/0.1@user/testing")
+ self.assertIn("pkg/0.1@user/testing: Created package ", client.out)
diff --git a/conans/test/functional/package_id/package_id_test.py b/conans/test/functional/package_id/package_id_test.py
index 94580f7c188..945f91d81e3 100644
--- a/conans/test/functional/package_id/package_id_test.py
+++ b/conans/test/functional/package_id/package_id_test.py
@@ -1,3 +1,4 @@
+import textwrap
import unittest
from conans.test.utils.tools import NO_SETTINGS_PACKAGE_ID, TestClient, TestServer
@@ -78,3 +79,40 @@ def package(self):
client.run("install test/0.1@danimtb/testing")
client.run("search test/0.1@danimtb/testing")
self.assertIn("compiler.version: kk=kk", client.out)
+
+ def option_in_test(self):
+ # https://github.com/conan-io/conan/issues/7299
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+
+ class TestConan(ConanFile):
+ options = {"fpic": [True, False]}
+ default_options = {"fpic": True}
+ def package_id(self):
+ if "fpic" in self.options:
+ self.output.info("fpic is an option!!!")
+ if "fpic" in self.info.options: # Not documented
+ self.output.info("fpic is an info.option!!!")
+ if "other" not in self.options:
+ self.output.info("other is not an option!!!")
+ if "other" not in self.info.options: # Not documented
+ self.output.info("other is not an info.option!!!")
+ try:
+ self.options.whatever
+ except Exception as e:
+ self.output.error("OPTIONS: %s" % e)
+ try:
+ self.info.options.whatever
+ except Exception as e:
+ self.output.error("INFO: %s" % e)
+
+ """)
+ client = TestClient()
+ client.save({"conanfile.py": conanfile})
+ client.run("create . Pkg/0.1@user/testing")
+ self.assertIn("fpic is an option!!!", client.out)
+ self.assertIn("fpic is an info.option!!!", client.out)
+ self.assertIn("other is not an option!!!", client.out)
+ self.assertIn("other is not an info.option!!!", client.out)
+ self.assertIn("ERROR: OPTIONS: option 'whatever' doesn't exist", client.out)
+ self.assertIn("ERROR: INFO: option 'whatever' doesn't exist", client.out)
diff --git a/conans/test/functional/settings/remove_subsetting_test.py b/conans/test/functional/settings/remove_subsetting_test.py
index d89f08d0cec..40bbe38c142 100644
--- a/conans/test/functional/settings/remove_subsetting_test.py
+++ b/conans/test/functional/settings/remove_subsetting_test.py
@@ -27,7 +27,7 @@ def build(self):
client.current_folder = build_folder
client.run("install ..")
client.run("build ..", assert_error=True)
- self.assertIn("ConanException: 'options.opt2' doesn't exist", client.out)
+ self.assertIn("ConanException: option 'opt2' doesn't exist", client.out)
self.assertIn("Possible options are ['opt1']", client.out)
def remove_setting_test(self):
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-7183@5431589
|
conan-io/conan
|
Python
| 7,183
|
Required Conan version
|
The general configuration `required_conan_version` validates the current Conan client version against the required version.
If the current version is outside the allowed range, an error is raised
before executing any command.
Changelog: Feature: Configuration for checking the required Conan client version.
Docs: https://github.com/conan-io/docs/pull/1740
closes #7136
/cc @memsharded
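
A minimal sketch of the check this PR introduces, mirroring the merged patch later in this record; it relies on the node-semver `semver` module Conan already depends on, and the literal versions are only illustrative:

```python
from semver import satisfies  # node-semver port, already a Conan dependency
from conans.errors import ConanException

client_version = "1.25.2"      # hypothetical running client
required_version = ">=1.26"    # value of general.required_conan_version in conan.conf

# Abort before executing any command if the running client is outside the range
if not satisfies(client_version, required_version):
    raise ConanException("The current Conan version ({}) does not match to the required"
                         " version ({}).".format(client_version, required_version))
```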
- [x] Refer to the issue that supports this Pull Request.
- [x] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.
- [x] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've followed the PEP8 style guides for Python code.
- [x] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.
<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>
|
2020-06-10T21:58:25Z
|
[feature] Min (max) conan version checks from conan.conf
Companies would like to introduce a check to guarantee that everyone is using the same version of conan. That should be an easy check with a config in conan.conf.
Possibilities?:
- A min-version check
- A max-version check
- An exact version check
- A range
The modes could be raise or warn.
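
For illustration, each of those possibilities maps onto a node-semver range expression; the version/range pairs below mirror the test cases exercised in the patch of this record (a standalone sketch, not Conan code):

```python
from semver import satisfies

print(satisfies("1.22.0", "1.22.0"))   # exact version check -> True
print(satisfies("2.1.0", "<3"))        # max-version check   -> True
print(satisfies("1.0.0", ">0.1.0"))    # min-version check   -> True
print(satisfies("1.26.0", "1.23.0"))   # mismatch            -> False
```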
|
[
{
"body": "Companies would like to introduce a check to guarantee that everyone is using the same version of conan. That should be an easy check with a config in conan.conf.\r\n\r\nPossibilities?:\r\n\r\n- A min-version check\r\n- A max-version check\r\n- An exact version check\r\n- A range\r\n\r\n\r\nThe modes could be raise or warn.",
"number": 7136,
"title": "[feature] Min (max) conan version checks from conan.conf"
}
] |
741f3d60a1e955c7294a9f2a00a0d670a0b0d4f0
|
{
"head_commit": "5431589a45312d30720395dc9d61285affaf1060",
"head_commit_message": "#7136 Remove env vars\n\nSigned-off-by: Uilian Ries <[email protected]>",
"patch_to_review": "diff --git a/conans/client/conan_api.py b/conans/client/conan_api.py\nindex b75a3eb299c..72c21aa74cb 100644\n--- a/conans/client/conan_api.py\n+++ b/conans/client/conan_api.py\n@@ -20,6 +20,7 @@\n from conans.client.cmd.uploader import CmdUpload\n from conans.client.cmd.user import user_set, users_clean, users_list, token_present\n from conans.client.conanfile.package import run_package_method\n+from conans.client.conf.required_version import check_required_conan_version\n from conans.client.graph.graph import RECIPE_EDITABLE\n from conans.client.graph.graph_binaries import GraphBinariesAnalyzer\n from conans.client.graph.graph_manager import GraphManager\n@@ -233,6 +234,7 @@ def __init__(self, cache_folder=None, output=None, user_io=None, http_requester=\n # Migration system\n migrator = ClientMigrator(self.cache_folder, Version(client_version), self.out)\n migrator.migrate()\n+ check_required_conan_version(self.cache_folder, self.out)\n if not get_env(CONAN_V2_MODE_ENVVAR, False):\n # FIXME Remove in Conan 2.0\n sys.path.append(os.path.join(self.cache_folder, \"python\"))\ndiff --git a/conans/client/conf/__init__.py b/conans/client/conf/__init__.py\nindex 97dccd54f0a..9bbd6e40c24 100644\n--- a/conans/client/conf/__init__.py\n+++ b/conans/client/conf/__init__.py\n@@ -177,6 +177,7 @@ def get_default_settings_yml(force_v1=False):\n {% endif %}\n \n # config_install_interval = 1h\n+ # required_conan_version = >=1.26\n \n [storage]\n # This is the default path, but you can write your own. It must be an absolute path or a\n@@ -709,3 +710,10 @@ def config_install_interval(self):\n except Exception as e:\n raise ConanException(\"Incorrect definition of general.config_install_interval: %s\"\n % interval)\n+\n+ @property\n+ def required_conan_version(self):\n+ try:\n+ return self.get_item(\"general.required_conan_version\")\n+ except ConanException:\n+ return None\ndiff --git a/conans/client/conf/required_version.py b/conans/client/conf/required_version.py\nnew file mode 100644\nindex 00000000000..53062a7c787\n--- /dev/null\n+++ b/conans/client/conf/required_version.py\n@@ -0,0 +1,28 @@\n+from conans.client.cache.cache import ClientCache\n+from conans.client.graph.range_resolver import satisfying\n+from conans import __version__ as client_version\n+from conans.errors import ConanException\n+\n+\n+def check_required_conan_version(cache_folder, out):\n+ \"\"\" Check if the required Conan version in config file matches to the current Conan version\n+\n+ When required_conan_version is not configured, it's skipped\n+ When required_conan_version is configured, Conan's version must matches the required\n+ version\n+ When it doesn't match, an ConanException is raised\n+\n+ :param cache_folder: Conan cache folder\n+ :param out: Output stream\n+ :return: None\n+ \"\"\"\n+ cache = ClientCache(cache_folder, out)\n+ required_version = cache.config.required_conan_version\n+ if required_version:\n+ output = \"\"\n+ result = satisfying([client_version], required_version, output)\n+ if not result:\n+ raise ConanException(\"The current Conan version ({}) does not match to the required version ({}).\"\n+ .format(client_version, required_version))\n+ elif result != client_version:\n+ raise ConanException(result)\ndiff --git a/conans/test/functional/configuration/required_version_test.py b/conans/test/functional/configuration/required_version_test.py\nnew file mode 100644\nindex 00000000000..62d6565f848\n--- /dev/null\n+++ b/conans/test/functional/configuration/required_version_test.py\n@@ -0,0 
+1,48 @@\n+import unittest\n+import mock\n+from conans.test.utils.tools import TestClient\n+from conans.errors import ConanException\n+\n+\n+class RequiredVersionTest(unittest.TestCase):\n+\n+ @mock.patch(\"conans.client.conf.required_version.client_version\", \"1.26.0\")\n+ def test_wrong_version(self):\n+ required_version = \"1.23.0\"\n+ client = TestClient()\n+ client.run(\"config set general.required_conan_version={}\".format(required_version))\n+ with self.assertRaises(ConanException) as error:\n+ client.run(\"help\")\n+ self.assertIn(\"The current Conan version ({}) \"\n+ \"does not match to the required version ({}).\"\n+ .format( \"1.26.0\", required_version), str(error.exception))\n+\n+ @mock.patch(\"conans.client.conf.required_version.client_version\", \"1.22.0\")\n+ def test_exact_version(self):\n+ client = TestClient()\n+ client.run(\"config set general.required_conan_version={}\".format(\"1.22.0\"))\n+ client.run(\"help\")\n+ self.assertNotIn(\"ERROR\", client.out)\n+\n+ @mock.patch(\"conans.client.conf.required_version.client_version\", \"2.1.0\")\n+ def test_lesser_version(self):\n+ client = TestClient()\n+ client.run(\"config set general.required_conan_version=<3\")\n+ client.run(\"help\")\n+ self.assertNotIn(\"ERROR\", client.out)\n+\n+ @mock.patch(\"conans.client.conf.required_version.client_version\", \"1.0.0\")\n+ def test_greater_version(self):\n+ client = TestClient()\n+ client.run(\"config set general.required_conan_version=>0.1.0\")\n+ client.run(\"help\")\n+ self.assertNotIn(\"ERROR\", client.out)\n+\n+ def test_bad_format(self):\n+ client = TestClient()\n+ required_version = \"1.0.0.0-foobar\"\n+ client.run(\"config set general.required_conan_version={}\".format(required_version))\n+ with self.assertRaises(ConanException) as error:\n+ client.run(\"help\", assert_error=True)\n+ self.assertIn(\"version range expression '1.0.0.0-foobar' is not valid\",\n+ str(error.exception))\n"
}
|
[
{
"diff_hunk": "@@ -0,0 +1,48 @@\n+import unittest\n+import mock\n+from conans.test.utils.tools import TestClient\n+from conans.errors import ConanException\n+\n+\n+class RequiredVersionTest(unittest.TestCase):\n+\n+ @mock.patch(\"conans.client.conf.required_version.client_version\", \"1.26.0\")\n+ def test_wrong_version(self):\n+ required_version = \"1.23.0\"\n+ client = TestClient()\n+ client.run(\"config set general.required_conan_version={}\".format(required_version))\n+ with self.assertRaises(ConanException) as error:\n+ client.run(\"help\")\n+ self.assertIn(\"The current Conan version ({}) \"\n+ \"does not match to the required version ({}).\"\n+ .format( \"1.26.0\", required_version), str(error.exception))\n+\n+ @mock.patch(\"conans.client.conf.required_version.client_version\", \"1.22.0\")\n+ def test_exact_version(self):\n+ client = TestClient()\n+ client.run(\"config set general.required_conan_version={}\".format(\"1.22.0\"))",
"line": null,
"original_line": 23,
"original_start_line": null,
"path": "conans/test/functional/configuration/required_version_test.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n client.run(\"config set general.required_conan_version=1.22.0\")\r\n```"
},
{
"diff_hunk": "@@ -0,0 +1,28 @@\n+from conans.client.cache.cache import ClientCache\n+from conans.client.graph.range_resolver import satisfying\n+from conans import __version__ as client_version\n+from conans.errors import ConanException\n+\n+\n+def check_required_conan_version(cache_folder, out):\n+ \"\"\" Check if the required Conan version in config file matches to the current Conan version\n+\n+ When required_conan_version is not configured, it's skipped\n+ When required_conan_version is configured, Conan's version must matches the required\n+ version\n+ When it doesn't match, an ConanException is raised\n+\n+ :param cache_folder: Conan cache folder\n+ :param out: Output stream\n+ :return: None\n+ \"\"\"\n+ cache = ClientCache(cache_folder, out)\n+ required_version = cache.config.required_conan_version\n+ if required_version:\n+ output = \"\"\n+ result = satisfying([client_version], required_version, output)",
"line": null,
"original_line": 23,
"original_start_line": null,
"path": "conans/client/conf/required_version.py",
"start_line": null,
"text": "@user1:\nIMO we shouldn't couple this functionality to the way we resolve version ranges. It's unlikely that we are going to change it soon, but there was an issue with the proposal. We can still use the same python library, but here we only need one line:\r\n\r\n```python\r\nfrom semver import satisfies\r\n\r\ntrue/false = satisfies(client_version, required_version) \r\n \r\n``` \r\n\r\nDo you think so?\n\n@author:\nif it works, I agree that is better! Thanks @user1"
}
] |
b5f3ff0f111e5bb646eed53b0da97a405f214705
|
diff --git a/conans/client/conan_api.py b/conans/client/conan_api.py
index b75a3eb299c..72c21aa74cb 100644
--- a/conans/client/conan_api.py
+++ b/conans/client/conan_api.py
@@ -20,6 +20,7 @@
from conans.client.cmd.uploader import CmdUpload
from conans.client.cmd.user import user_set, users_clean, users_list, token_present
from conans.client.conanfile.package import run_package_method
+from conans.client.conf.required_version import check_required_conan_version
from conans.client.graph.graph import RECIPE_EDITABLE
from conans.client.graph.graph_binaries import GraphBinariesAnalyzer
from conans.client.graph.graph_manager import GraphManager
@@ -233,6 +234,7 @@ def __init__(self, cache_folder=None, output=None, user_io=None, http_requester=
# Migration system
migrator = ClientMigrator(self.cache_folder, Version(client_version), self.out)
migrator.migrate()
+ check_required_conan_version(self.cache_folder, self.out)
if not get_env(CONAN_V2_MODE_ENVVAR, False):
# FIXME Remove in Conan 2.0
sys.path.append(os.path.join(self.cache_folder, "python"))
diff --git a/conans/client/conf/__init__.py b/conans/client/conf/__init__.py
index 97dccd54f0a..9bbd6e40c24 100644
--- a/conans/client/conf/__init__.py
+++ b/conans/client/conf/__init__.py
@@ -177,6 +177,7 @@ def get_default_settings_yml(force_v1=False):
{% endif %}
# config_install_interval = 1h
+ # required_conan_version = >=1.26
[storage]
# This is the default path, but you can write your own. It must be an absolute path or a
@@ -709,3 +710,10 @@ def config_install_interval(self):
except Exception as e:
raise ConanException("Incorrect definition of general.config_install_interval: %s"
% interval)
+
+ @property
+ def required_conan_version(self):
+ try:
+ return self.get_item("general.required_conan_version")
+ except ConanException:
+ return None
diff --git a/conans/client/conf/required_version.py b/conans/client/conf/required_version.py
new file mode 100644
index 00000000000..bb6b1bf6bfe
--- /dev/null
+++ b/conans/client/conf/required_version.py
@@ -0,0 +1,30 @@
+from conans.client.cache.cache import ClientCache
+from semver import satisfies, Range
+from conans import __version__ as client_version
+from conans.errors import ConanException
+
+
+def check_required_conan_version(cache_folder, out):
+ """ Check if the required Conan version in config file matches to the current Conan version
+
+ When required_conan_version is not configured, it's skipped
+ When required_conan_version is configured, Conan's version must matches the required
+ version
+ When it doesn't match, an ConanException is raised
+
+ :param cache_folder: Conan cache folder
+ :param out: Output stream
+ :return: None
+ """
+ cache = ClientCache(cache_folder, out)
+ required_version = cache.config.required_conan_version
+ if required_version:
+ try:
+ Range(required_version, False)
+ except ValueError:
+ raise ConanException("The required version expression '{}' is not valid."
+ .format(required_version))
+ result = satisfies(client_version, required_version)
+ if not result:
+ raise ConanException("The current Conan version ({}) does not match to the required"
+ " version ({}).".format(client_version, required_version))
diff --git a/conans/test/functional/configuration/required_version_test.py b/conans/test/functional/configuration/required_version_test.py
new file mode 100644
index 00000000000..a00189e9a6f
--- /dev/null
+++ b/conans/test/functional/configuration/required_version_test.py
@@ -0,0 +1,48 @@
+import unittest
+import mock
+from conans.test.utils.tools import TestClient
+from conans.errors import ConanException
+
+
+class RequiredVersionTest(unittest.TestCase):
+
+ @mock.patch("conans.client.conf.required_version.client_version", "1.26.0")
+ def test_wrong_version(self):
+ required_version = "1.23.0"
+ client = TestClient()
+ client.run("config set general.required_conan_version={}".format(required_version))
+ with self.assertRaises(ConanException) as error:
+ client.run("help")
+ self.assertIn("The current Conan version ({}) "
+ "does not match to the required version ({})."
+ .format("1.26.0", required_version), str(error.exception))
+
+ @mock.patch("conans.client.conf.required_version.client_version", "1.22.0")
+ def test_exact_version(self):
+ client = TestClient()
+ client.run("config set general.required_conan_version=1.22.0")
+ client.run("help")
+ self.assertNotIn("ERROR", client.out)
+
+ @mock.patch("conans.client.conf.required_version.client_version", "2.1.0")
+ def test_lesser_version(self):
+ client = TestClient()
+ client.run("config set general.required_conan_version=<3")
+ client.run("help")
+ self.assertNotIn("ERROR", client.out)
+
+ @mock.patch("conans.client.conf.required_version.client_version", "1.0.0")
+ def test_greater_version(self):
+ client = TestClient()
+ client.run("config set general.required_conan_version=>0.1.0")
+ client.run("help")
+ self.assertNotIn("ERROR", client.out)
+
+ def test_bad_format(self):
+ client = TestClient()
+ required_version = "1.0.0.0-foobar"
+ client.run("config set general.required_conan_version={}".format(required_version))
+ with self.assertRaises(ConanException) as error:
+ client.run("help", assert_error=True)
+ self.assertIn("The required version expression '{}' is not valid.".format(required_version),
+ str(error.exception))
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
}
|
|
conan-io__conan-6475@2febb60
|
conan-io/conan
|
Python
| 6,475
|
fixing mixing apis for download cache
|
Changelog: Fix: Avoid caching revision "0" under api V2 (revisions enabled) in the download cache.
Docs: https://github.com/conan-io/docs/pull/1552
Fix https://github.com/conan-io/conan/issues/6467
|
2020-02-05T22:03:58Z
|
[bug] Download cache should handle v1 and v2 clients simultaneously
Download cache uses URLs for caching things in api V2, which works well for revisioned things, because the revision is a hash. But this fails if clients using v1 keep re-uploading the revision "0", which shouldn't be cached.
To do:
- Implement return of checksums in Api V2 (@jgsogo). The api is prepared for that.
- Skip the caching of revision "0" artifacts in Api V2.
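
The second to-do item boils down to a small gate applied before the URL-keyed download cache is used; `should_use_cache` is a hypothetical helper name, but the constant and the comparison are exactly what the fix below applies:

```python
from conans import DEFAULT_REVISION_V1  # "0", the pseudo-revision uploaded by v1 clients

def should_use_cache(revision):
    # Revision "0" artifacts can be re-uploaded (mutated) by v1 clients,
    # so their URLs must never be served from the download cache.
    return revision != DEFAULT_REVISION_V1
```

In the merged patch below this boolean is passed as `use_cache` into `_get_remote_file_contents` and `_download_and_save_files`.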
|
What do you mean with...?
> Implement return of checksums in Api V2
```python
class RestV2Methods(RestCommonMethods):
def _get_file_list_json(self, url):
data = self.get_json(url)
# Discarding (.keys()) still empty metadata for files
data["files"] = list(data["files"].keys())
return data
```
The API is prepared so that this data can be retrieved when the server is asked for files. We should return the checksum of the files here, as is done in the "snapshot" methods of API v1, and propagate those checksums. The first necessary change is to implement that on the server side.
|
[
{
"body": "Download cache uses URLs for caching things in api V2, which works well for revisioned things, because the revision is a hash. But this fails if clients using v1 keep re-uploading the revision \"0\", which shouldn't be cached. \r\nTo do:\r\n\r\n- Implement return of checksums in Api V2 (@jgsogo). The api is prepared for that.\r\n- Skip the caching of revision \"0\" artifacts in Api V2.",
"number": 6467,
"title": "[bug] Download cache should handle v1 and v2 clients simultaneously"
}
] |
2782e27c89d8c32550fe1fdf640c2cf9b38fe91f
|
{
"head_commit": "2febb60e969496d5a7a0d4e3a285f861d51a0028",
"head_commit_message": "removed defaults",
"patch_to_review": "diff --git a/conans/client/rest/rest_client_v2.py b/conans/client/rest/rest_client_v2.py\nindex 99dc9db1ffe..0cc5b4929c8 100644\n--- a/conans/client/rest/rest_client_v2.py\n+++ b/conans/client/rest/rest_client_v2.py\n@@ -37,10 +37,10 @@ def _get_file_list_json(self, url):\n data[\"files\"] = list(data[\"files\"].keys())\n return data\n \n- def _get_remote_file_contents(self, url):\n+ def _get_remote_file_contents(self, url, cache):\n # We don't want traces in output of these downloads, they are ugly in output\n downloader = FileDownloader(self.requester, None, self.verify_ssl, self._config)\n- if self._config.download_cache:\n+ if cache and self._config.download_cache:\n downloader = CachedFileDownloader(self._config.download_cache, downloader)\n contents = downloader.download(url, auth=self.auth)\n return contents\n@@ -58,12 +58,14 @@ def get_recipe_manifest(self, ref):\n if not ref.revision:\n ref = self.get_latest_recipe_revision(ref)\n url = self.router.recipe_manifest(ref)\n- content = self._get_remote_file_contents(url)\n+ cache = (ref.revision != \"0\")\n+ content = self._get_remote_file_contents(url, cache=cache)\n return FileTreeManifest.loads(decode_text(content))\n \n def get_package_manifest(self, pref):\n url = self.router.package_manifest(pref)\n- content = self._get_remote_file_contents(url)\n+ cache = (pref.revision != \"0\")\n+ content = self._get_remote_file_contents(url, cache=cache)\n try:\n return FileTreeManifest.loads(decode_text(content))\n except Exception as e:\n@@ -75,7 +77,8 @@ def get_package_manifest(self, pref):\n \n def get_package_info(self, pref):\n url = self.router.package_info(pref)\n- content = self._get_remote_file_contents(url)\n+ cache = (pref.revision != \"0\")\n+ content = self._get_remote_file_contents(url, cache=cache)\n return ConanInfo.loads(decode_text(content))\n \n def get_recipe(self, ref, dest_folder):\n@@ -88,7 +91,8 @@ def get_recipe(self, ref, dest_folder):\n \n # If we didn't indicated reference, server got the latest, use absolute now, it's safer\n urls = {fn: self.router.recipe_file(ref, fn) for fn in files}\n- self._download_and_save_files(urls, dest_folder, files)\n+ cache = (ref.revision != \"0\")\n+ self._download_and_save_files(urls, dest_folder, files, cache=cache)\n ret = {fn: os.path.join(dest_folder, fn) for fn in files}\n return ret\n \n@@ -106,7 +110,8 @@ def get_recipe_sources(self, ref, dest_folder):\n \n # If we didn't indicated reference, server got the latest, use absolute now, it's safer\n urls = {fn: self.router.recipe_file(ref, fn) for fn in files}\n- self._download_and_save_files(urls, dest_folder, files)\n+ cache = (ref.revision != \"0\")\n+ self._download_and_save_files(urls, dest_folder, files, cache=cache)\n ret = {fn: os.path.join(dest_folder, fn) for fn in files}\n return ret\n \n@@ -117,7 +122,8 @@ def get_package(self, pref, dest_folder):\n check_compressed_files(PACKAGE_TGZ_NAME, files)\n # If we didn't indicated reference, server got the latest, use absolute now, it's safer\n urls = {fn: self.router.package_file(pref, fn) for fn in files}\n- self._download_and_save_files(urls, dest_folder, files)\n+ cache = (pref.revision != \"0\")\n+ self._download_and_save_files(urls, dest_folder, files, cache=cache)\n ret = {fn: os.path.join(dest_folder, fn) for fn in files}\n return ret\n \n@@ -128,7 +134,8 @@ def get_recipe_path(self, ref, path):\n return self._list_dir_contents(path, files)\n else:\n url = self.router.recipe_file(ref, path)\n- content = self._get_remote_file_contents(url)\n+ 
cache = (ref.revision != \"0\")\n+ content = self._get_remote_file_contents(url, cache=cache)\n return decode_text(content)\n \n def get_package_path(self, pref, path):\n@@ -136,7 +143,8 @@ def get_package_path(self, pref, path):\n url = self.router.package_snapshot(pref)\n files = self._get_file_list_json(url)\n if self._is_dir(path, files):\n- return self._list_dir_contents(path, files)\n+ cache = (pref.revision != \"0\")\n+ return self._list_dir_contents(path, files, cache=cache)\n else:\n url = self.router.package_file(pref, path)\n content = self._get_remote_file_contents(url)\n@@ -205,9 +213,9 @@ def _upload_files(self, files, urls, retry, retry_wait, display_name=None):\n else:\n logger.debug(\"\\nUPLOAD: All uploaded! Total time: %s\\n\" % str(time.time() - t1))\n \n- def _download_and_save_files(self, urls, dest_folder, files):\n+ def _download_and_save_files(self, urls, dest_folder, files, cache):\n downloader = FileDownloader(self.requester, self._output, self.verify_ssl, self._config)\n- if self._config.download_cache:\n+ if cache and self._config.download_cache:\n downloader = CachedFileDownloader(self._config.download_cache, downloader)\n # Take advantage of filenames ordering, so that conan_package.tgz and conan_export.tgz\n # can be < conanfile, conaninfo, and sent always the last, so smaller files go first\ndiff --git a/conans/test/functional/download_cache_test.py b/conans/test/functional/download_cache_test.py\nindex 986017e9f55..0a324306ba2 100644\n--- a/conans/test/functional/download_cache_test.py\n+++ b/conans/test/functional/download_cache_test.py\n@@ -168,6 +168,41 @@ def source(self):\n self.assertIn(\"ERROR: conanfile.py: Error in source() method, line 7\", client.out)\n self.assertIn(\"Not found: http://localhost\", client.out)\n \n+ @unittest.skipIf(get_env(\"TESTING_REVISIONS_ENABLED\", False), \"Hybrid test with both v1 and v2\")\n+ def test_revision0_v2_skip(self):\n+ client = TestClient(default_server_user=True)\n+ client.run(\"config set general.revisions_enabled=False\")\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ class Pkg(ConanFile):\n+ exports = \"*\"\n+ def package(self):\n+ self.copy(\"*\")\n+ def deploy(self):\n+ self.copy(\"*\")\n+ \"\"\")\n+ client.save({\"conanfile.py\": conanfile,\n+ \"header.h\": \"header\"})\n+ client.run(\"create . mypkg/0.1@user/testing\")\n+ client.run(\"upload * --all --confirm\")\n+\n+ client2 = TestClient(servers=client.servers)\n+ client2.run(\"config set general.revisions_enabled=True\")\n+ cache_folder = temp_folder()\n+ client2.run('config set storage.download_cache=\"%s\"' % cache_folder)\n+ client2.run(\"install mypkg/0.1@user/testing\")\n+ self.assertEqual(\"header\", client2.load(\"header.h\"))\n+\n+ # modify non-revisioned pkg\n+ client.save({\"conanfile.py\": conanfile,\n+ \"header.h\": \"header2\"})\n+ client.run(\"create . mypkg/0.1@user/testing\")\n+ client.run(\"upload * --all --confirm\")\n+\n+ client2.run(\"remove * -f\")\n+ client2.run(\"install mypkg/0.1@user/testing\")\n+ self.assertEqual(\"header2\", client2.load(\"header.h\"))\n+\n \n class CachedDownloaderUnitTest(unittest.TestCase):\n def setUp(self):\n"
}
|
[
{
"diff_hunk": "@@ -58,12 +58,14 @@ def get_recipe_manifest(self, ref):\n if not ref.revision:\n ref = self.get_latest_recipe_revision(ref)\n url = self.router.recipe_manifest(ref)\n- content = self._get_remote_file_contents(url)\n+ cache = (ref.revision != \"0\")",
"line": null,
"original_line": 61,
"original_start_line": null,
"path": "conans/client/rest/rest_client_v2.py",
"start_line": null,
"text": "@user1:\nIn the sources, when we refer to this _magic number_ we use `DEFAULT_REVISION_V1`\r\n\r\n```suggestion\r\n cache = (ref.revision != DEFAULT_REVISION_V1)\r\n```"
},
{
"diff_hunk": "@@ -37,10 +37,10 @@ def _get_file_list_json(self, url):\n data[\"files\"] = list(data[\"files\"].keys())\n return data\n \n- def _get_remote_file_contents(self, url):\n+ def _get_remote_file_contents(self, url, cache):",
"line": null,
"original_line": 40,
"original_start_line": null,
"path": "conans/client/rest/rest_client_v2.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n def _get_remote_file_contents(self, url, use_cache):\r\n```\r\n\r\nCan you rename it to `use_cache`?"
}
] |
c31cf9aa6fbb47a4c65f44112b48129803316bea
|
diff --git a/conans/client/rest/rest_client_v2.py b/conans/client/rest/rest_client_v2.py
index 99dc9db1ffe..98bfc157c2f 100644
--- a/conans/client/rest/rest_client_v2.py
+++ b/conans/client/rest/rest_client_v2.py
@@ -2,6 +2,7 @@
import time
import traceback
+from conans import DEFAULT_REVISION_V1
from conans.client.remote_manager import check_compressed_files
from conans.client.rest.client_routes import ClientV2Router
from conans.client.rest.download_cache import CachedFileDownloader
@@ -37,10 +38,10 @@ def _get_file_list_json(self, url):
data["files"] = list(data["files"].keys())
return data
- def _get_remote_file_contents(self, url):
+ def _get_remote_file_contents(self, url, use_cache):
# We don't want traces in output of these downloads, they are ugly in output
downloader = FileDownloader(self.requester, None, self.verify_ssl, self._config)
- if self._config.download_cache:
+ if use_cache and self._config.download_cache:
downloader = CachedFileDownloader(self._config.download_cache, downloader)
contents = downloader.download(url, auth=self.auth)
return contents
@@ -58,12 +59,14 @@ def get_recipe_manifest(self, ref):
if not ref.revision:
ref = self.get_latest_recipe_revision(ref)
url = self.router.recipe_manifest(ref)
- content = self._get_remote_file_contents(url)
+ cache = (ref.revision != DEFAULT_REVISION_V1)
+ content = self._get_remote_file_contents(url, use_cache=cache)
return FileTreeManifest.loads(decode_text(content))
def get_package_manifest(self, pref):
url = self.router.package_manifest(pref)
- content = self._get_remote_file_contents(url)
+ cache = (pref.revision != DEFAULT_REVISION_V1)
+ content = self._get_remote_file_contents(url, use_cache=cache)
try:
return FileTreeManifest.loads(decode_text(content))
except Exception as e:
@@ -75,7 +78,8 @@ def get_package_manifest(self, pref):
def get_package_info(self, pref):
url = self.router.package_info(pref)
- content = self._get_remote_file_contents(url)
+ cache = (pref.revision != DEFAULT_REVISION_V1)
+ content = self._get_remote_file_contents(url, use_cache=cache)
return ConanInfo.loads(decode_text(content))
def get_recipe(self, ref, dest_folder):
@@ -88,7 +92,8 @@ def get_recipe(self, ref, dest_folder):
# If we didn't indicated reference, server got the latest, use absolute now, it's safer
urls = {fn: self.router.recipe_file(ref, fn) for fn in files}
- self._download_and_save_files(urls, dest_folder, files)
+ cache = (ref.revision != DEFAULT_REVISION_V1)
+ self._download_and_save_files(urls, dest_folder, files, use_cache=cache)
ret = {fn: os.path.join(dest_folder, fn) for fn in files}
return ret
@@ -106,7 +111,8 @@ def get_recipe_sources(self, ref, dest_folder):
# If we didn't indicated reference, server got the latest, use absolute now, it's safer
urls = {fn: self.router.recipe_file(ref, fn) for fn in files}
- self._download_and_save_files(urls, dest_folder, files)
+ cache = (ref.revision != DEFAULT_REVISION_V1)
+ self._download_and_save_files(urls, dest_folder, files, use_cache=cache)
ret = {fn: os.path.join(dest_folder, fn) for fn in files}
return ret
@@ -117,7 +123,8 @@ def get_package(self, pref, dest_folder):
check_compressed_files(PACKAGE_TGZ_NAME, files)
# If we didn't indicated reference, server got the latest, use absolute now, it's safer
urls = {fn: self.router.package_file(pref, fn) for fn in files}
- self._download_and_save_files(urls, dest_folder, files)
+ cache = (pref.revision != DEFAULT_REVISION_V1)
+ self._download_and_save_files(urls, dest_folder, files, use_cache=cache)
ret = {fn: os.path.join(dest_folder, fn) for fn in files}
return ret
@@ -128,7 +135,8 @@ def get_recipe_path(self, ref, path):
return self._list_dir_contents(path, files)
else:
url = self.router.recipe_file(ref, path)
- content = self._get_remote_file_contents(url)
+ cache = (ref.revision != DEFAULT_REVISION_V1)
+ content = self._get_remote_file_contents(url, use_cache=cache)
return decode_text(content)
def get_package_path(self, pref, path):
@@ -139,7 +147,8 @@ def get_package_path(self, pref, path):
return self._list_dir_contents(path, files)
else:
url = self.router.package_file(pref, path)
- content = self._get_remote_file_contents(url)
+ cache = (pref.revision != DEFAULT_REVISION_V1)
+ content = self._get_remote_file_contents(url, use_cache=cache)
return decode_text(content)
@staticmethod
@@ -205,9 +214,9 @@ def _upload_files(self, files, urls, retry, retry_wait, display_name=None):
else:
logger.debug("\nUPLOAD: All uploaded! Total time: %s\n" % str(time.time() - t1))
- def _download_and_save_files(self, urls, dest_folder, files):
+ def _download_and_save_files(self, urls, dest_folder, files, use_cache):
downloader = FileDownloader(self.requester, self._output, self.verify_ssl, self._config)
- if self._config.download_cache:
+ if use_cache and self._config.download_cache:
downloader = CachedFileDownloader(self._config.download_cache, downloader)
# Take advantage of filenames ordering, so that conan_package.tgz and conan_export.tgz
# can be < conanfile, conaninfo, and sent always the last, so smaller files go first
diff --git a/conans/test/functional/download_cache_test.py b/conans/test/functional/download_cache_test.py
index 986017e9f55..0a324306ba2 100644
--- a/conans/test/functional/download_cache_test.py
+++ b/conans/test/functional/download_cache_test.py
@@ -168,6 +168,41 @@ def source(self):
self.assertIn("ERROR: conanfile.py: Error in source() method, line 7", client.out)
self.assertIn("Not found: http://localhost", client.out)
+ @unittest.skipIf(get_env("TESTING_REVISIONS_ENABLED", False), "Hybrid test with both v1 and v2")
+ def test_revision0_v2_skip(self):
+ client = TestClient(default_server_user=True)
+ client.run("config set general.revisions_enabled=False")
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+ class Pkg(ConanFile):
+ exports = "*"
+ def package(self):
+ self.copy("*")
+ def deploy(self):
+ self.copy("*")
+ """)
+ client.save({"conanfile.py": conanfile,
+ "header.h": "header"})
+ client.run("create . mypkg/0.1@user/testing")
+ client.run("upload * --all --confirm")
+
+ client2 = TestClient(servers=client.servers)
+ client2.run("config set general.revisions_enabled=True")
+ cache_folder = temp_folder()
+ client2.run('config set storage.download_cache="%s"' % cache_folder)
+ client2.run("install mypkg/0.1@user/testing")
+ self.assertEqual("header", client2.load("header.h"))
+
+ # modify non-revisioned pkg
+ client.save({"conanfile.py": conanfile,
+ "header.h": "header2"})
+ client.run("create . mypkg/0.1@user/testing")
+ client.run("upload * --all --confirm")
+
+ client2.run("remove * -f")
+ client2.run("install mypkg/0.1@user/testing")
+ self.assertEqual("header2", client2.load("header.h"))
+
class CachedDownloaderUnitTest(unittest.TestCase):
def setUp(self):
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-6947@b057db2
|
conan-io/conan
|
Python
| 6,947
|
fix package_id computation for mixed modes
|
Changelog: Bugfix: Prevent crash when mixing package_id modes for the same dependency.
Docs: Omit
Close https://github.com/conan-io/conan/issues/6942
|
2020-05-03T12:58:58Z
|
[bug] unable to build packages with package_revision_mode enabled.
Conan crashes after enabling package_revision_mode and trying to rebuild all of our projects.
Stacktrace attached.
### Environment Details (include every applicable attribute)
* Operating System+version: rh7
* Compiler+version: gcc8
* Conan version: 1.24.0
* Python version: 3.6.10
### Steps to reproduce (Include if Applicable)
Have a complex graph tree.
I don't have an MVE (minimal example) at the moment, but I'll try to pinpoint something.
Global package_mode would be set to 'package_revision_mode'.
In some of the recipes we are using default versioning_schema.
In some of the recipes we are specifying use semver_mode for some of its dependencies.
In some of the recipes we are specifying use of full_package_mode for some of its dependencies.
### Logs stacktrace.
invoked: conan create . --build missing
```
30-Apr-2020 16:37:19 xyz/abc: Unknown binary for xyz/abc, computing updated ID
30-Apr-2020 16:37:19 Traceback (most recent call last):
30-Apr-2020 16:37:19 File "conan/conans/client/command.py", line 2002, in run
30-Apr-2020 16:37:19 File "conan/conans/client/command.py", line 369, in create
30-Apr-2020 16:37:19 File "conan/conans/client/conan_api.py", line 89, in wrapper
30-Apr-2020 16:37:19 File "conan/conans/client/conan_api.py", line 368, in create
30-Apr-2020 16:37:19 File "conan/conans/client/cmd/create.py", line 57, in create
30-Apr-2020 16:37:19 File "conan/conans/client/manager.py", line 75, in deps_install
30-Apr-2020 16:37:19 File "conan/conans/client/installer.py", line 309, in install
30-Apr-2020 16:37:19 File "conan/conans/client/installer.py", line 404, in _build
30-Apr-2020 16:37:19 File "conan/conans/client/graph/graph_binaries.py", line 347, in reevaluate_node
30-Apr-2020 16:37:19 File "conan/conans/client/graph/graph_binaries.py", line 319, in _compute_package_id
30-Apr-2020 16:37:19 File "conan/conans/model/info.py", line 540, in package_id
30-Apr-2020 16:37:19 File "conan/conans/model/info.py", line 216, in sha
30-Apr-2020 16:37:19 TypeError: '<' not supported between instances of 'NoneType' and 'str'
```
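
The `TypeError` in the last frame comes from sorting requirement entries whose keys mix a still-unknown (`None`) package revision with real revision strings; the fix later in this record adds a `__lt__` to `PackageReference` that falls back to `""`. A minimal standalone illustration with plain tuples (not Conan objects):

```python
# Two entries for the same reference and package_id, one with its package
# revision still unknown (None), one already resolved to a hash:
entries = [("boost/1.72.0", "3f890e8d", "33089a35"),
           ("boost/1.72.0", "3f890e8d", None)]

try:
    sorted(entries)
except TypeError as e:
    print(e)  # '<' not supported between instances of 'NoneType' and 'str'

# The fix: treat an unknown revision as "" when ordering
print(sorted(entries, key=lambda t: (t[0], t[1], t[2] or "")))
```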
|
Checking the trace, I cannot see how a None gets there. Please keep us posted if you can reproduce, and if not, I will try to provide a branch with some traces so you can run it in your environment.
@memsharded it's 100% reproducible for me, I just have to wait for a full build (we only upload at the end of the build) so it takes ~25 minutes to build.
Just checked - same stacktrace, if you want me to drop something - np.
Actually I was thinking that I will have to postpone this migration.
Hi @fulara
Quick question, do you have enabled ``full_transitive_package_id`` in your conan.conf configuration?
Also adding these two lines might help isolating the origin of the bug:
```patch
diff --git a/conans/client/graph/graph_binaries.py b/conans/client/graph/graph_binaries.py
index 16de60e89..e0729103e 100644
--- a/conans/client/graph/graph_binaries.py
+++ b/conans/client/graph/graph_binaries.py
@@ -294,6 +294,9 @@ class GraphBinariesAnalyzer(object):
direct_reqs = node.id_direct_prefs
indirect_reqs = node.id_indirect_prefs
+ assert None not in direct_reqs, "None found in direct_reqs"
+ assert None not in indirect_reqs, "None found in indirect_reqs"
+
python_requires = getattr(conanfile, "python_requires", None)
if python_requires:
if isinstance(python_requires, dict):
```
@memsharded unless I made a mistake (I don't think I have) this assert didn't trigger.
Yes, of course I am using `full_transitive_package_id` - after all it wouldn't be fair if I hadn't? :)
https://github.com/fulara/conan/tree/package_revision_mode
EDIT:
i added one more silly print:
```
print("PRINTING NOW STUFF! \n")
for key, value in self._data.items():
    print("KEY IS" + str(key) + " value is: " + str(value) + " dumps: " + value.dumps() + " \n ")
```
the result i got just before failing is:
```
PRINTING NOW STUFF!
KEY ISboost/1.72.0:634ce480c172f753ce13327c4f9f6d3a5eabcc32 value is: <conans.model.info.RequirementInfo object at 0x7f5a07491f60> dumps: boost/1.72.0:634ce480c172f753ce13327c4f9f6d3a5eabcc32
KEY ISrfa-server-lib/4.13:4e6ce351510b1e0711182fec674aed81b210cd32 value is: <conans.model.info.RequirementInfo object at 0x7f5a07491ba8> dumps: rfa-server-lib/4.Y.Z
KEY ISrfa-convert-lib/2.5:d62d48a0c0e3e9eced23f5c6a0926139d0ff8478 value is: <conans.model.info.RequirementInfo object at 0x7f5a07d0ceb8> dumps: rfa-convert-lib/2.Y.Z
KEY ISigcounters/1.3:ddb66222e853666802a54adae5ac3e9befedc54b value is: <conans.model.info.RequirementInfo object at 0x7f5a07d0c6a0> dumps: igcounters/1.Y.Z
KEY ISpoco/1.9.4:ed4013c58aa4bd9377abede5e8e1db2513b6c7d0 value is: <conans.model.info.RequirementInfo object at 0x7f5a03b8ae80> dumps: poco/1.9.4:ed4013c58aa4bd9377abede5e8e1db2513b6c7d0
KEY ISdisruptor/2.4:55d8a52d22c8588a7455fc66d9546a9464da6adb value is: <conans.model.info.RequirementInfo object at 0x7f5a03b8a550> dumps: disruptor/2.Y.Z
KEY ISopenssl/1.0.2t:e54af4a8e0cd6901bb01dd9c8925a8859b6246b2 value is: <conans.model.info.RequirementInfo object at 0x7f5a03b8aa90> dumps: openssl/1.0.2t:e54af4a8e0cd6901bb01dd9c8925a8859b6246b2
KEY ISlibcli/1.9.7:f10ff948ad9a5779ed97e7f1c2f2e4c8cd675372 value is: <conans.model.info.RequirementInfo object at 0x7f5a03b8aeb8> dumps: libcli/1.Y.Z
KEY ISbzip2/1.0.8:0d28bbf593474851d2bc7d2ef0a546fcdc0233fe value is: <conans.model.info.RequirementInfo object at 0x7f5a03b8af60> dumps: bzip2/1.0.8:0d28bbf593474851d2bc7d2ef0a546fcdc0233fe
KEY ISrfa/8.0.1.E1:dc7e592c7c92901df7003480a162da6bd8500cbd value is: <conans.model.info.RequirementInfo object at 0x7f5a03b8a048> dumps: rfa/8.0.1.E1:dc7e592c7c92901df7003480a162da6bd8500cbd
KEY ISzlib/1.2.11:7d0f19a52d7be613ac3eb2f1ea1b8cc359e0bfe0 value is: <conans.model.info.RequirementInfo object at 0x7f5a03b8a1d0> dumps: zlib/1.2.11:7d0f19a52d7be613ac3eb2f1ea1b8cc359e0bfe0
KEY ISboost/1.72.0:634ce480c172f753ce13327c4f9f6d3a5eabcc32 value is: <conans.model.info.RequirementInfo object at 0x7f5a03b8af28> dumps: boost/1.72.0#3c1a4170f35bcad9109cb8bc720d56d1:634ce480c172f753ce13327c4f9f6d3a5eabcc32#PREV unknown
KEY ISfin-pricing-utils/2.3:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 value is: <conans.model.info.RequirementInfo object at 0x7f5a03b8a0b8> dumps: fin-pricing-utils/2.Y.Z
```
So its some member variable of RequirementInfo thats null.
@memsharded looking at the stacktrace I think it's because one of the 'boost' entries is depended on with package_recipe_mode and the other one with full_package_mode (which is a valid scenario), and you are missing a None check somewhere when comparing the prev or rrev.
> yes, of course i am using full_transitive_package_id after all it wouldnt be fair if i hadnt? :)
Well, I wouldn't blame you, sometimes it takes some time to migrate things :)
I don't think the mode is involved here, because the key should be always the ``PackageReference``, the full one. It is more like some of those keys of type PackageReference (which is a namedtuple) contains a None in one of their fields. If you could print instead:
```python
print("PRINTING NOW STUFF! \n")
for key, value in self._data.items():
print("KEY IS", key.ref.name, key.ref.version, key.ref.user, key.ref.channel, key.ref.revision,
key.id, key.revision, value.dumps() )
```
That should give us all the fields and we could identify the offending None (I still cannot figure out why there is None coming there)
here you go:
code is:
```
for key, value in self._data.items():
print("ref.name: ", key.ref.name, " ref.version ", key.ref.version, " ref.user ", key.ref.user, " ref.channel ", key.ref.channel, " ref.revision ", key.ref.revision, " id ",
key.id, " revision ", key.revision, " dumps ", value.dumps(), "\n" )
```
```
PRINTING NOW STUFF!
ref.name: rfa-convert-lib ref.version 2.5 ref.user None ref.channel None ref.revision 58f27ceb315f99e273c5ecadf75fcad3 id 3fea224fa96ea0aee3fd04b66788247769ce5a88 revision bee3e3c900c64414ab91cb812fb9a8f7 dumps rfa-convert-lib/2.Y.Z
ref.name: poco ref.version 1.9.4 ref.user None ref.channel None ref.revision 42c1907520edbda9833bec9ee7b22c5f id e8c0afb6bd27a5c10d28c2b8222e300f4f94381d revision bd15d0f9475a3207ca1f4611a110ef56 dumps poco/1.9.4:e8c0afb6bd27a5c10d28c2b8222e300f4f94381d
ref.name: rfa-server-lib ref.version 4.13 ref.user None ref.channel None ref.revision 04fe1a0bb78e44568b6052b7323168a9 id 49323f7cb1e8c34a8222d4bb4b82d19fa2bafdaa revision ec125f5a77a6bd3a8e628d276691608d dumps rfa-server-lib/4.Y.Z
ref.name: boost ref.version 1.72.0 ref.user None ref.channel None ref.revision 3c1a4170f35bcad9109cb8bc720d56d1 id 3f890e8db573d1fea921ff792e1d7a3e17718ab8 revision 33089a35430b60e799c76bb6b7d1a043 dumps boost/1.72.0:3f890e8db573d1fea921ff792e1d7a3e17718ab8
ref.name: igcounters ref.version 1.3 ref.user None ref.channel None ref.revision eee56f3742689bd079b9b246073cb700 id 75e3e5f7724fc192123090cbe892c40afa841ea3 revision adcafc3c3df160389b226cb3380d5c1c dumps igcounters/1.Y.Z
ref.name: fin-pricing-utils ref.version 2.3 ref.user None ref.channel None ref.revision f6f35138752d691fd1aac8b0114479ad id 5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 revision 262873877900a3059e1d60de48b37d1c dumps fin-pricing-utils/2.Y.Z
ref.name: openssl ref.version 1.0.2t ref.user None ref.channel None ref.revision 98b72be1284eb54893f45b1f043ecead id 135f0560b485ffdf5b9f993d63f2aaecc8fb281e revision df8c72fcd905d9c2885adefdfc0cc2d9 dumps openssl/1.0.2t:135f0560b485ffdf5b9f993d63f2aaecc8fb281e
ref.name: libcli ref.version 1.9.7 ref.user None ref.channel None ref.revision 990f469c93b05f229ff6d711cca6cda5 id 4a09d987d91684aecef7218066b1c8edb559d34e revision 72640db512697f14b8cc1f13423ca80b dumps libcli/1.Y.Z
ref.name: boost ref.version 1.72.0 ref.user None ref.channel None ref.revision 3c1a4170f35bcad9109cb8bc720d56d1 id 3f890e8db573d1fea921ff792e1d7a3e17718ab8 revision None dumps boost/1.72.0#3c1a4170f35bcad9109cb8bc720d56d1:3f890e8db573d1fea921ff792e1d7a3e17718ab8#PREV unknown
ref.name: bzip2 ref.version 1.0.8 ref.user None ref.channel None ref.revision ad6efb7d25adcbde4984125a43434af2 id 76a4a7324a2083cb6964ea8321da67a1ceb31b50 revision 7be9a190f5446bbb47422a8558050bdd dumps bzip2/1.0.8:76a4a7324a2083cb6964ea8321da67a1ceb31b50
ref.name: rfa ref.version 8.0.1.E1 ref.user None ref.channel None ref.revision a7f8a61e064b5d8ee8d46fa5b5219a1d id 2f48c782b34dfcb01aab706713eda393eed4a638 revision 45463dd5c2c58f243ae6a1f4f2c94986 dumps rfa/8.0.1.E1:2f48c782b34dfcb01aab706713eda393eed4a638
ref.name: disruptor ref.version 2.4 ref.user None ref.channel None ref.revision c11f9a2b83faf9ea36d5a829237cb247 id d4eab5eaba08a2639f6f809e50b7e7a7606d8829 revision 1fd0fade0d84bb48d3e62c384b9e3d34 dumps disruptor/2.Y.Z
ref.name: zlib ref.version 1.2.11 ref.user None ref.channel None ref.revision ddccdddea098293f5202c5e8eb29967b id 9fdb4217a0bb5bac441d7e17705c9172eeeb6cfe revision 0bf6e52e8a2a2bfaeca54f6c72adcc51 dumps zlib/1.2.11:9fdb4217a0bb5bac441d7e17705c9172eeeb6cfe
```
Ok, I start to see where it comes from. Working on a fix.
Trying to reproduce with a test first. I guess you are using ``private`` dependencies somewhere in the graph, aren't you?
Nope @memsharded, we don't use that kind of magic.
|
[
{
"body": "Conan crashing after enabled package_revision_mode and tried to rebuild all of our projects.\r\nstacktrace attached.\r\n\r\n### Environment Details (include every applicable attribute)\r\n * Operating System+version: rh7\r\n * Compiler+version: gcc8\r\n * Conan version: 1.24.0\r\n * Python version: 3.6.10\r\n\r\n### Steps to reproduce (Include if Applicable)\r\nHave a complex graph tree.\r\nI dont have mve at the moment, but i'll try to pinpoint something.\r\n\r\nGlobal package_mode would be set to 'package_revision_mode'.\r\nIn some of the recipes we are using default versioning_schema.\r\nIn some of the recipes we are specifying use semver_mode for some of its dependencies.\r\nIn some of the recipes we are specifying use of full_package_mode for some of its dependencies.\r\n\r\n### Logs stacktrace.\r\ninvoked: conan create . --build missing\r\n```\r\n30-Apr-2020 16:37:19 xyz/abc: Unknown binary for xyz/abc, computing updated ID\r\n30-Apr-2020 16:37:19 Traceback (most recent call last):\r\n30-Apr-2020 16:37:19 File \"conan/conans/client/command.py\", line 2002, in run\r\n30-Apr-2020 16:37:19 File \"conan/conans/client/command.py\", line 369, in create\r\n30-Apr-2020 16:37:19 File \"conan/conans/client/conan_api.py\", line 89, in wrapper\r\n30-Apr-2020 16:37:19 File \"conan/conans/client/conan_api.py\", line 368, in create\r\n30-Apr-2020 16:37:19 File \"conan/conans/client/cmd/create.py\", line 57, in create\r\n30-Apr-2020 16:37:19 File \"conan/conans/client/manager.py\", line 75, in deps_install\r\n30-Apr-2020 16:37:19 File \"conan/conans/client/installer.py\", line 309, in install\r\n30-Apr-2020 16:37:19 File \"conan/conans/client/installer.py\", line 404, in _build\r\n30-Apr-2020 16:37:19 File \"conan/conans/client/graph/graph_binaries.py\", line 347, in reevaluate_node\r\n30-Apr-2020 16:37:19 File \"conan/conans/client/graph/graph_binaries.py\", line 319, in _compute_package_id\r\n30-Apr-2020 16:37:19 File \"conan/conans/model/info.py\", line 540, in package_id\r\n30-Apr-2020 16:37:19 File \"conan/conans/model/info.py\", line 216, in sha\r\n30-Apr-2020 16:37:19 TypeError: '<' not supported between instances of 'NoneType' and 'str'\r\n```\r\n",
"number": 6942,
"title": "[bug] unable to build packages with package_revision_mode enabled."
}
] |
e498cefbc8e89308d2f0570707d51a3405401368
|
{
"head_commit": "b057db2d6cafafce62c13d16f3fee38f32e2d2f2",
"head_commit_message": "more fixes",
"patch_to_review": "diff --git a/conans/client/build/build.py b/conans/client/build/build.py\nindex 85311e9e174..df8422dbdae 100644\n--- a/conans/client/build/build.py\n+++ b/conans/client/build/build.py\n@@ -9,7 +9,8 @@\n def run_build_method(conanfile, hook_manager, **hook_kwargs):\n hook_manager.execute(\"pre_build\", conanfile=conanfile, **hook_kwargs)\n \n- logger.debug(\"Call conanfile.build() with files in build folder: %s\", os.listdir(conanfile.build_folder))\n+ logger.debug(\"Call conanfile.build() with files in build folder: %s\",\n+ os.listdir(conanfile.build_folder))\n with get_env_context_manager(conanfile):\n conanfile.output.highlight(\"Calling build()\")\n with conanfile_exception_formatter(str(conanfile), \"build\"):\ndiff --git a/conans/client/graph/graph.py b/conans/client/graph/graph.py\nindex f4f8ccaf391..9e2afdfb46c 100644\n--- a/conans/client/graph/graph.py\n+++ b/conans/client/graph/graph.py\n@@ -86,6 +86,24 @@ def __init__(self, ref, conanfile, context, recipe=None, path=None):\n self._ancestors = _NodeOrderedDict() # set{ref.name}\n self._id = None # Unique ID (uuid at the moment) of a node in the graph\n self.graph_lock_node = None # the locking information can be None\n+ self.id_direct_prefs = None\n+ self.id_indirect_prefs = None\n+\n+ def package_id_transitive_reqs(self):\n+ \"\"\"\n+ accumulate the direct and transitive requirements prefs necessary to compute the\n+ package_id\n+ :return: set(prefs) of direct deps, set(prefs) of transitive deps\n+ \"\"\"\n+ self.id_direct_prefs = set() # of PackageReference\n+ self.id_indirect_prefs = set() # of PackageReference, avoid duplicates\n+ for neighbor in self.neighbors():\n+ self.id_direct_prefs.add(neighbor.pref)\n+ self.id_indirect_prefs.update(neighbor.id_direct_prefs)\n+ self.id_indirect_prefs.update(neighbor.id_indirect_prefs)\n+ # Make sure not duplicated, totally necessary\n+ self.id_indirect_prefs.difference_update(self.id_direct_prefs)\n+ return self.id_direct_prefs, self.id_indirect_prefs\n \n @property\n def id(self):\ndiff --git a/conans/client/graph/graph_binaries.py b/conans/client/graph/graph_binaries.py\nindex 16de60e89af..ef33c8d918d 100644\n--- a/conans/client/graph/graph_binaries.py\n+++ b/conans/client/graph/graph_binaries.py\n@@ -283,16 +283,7 @@ def _compute_package_id(self, node, default_package_id_mode, default_python_requ\n # Make sure not duplicated\n indirect_reqs.difference_update(direct_reqs)\n else:\n- node.id_direct_prefs = set() # of PackageReference\n- node.id_indirect_prefs = set() # of PackageReference, avoid duplicates\n- for neighbor in neighbors:\n- node.id_direct_prefs.add(neighbor.pref)\n- node.id_indirect_prefs.update(neighbor.id_direct_prefs)\n- node.id_indirect_prefs.update(neighbor.id_indirect_prefs)\n- # Make sure not duplicated, totally necessary\n- node.id_indirect_prefs.difference_update(node.id_direct_prefs)\n- direct_reqs = node.id_direct_prefs\n- indirect_reqs = node.id_indirect_prefs\n+ direct_reqs, indirect_reqs = node.package_id_transitive_reqs()\n \n python_requires = getattr(conanfile, \"python_requires\", None)\n if python_requires:\ndiff --git a/conans/client/installer.py b/conans/client/installer.py\nindex ffcaf9d3b42..c19b152a012 100644\n--- a/conans/client/installer.py\n+++ b/conans/client/installer.py\n@@ -396,6 +396,8 @@ def _build(self, nodes_by_level, keep_build, root_node, graph_info, remotes, bui\n self._propagate_info(node, using_build_profile)\n if node.binary == BINARY_EDITABLE:\n self._handle_node_editable(node, graph_info)\n+ # Need a 
temporary package revision for package_revision_mode\n+ node.prev = \"0\"\n else:\n if node.binary == BINARY_SKIP: # Privates not necessary\n continue\n@@ -404,6 +406,8 @@ def _build(self, nodes_by_level, keep_build, root_node, graph_info, remotes, bui\n self._binaries_analyzer.reevaluate_node(node, remotes, build_mode, update)\n _handle_system_requirements(conan_file, node.pref, self._cache, output)\n self._handle_node_cache(node, keep_build, processed_package_refs, remotes)\n+ # After a node has been managed, better reset its transitive info\n+ node.package_id_transitive_reqs()\n \n # Finally, propagate information to root node (ref=None)\n self._propagate_info(root_node, using_build_profile)\ndiff --git a/conans/model/info.py b/conans/model/info.py\nindex 1d549c70516..59bfaf5b186 100644\n--- a/conans/model/info.py\n+++ b/conans/model/info.py\n@@ -189,6 +189,7 @@ def add(self, prefs_indirect, default_package_id_mode):\n def refs(self):\n \"\"\" used for updating downstream requirements with this\n \"\"\"\n+ # FIXME: This is a very bad name, it return prefs, not refs\n return list(self._data.keys())\n \n def _get_key(self, item):\ndiff --git a/conans/model/ref.py b/conans/model/ref.py\nindex f1bd7312ed1..0f8ce34ffe8 100644\n--- a/conans/model/ref.py\n+++ b/conans/model/ref.py\n@@ -287,6 +287,11 @@ def __repr__(self):\n def __str__(self):\n return \"%s:%s\" % (self.ref, self.id)\n \n+ def __lt__(self, other):\n+ me = self.ref, self.id, self.revision or \"\"\n+ other = other.ref, other.id, other.revision or \"\"\n+ return me < other\n+\n def full_str(self):\n str_rev = \"#%s\" % self.revision if self.revision else \"\"\n tmp = \"%s:%s%s\" % (self.ref.full_str(), self.id, str_rev)\ndiff --git a/conans/test/functional/package_id/package_id_requires_modes_test.py b/conans/test/functional/package_id/package_id_requires_modes_test.py\nindex 182f2a0b503..30a64d5222a 100644\n--- a/conans/test/functional/package_id/package_id_requires_modes_test.py\n+++ b/conans/test/functional/package_id/package_id_requires_modes_test.py\n@@ -482,10 +482,69 @@ def test_package_id_requires_patch_mode(self):\n self.assertIn(\"\"\"ERROR: Missing binary: libc/0.1.0@user/testing:e12c9d31fa508340bb8d0c4f9dd4c98a5d0ac082\n \n libc/0.1.0@user/testing: WARN: Can't find a 'libc/0.1.0@user/testing' package for the specified settings, options and dependencies:\n-- Settings: \n+- Settings:%s\n - Options: an_option=off, liba:an_option=off, libb:an_option=off, libbar:an_option=off, libfoo:an_option=off\n - Dependencies: libb/0.1.0@user/testing, libfoo/0.1.0@user/testing\n - Requirements: liba/0.1.0, libb/0.1.0, libbar/0.1.0, libfoo/0.1.0\n - Package ID: e12c9d31fa508340bb8d0c4f9dd4c98a5d0ac082\n \n-ERROR: Missing prebuilt package for 'libc/0.1.0@user/testing'\"\"\", self.client.out)\n+ERROR: Missing prebuilt package for 'libc/0.1.0@user/testing'\"\"\" % \" \", self.client.out)\n+\n+\n+class PackageIDErrorTest(unittest.TestCase):\n+\n+ def transitive_multi_mode_package_id_test(self):\n+ # https://github.com/conan-io/conan/issues/6942\n+ client = TestClient()\n+ client.run(\"config set general.default_package_id_mode=full_package_mode\")\n+ client.run(\"config set general.full_transitive_package_id=True\")\n+ client.save({\"conanfile.py\": GenConanfile()})\n+ client.run(\"export . dep1/1.0@user/testing\")\n+ client.save({\"conanfile.py\": GenConanfile().with_require_plain(\"dep1/1.0@user/testing\")})\n+ client.run(\"export . 
dep2/1.0@user/testing\")\n+\n+ pkg_revision_mode = \"self.info.requires.package_revision_mode()\"\n+ client.save({\"conanfile.py\": GenConanfile().with_require_plain(\"dep1/1.0@user/testing\")\n+ .with_package_id(pkg_revision_mode)})\n+ client.run(\"export . dep3/1.0@user/testing\")\n+\n+ client.save({\"conanfile.py\": GenConanfile().with_require_plain(\"dep2/1.0@user/testing\")\n+ .with_require_plain(\"dep3/1.0@user/testing\")})\n+ client.run('create . consumer/1.0@user/testing --build')\n+ self.assertIn(\"consumer/1.0@user/testing: Created\", client.out)\n+\n+ def transitive_multi_mode2_package_id_test(self):\n+ # https://github.com/conan-io/conan/issues/6942\n+ client = TestClient()\n+ client.run(\"config set general.default_package_id_mode=package_revision_mode\")\n+ # This is mandatory, otherwise it doesn't work\n+ client.run(\"config set general.full_transitive_package_id=True\")\n+\n+ client.save({\"conanfile.py\": GenConanfile()})\n+ client.run(\"export . dep1/1.0@user/testing\")\n+\n+ pkg_revision_mode = \"self.info.requires.full_version_mode()\"\n+ client.save({\"conanfile.py\": GenConanfile().with_require_plain(\"dep1/1.0@user/testing\")\n+ .with_package_id(pkg_revision_mode)})\n+ client.run(\"export . dep2/1.0@user/testing\")\n+\n+ client.save({\"conanfile.py\": GenConanfile().with_require_plain(\"dep2/1.0@user/testing\")})\n+ client.run('create . consumer/1.0@user/testing --build')\n+ self.assertIn(\"consumer/1.0@user/testing: Created\", client.out)\n+\n+ def package_revision_mode_editable_test(self):\n+ # Package revision mode crash when using editables\n+ client = TestClient()\n+ client.run(\"config set general.default_package_id_mode=package_revision_mode\")\n+ client.run(\"config set general.full_transitive_package_id=True\")\n+\n+ client.save({\"conanfile.py\": GenConanfile()})\n+ client.run(\"editable add . dep1/1.0@user/testing\")\n+\n+ client2 = TestClient(cache_folder=client.cache_folder)\n+ client2.save({\"conanfile.py\": GenConanfile().with_require_plain(\"dep1/1.0@user/testing\")})\n+ client2.run(\"export . dep2/1.0@user/testing\")\n+\n+ client2.save({\"conanfile.py\": GenConanfile().with_require_plain(\"dep2/1.0@user/testing\")})\n+ client2.run('create . consumer/1.0@user/testing --build')\n+ self.assertIn(\"consumer/1.0@user/testing: Created\", client2.out)\n"
}
|
[
{
"diff_hunk": "@@ -404,6 +406,8 @@ def _build(self, nodes_by_level, keep_build, root_node, graph_info, remotes, bui\n self._binaries_analyzer.reevaluate_node(node, remotes, build_mode, update)\n _handle_system_requirements(conan_file, node.pref, self._cache, output)\n self._handle_node_cache(node, keep_build, processed_package_refs, remotes)\n+ # After a node has been managed, better reset its transitive info",
"line": null,
"original_line": 409,
"original_start_line": null,
"path": "conans/client/installer.py",
"start_line": null,
"text": "@user1:\nIs this necessary? Why is better? I'd like to think that the requirements are not modified after they are computed...\n\n@author:\nThe problem is that when upstream dependencies are built, they get a new PREV, and you need to update it in the downstream consumers, even if they haven't been built, otherwise the ``package_id()`` will not have them updated, and it will fail computing the package_id, because an upstream package has still PREV_UNKNOWN, even if it was built."
},
{
"diff_hunk": "@@ -396,6 +396,8 @@ def _build(self, nodes_by_level, keep_build, root_node, graph_info, remotes, bui\n self._propagate_info(node, using_build_profile)\n if node.binary == BINARY_EDITABLE:\n self._handle_node_editable(node, graph_info)\n+ # Need a temporary package revision for package_revision_mode\n+ node.prev = \"0\"",
"line": null,
"original_line": 400,
"original_start_line": null,
"path": "conans/client/installer.py",
"start_line": null,
"text": "@user1:\n`0` is a valid package revision, can we use any other? If we just need a value, `PREV_UNKNOWN`?\r\n\r\nCan we initialize this member in the `node` class definition?\r\n\r\n---\r\n\r\n...but I don't know if that value has a meaning, I can see in the sources somewhere else:\r\n\r\n```python\r\n# It is requested to use, but not defined (binary not build yet)\r\nself.package_revision = self.full_package_revision or PREV_UNKNOWN\r\n```\n\n@author:\n\"0\" is the revision when we have no revisions. It is not a valid one, it is a placeholder to mean that we have no revisions. I thought that for editables it made kind of sense, as we are not revisioning, but yes, I guess a PREV_UNKNOWN could be better."
},
{
"diff_hunk": "@@ -86,6 +86,24 @@ def __init__(self, ref, conanfile, context, recipe=None, path=None):\n self._ancestors = _NodeOrderedDict() # set{ref.name}\n self._id = None # Unique ID (uuid at the moment) of a node in the graph\n self.graph_lock_node = None # the locking information can be None\n+ self.id_direct_prefs = None\n+ self.id_indirect_prefs = None\n+\n+ def package_id_transitive_reqs(self):\n+ \"\"\"\n+ accumulate the direct and transitive requirements prefs necessary to compute the\n+ package_id\n+ :return: set(prefs) of direct deps, set(prefs) of transitive deps\n+ \"\"\"\n+ self.id_direct_prefs = set() # of PackageReference\n+ self.id_indirect_prefs = set() # of PackageReference, avoid duplicates\n+ for neighbor in self.neighbors():\n+ self.id_direct_prefs.add(neighbor.pref)\n+ self.id_indirect_prefs.update(neighbor.id_direct_prefs)\n+ self.id_indirect_prefs.update(neighbor.id_indirect_prefs)\n+ # Make sure not duplicated, totally necessary",
"line": null,
"original_line": 104,
"original_start_line": null,
"path": "conans/client/graph/graph.py",
"start_line": null,
"text": "@user1:\nThe comment should explain why this is \"totally necessary\"\n\n@author:\nThe comment is not really new, it was there. \r\nBut I think that it means that without it, there will be duplicated requirements in the indirect ones, causing bugs in the package_id"
}
] |
898311ff00be6e1e79df7fb06111a752a3f2a262
|
diff --git a/conans/client/build/build.py b/conans/client/build/build.py
index 85311e9e174..df8422dbdae 100644
--- a/conans/client/build/build.py
+++ b/conans/client/build/build.py
@@ -9,7 +9,8 @@
def run_build_method(conanfile, hook_manager, **hook_kwargs):
hook_manager.execute("pre_build", conanfile=conanfile, **hook_kwargs)
- logger.debug("Call conanfile.build() with files in build folder: %s", os.listdir(conanfile.build_folder))
+ logger.debug("Call conanfile.build() with files in build folder: %s",
+ os.listdir(conanfile.build_folder))
with get_env_context_manager(conanfile):
conanfile.output.highlight("Calling build()")
with conanfile_exception_formatter(str(conanfile), "build"):
diff --git a/conans/client/installer.py b/conans/client/installer.py
index ffcaf9d3b42..ddf412dc1d4 100644
--- a/conans/client/installer.py
+++ b/conans/client/installer.py
@@ -396,6 +396,9 @@ def _build(self, nodes_by_level, keep_build, root_node, graph_info, remotes, bui
self._propagate_info(node, using_build_profile)
if node.binary == BINARY_EDITABLE:
self._handle_node_editable(node, graph_info)
+ # Need a temporary package revision for package_revision_mode
+ # Cannot be PREV_UNKNOWN otherwise the consumers can't compute their packageID
+ node.prev = "editable"
else:
if node.binary == BINARY_SKIP: # Privates not necessary
continue
diff --git a/conans/model/info.py b/conans/model/info.py
index 1d549c70516..59bfaf5b186 100644
--- a/conans/model/info.py
+++ b/conans/model/info.py
@@ -189,6 +189,7 @@ def add(self, prefs_indirect, default_package_id_mode):
def refs(self):
""" used for updating downstream requirements with this
"""
+ # FIXME: This is a very bad name, it return prefs, not refs
return list(self._data.keys())
def _get_key(self, item):
diff --git a/conans/model/ref.py b/conans/model/ref.py
index f1bd7312ed1..65ac252a5f4 100644
--- a/conans/model/ref.py
+++ b/conans/model/ref.py
@@ -287,6 +287,13 @@ def __repr__(self):
def __str__(self):
return "%s:%s" % (self.ref, self.id)
+ def __lt__(self, other):
+ # We need this operator to sort prefs to compute the package_id
+ # package_id() -> ConanInfo.package_id() -> RequirementsInfo.sha() -> sorted(prefs) -> lt
+ me = self.ref, self.id, self.revision or ""
+ other = other.ref, other.id, other.revision or ""
+ return me < other
+
def full_str(self):
str_rev = "#%s" % self.revision if self.revision else ""
tmp = "%s:%s%s" % (self.ref.full_str(), self.id, str_rev)
diff --git a/conans/test/functional/package_id/package_id_requires_modes_test.py b/conans/test/functional/package_id/package_id_requires_modes_test.py
index 182f2a0b503..b4fd4db2b00 100644
--- a/conans/test/functional/package_id/package_id_requires_modes_test.py
+++ b/conans/test/functional/package_id/package_id_requires_modes_test.py
@@ -482,10 +482,50 @@ def test_package_id_requires_patch_mode(self):
self.assertIn("""ERROR: Missing binary: libc/0.1.0@user/testing:e12c9d31fa508340bb8d0c4f9dd4c98a5d0ac082
libc/0.1.0@user/testing: WARN: Can't find a 'libc/0.1.0@user/testing' package for the specified settings, options and dependencies:
-- Settings:
+- Settings:%s
- Options: an_option=off, liba:an_option=off, libb:an_option=off, libbar:an_option=off, libfoo:an_option=off
- Dependencies: libb/0.1.0@user/testing, libfoo/0.1.0@user/testing
- Requirements: liba/0.1.0, libb/0.1.0, libbar/0.1.0, libfoo/0.1.0
- Package ID: e12c9d31fa508340bb8d0c4f9dd4c98a5d0ac082
-ERROR: Missing prebuilt package for 'libc/0.1.0@user/testing'""", self.client.out)
+ERROR: Missing prebuilt package for 'libc/0.1.0@user/testing'""" % " ", self.client.out)
+
+
+class PackageIDErrorTest(unittest.TestCase):
+
+ def transitive_multi_mode_package_id_test(self):
+ # https://github.com/conan-io/conan/issues/6942
+ client = TestClient()
+ client.run("config set general.default_package_id_mode=full_package_mode")
+ client.run("config set general.full_transitive_package_id=True")
+ client.save({"conanfile.py": GenConanfile()})
+ client.run("export . dep1/1.0@user/testing")
+ client.save({"conanfile.py": GenConanfile().with_require_plain("dep1/1.0@user/testing")})
+ client.run("export . dep2/1.0@user/testing")
+
+ pkg_revision_mode = "self.info.requires.package_revision_mode()"
+ client.save({"conanfile.py": GenConanfile().with_require_plain("dep1/1.0@user/testing")
+ .with_package_id(pkg_revision_mode)})
+ client.run("export . dep3/1.0@user/testing")
+
+ client.save({"conanfile.py": GenConanfile().with_require_plain("dep2/1.0@user/testing")
+ .with_require_plain("dep3/1.0@user/testing")})
+ client.run('create . consumer/1.0@user/testing --build')
+ self.assertIn("consumer/1.0@user/testing: Created", client.out)
+
+ def package_revision_mode_editable_test(self):
+ # Package revision mode crash when using editables
+ client = TestClient()
+ client.run("config set general.default_package_id_mode=package_revision_mode")
+ client.run("config set general.full_transitive_package_id=True")
+
+ client.save({"conanfile.py": GenConanfile()})
+ client.run("editable add . dep1/1.0@user/testing")
+
+ client2 = TestClient(cache_folder=client.cache_folder)
+ client2.save({"conanfile.py": GenConanfile().with_require_plain("dep1/1.0@user/testing")})
+ client2.run("export . dep2/1.0@user/testing")
+
+ client2.save({"conanfile.py": GenConanfile().with_require_plain("dep2/1.0@user/testing")})
+ client2.run('create . consumer/1.0@user/testing --build')
+ self.assertIn("consumer/1.0@user/testing: Created", client2.out)
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-7051@2851c1f
|
conan-io/conan
|
Python
| 7,051
|
try to fix package_id with package_revision_mode and mixed modes
|
Changelog: Bugfix: Fix a crash while computing the ``package_id`` of a package when different ``package_id_mode`` values are mixed and they include ``package_revision_mode``.
Docs: Omit
Continuation of https://github.com/conan-io/conan/pull/6947
Fix #6942
#tags: slow
cc/ @fulara
|
2020-05-19T14:43:12Z
|
[bug] unable to build packages with package_revision_mode enabled.
Conan crashes after enabling package_revision_mode and trying to rebuild all of our projects.
Stacktrace attached.
### Environment Details (include every applicable attribute)
* Operating System+version: rh7
* Compiler+version: gcc8
* Conan version: 1.24.0
* Python version: 3.6.10
### Steps to reproduce (Include if Applicable)
Have a complex graph tree.
I don't have an MVE at the moment, but I'll try to pinpoint something.
The global default_package_id_mode is set to 'package_revision_mode'.
In some of the recipes we are using the default versioning schema.
In some of the recipes we specify semver_mode for some of their dependencies.
In some of the recipes we specify full_package_mode for some of their dependencies.
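To illustrate the kind of mixed configuration described above, here is a minimal sketch (the package names, versions and exact modes are only placeholders; the real recipes are more complex):
```python
# conan.conf sets general.default_package_id_mode=package_revision_mode, while this
# recipe relaxes the mode for one specific dependency inside package_id().
from conans import ConanFile

class ExampleConan(ConanFile):
    name = "example"  # placeholder name
    version = "1.0"
    requires = "boost/1.72.0", "poco/1.9.4"

    def package_id(self):
        # boost follows semver_mode instead of the global package_revision_mode
        self.info.requires["boost"].semver_mode()
        # poco keeps the global default (package_revision_mode)
```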
### Logs stacktrace.
invoked: conan create . --build missing
```
30-Apr-2020 16:37:19 xyz/abc: Unknown binary for xyz/abc, computing updated ID
30-Apr-2020 16:37:19 Traceback (most recent call last):
30-Apr-2020 16:37:19 File "conan/conans/client/command.py", line 2002, in run
30-Apr-2020 16:37:19 File "conan/conans/client/command.py", line 369, in create
30-Apr-2020 16:37:19 File "conan/conans/client/conan_api.py", line 89, in wrapper
30-Apr-2020 16:37:19 File "conan/conans/client/conan_api.py", line 368, in create
30-Apr-2020 16:37:19 File "conan/conans/client/cmd/create.py", line 57, in create
30-Apr-2020 16:37:19 File "conan/conans/client/manager.py", line 75, in deps_install
30-Apr-2020 16:37:19 File "conan/conans/client/installer.py", line 309, in install
30-Apr-2020 16:37:19 File "conan/conans/client/installer.py", line 404, in _build
30-Apr-2020 16:37:19 File "conan/conans/client/graph/graph_binaries.py", line 347, in reevaluate_node
30-Apr-2020 16:37:19 File "conan/conans/client/graph/graph_binaries.py", line 319, in _compute_package_id
30-Apr-2020 16:37:19 File "conan/conans/model/info.py", line 540, in package_id
30-Apr-2020 16:37:19 File "conan/conans/model/info.py", line 216, in sha
30-Apr-2020 16:37:19 TypeError: '<' not supported between instances of 'NoneType' and 'str'
```
|
Checking the trace, I cannot see how a None gets there. Please keep us posted if you can reproduce it; if not, I will try to provide a branch with some traces so you can run it in your environment.
@memsharded it's 100% reproducible for me, I just have to wait for a full build (we only upload at the end of the build), so it takes ~25 minutes to build.
Just checked - same stacktrace; if you want me to drop something - no problem.
Actually, I was thinking that I will have to postpone this migration.
Hi @fulara
Quick question: do you have ``full_transitive_package_id`` enabled in your conan.conf configuration?
Also, adding these two lines might help isolate the origin of the bug:
```patch
diff --git a/conans/client/graph/graph_binaries.py b/conans/client/graph/graph_binaries.py
index 16de60e89..e0729103e 100644
--- a/conans/client/graph/graph_binaries.py
+++ b/conans/client/graph/graph_binaries.py
@@ -294,6 +294,9 @@ class GraphBinariesAnalyzer(object):
direct_reqs = node.id_direct_prefs
indirect_reqs = node.id_indirect_prefs
+ assert None not in direct_reqs, "None found in direct_reqs"
+ assert None not in indirect_reqs, "None found in indirect_reqs"
+
python_requires = getattr(conanfile, "python_requires", None)
if python_requires:
if isinstance(python_requires, dict):
```
@memsharded unless I made a mistake (I don't think I have), this assert didn't trigger.
Yes, of course I am using `full_transitive_package_id`; after all, it wouldn't be fair if I hadn't? :)
https://github.com/fulara/conan/tree/package_revision_mode
EDIT:
I added one more silly print:
```
print("PRINTING NOW STUFF! \n")
for key, value in self._data.items():
print("KEY IS" + str(key) + " value is: " + str(value) + + " dumps: " + value.dumps() + " \n ")
```
The result I got just before failing is:
```
PRINTING NOW STUFF!
KEY ISboost/1.72.0:634ce480c172f753ce13327c4f9f6d3a5eabcc32 value is: <conans.model.info.RequirementInfo object at 0x7f5a07491f60> dumps: boost/1.72.0:634ce480c172f753ce13327c4f9f6d3a5eabcc32
KEY ISrfa-server-lib/4.13:4e6ce351510b1e0711182fec674aed81b210cd32 value is: <conans.model.info.RequirementInfo object at 0x7f5a07491ba8> dumps: rfa-server-lib/4.Y.Z
KEY ISrfa-convert-lib/2.5:d62d48a0c0e3e9eced23f5c6a0926139d0ff8478 value is: <conans.model.info.RequirementInfo object at 0x7f5a07d0ceb8> dumps: rfa-convert-lib/2.Y.Z
KEY ISigcounters/1.3:ddb66222e853666802a54adae5ac3e9befedc54b value is: <conans.model.info.RequirementInfo object at 0x7f5a07d0c6a0> dumps: igcounters/1.Y.Z
KEY ISpoco/1.9.4:ed4013c58aa4bd9377abede5e8e1db2513b6c7d0 value is: <conans.model.info.RequirementInfo object at 0x7f5a03b8ae80> dumps: poco/1.9.4:ed4013c58aa4bd9377abede5e8e1db2513b6c7d0
KEY ISdisruptor/2.4:55d8a52d22c8588a7455fc66d9546a9464da6adb value is: <conans.model.info.RequirementInfo object at 0x7f5a03b8a550> dumps: disruptor/2.Y.Z
KEY ISopenssl/1.0.2t:e54af4a8e0cd6901bb01dd9c8925a8859b6246b2 value is: <conans.model.info.RequirementInfo object at 0x7f5a03b8aa90> dumps: openssl/1.0.2t:e54af4a8e0cd6901bb01dd9c8925a8859b6246b2
KEY ISlibcli/1.9.7:f10ff948ad9a5779ed97e7f1c2f2e4c8cd675372 value is: <conans.model.info.RequirementInfo object at 0x7f5a03b8aeb8> dumps: libcli/1.Y.Z
KEY ISbzip2/1.0.8:0d28bbf593474851d2bc7d2ef0a546fcdc0233fe value is: <conans.model.info.RequirementInfo object at 0x7f5a03b8af60> dumps: bzip2/1.0.8:0d28bbf593474851d2bc7d2ef0a546fcdc0233fe
KEY ISrfa/8.0.1.E1:dc7e592c7c92901df7003480a162da6bd8500cbd value is: <conans.model.info.RequirementInfo object at 0x7f5a03b8a048> dumps: rfa/8.0.1.E1:dc7e592c7c92901df7003480a162da6bd8500cbd
KEY ISzlib/1.2.11:7d0f19a52d7be613ac3eb2f1ea1b8cc359e0bfe0 value is: <conans.model.info.RequirementInfo object at 0x7f5a03b8a1d0> dumps: zlib/1.2.11:7d0f19a52d7be613ac3eb2f1ea1b8cc359e0bfe0
KEY ISboost/1.72.0:634ce480c172f753ce13327c4f9f6d3a5eabcc32 value is: <conans.model.info.RequirementInfo object at 0x7f5a03b8af28> dumps: boost/1.72.0#3c1a4170f35bcad9109cb8bc720d56d1:634ce480c172f753ce13327c4f9f6d3a5eabcc32#PREV unknown
KEY ISfin-pricing-utils/2.3:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 value is: <conans.model.info.RequirementInfo object at 0x7f5a03b8a0b8> dumps: fin-pricing-utils/2.Y.Z
```
So it's some member variable of RequirementInfo that's null.
@memsharded looking at the stacktrace, I think it's because one of the 'boost' entries is depended on with package_recipe_mode and the other one with full_package_mode (which is a valid scenario), and you are missing a None check somewhere when comparing the pprev or rrev.
> Yes, of course I am using full_transitive_package_id; after all, it wouldn't be fair if I hadn't? :)
Well, I wouldn't blame you, sometimes it takes some time to migrate things :)
I don't think the mode is involved here, because the key should always be the full ``PackageReference``. It is more likely that some of those keys of type PackageReference (which is a namedtuple) contain a None in one of their fields. Could you print this instead:
```python
print("PRINTING NOW STUFF! \n")
for key, value in self._data.items():
print("KEY IS", key.ref.name, key.ref.version, key.ref.user, key.ref.channel, key.ref.revision,
key.id, key.revision, value.dumps() )
```
That should give us all the fields, and we could identify the offending None (I still cannot figure out why a None is coming from there).
Here you go, the code is:
```
for key, value in self._data.items():
print("ref.name: ", key.ref.name, " ref.version ", key.ref.version, " ref.user ", key.ref.user, " ref.channel ", key.ref.channel, " ref.revision ", key.ref.revision, " id ",
key.id, " revision ", key.revision, " dumps ", value.dumps(), "\n" )
```
```
PRINTING NOW STUFF!
ref.name: rfa-convert-lib ref.version 2.5 ref.user None ref.channel None ref.revision 58f27ceb315f99e273c5ecadf75fcad3 id 3fea224fa96ea0aee3fd04b66788247769ce5a88 revision bee3e3c900c64414ab91cb812fb9a8f7 dumps rfa-convert-lib/2.Y.Z
ref.name: poco ref.version 1.9.4 ref.user None ref.channel None ref.revision 42c1907520edbda9833bec9ee7b22c5f id e8c0afb6bd27a5c10d28c2b8222e300f4f94381d revision bd15d0f9475a3207ca1f4611a110ef56 dumps poco/1.9.4:e8c0afb6bd27a5c10d28c2b8222e300f4f94381d
ref.name: rfa-server-lib ref.version 4.13 ref.user None ref.channel None ref.revision 04fe1a0bb78e44568b6052b7323168a9 id 49323f7cb1e8c34a8222d4bb4b82d19fa2bafdaa revision ec125f5a77a6bd3a8e628d276691608d dumps rfa-server-lib/4.Y.Z
ref.name: boost ref.version 1.72.0 ref.user None ref.channel None ref.revision 3c1a4170f35bcad9109cb8bc720d56d1 id 3f890e8db573d1fea921ff792e1d7a3e17718ab8 revision 33089a35430b60e799c76bb6b7d1a043 dumps boost/1.72.0:3f890e8db573d1fea921ff792e1d7a3e17718ab8
ref.name: igcounters ref.version 1.3 ref.user None ref.channel None ref.revision eee56f3742689bd079b9b246073cb700 id 75e3e5f7724fc192123090cbe892c40afa841ea3 revision adcafc3c3df160389b226cb3380d5c1c dumps igcounters/1.Y.Z
ref.name: fin-pricing-utils ref.version 2.3 ref.user None ref.channel None ref.revision f6f35138752d691fd1aac8b0114479ad id 5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 revision 262873877900a3059e1d60de48b37d1c dumps fin-pricing-utils/2.Y.Z
ref.name: openssl ref.version 1.0.2t ref.user None ref.channel None ref.revision 98b72be1284eb54893f45b1f043ecead id 135f0560b485ffdf5b9f993d63f2aaecc8fb281e revision df8c72fcd905d9c2885adefdfc0cc2d9 dumps openssl/1.0.2t:135f0560b485ffdf5b9f993d63f2aaecc8fb281e
ref.name: libcli ref.version 1.9.7 ref.user None ref.channel None ref.revision 990f469c93b05f229ff6d711cca6cda5 id 4a09d987d91684aecef7218066b1c8edb559d34e revision 72640db512697f14b8cc1f13423ca80b dumps libcli/1.Y.Z
ref.name: boost ref.version 1.72.0 ref.user None ref.channel None ref.revision 3c1a4170f35bcad9109cb8bc720d56d1 id 3f890e8db573d1fea921ff792e1d7a3e17718ab8 revision None dumps boost/1.72.0#3c1a4170f35bcad9109cb8bc720d56d1:3f890e8db573d1fea921ff792e1d7a3e17718ab8#PREV unknown
ref.name: bzip2 ref.version 1.0.8 ref.user None ref.channel None ref.revision ad6efb7d25adcbde4984125a43434af2 id 76a4a7324a2083cb6964ea8321da67a1ceb31b50 revision 7be9a190f5446bbb47422a8558050bdd dumps bzip2/1.0.8:76a4a7324a2083cb6964ea8321da67a1ceb31b50
ref.name: rfa ref.version 8.0.1.E1 ref.user None ref.channel None ref.revision a7f8a61e064b5d8ee8d46fa5b5219a1d id 2f48c782b34dfcb01aab706713eda393eed4a638 revision 45463dd5c2c58f243ae6a1f4f2c94986 dumps rfa/8.0.1.E1:2f48c782b34dfcb01aab706713eda393eed4a638
ref.name: disruptor ref.version 2.4 ref.user None ref.channel None ref.revision c11f9a2b83faf9ea36d5a829237cb247 id d4eab5eaba08a2639f6f809e50b7e7a7606d8829 revision 1fd0fade0d84bb48d3e62c384b9e3d34 dumps disruptor/2.Y.Z
ref.name: zlib ref.version 1.2.11 ref.user None ref.channel None ref.revision ddccdddea098293f5202c5e8eb29967b id 9fdb4217a0bb5bac441d7e17705c9172eeeb6cfe revision 0bf6e52e8a2a2bfaeca54f6c72adcc51 dumps zlib/1.2.11:9fdb4217a0bb5bac441d7e17705c9172eeeb6cfe
```
OK, I'm starting to see where it comes from. Working on a fix.
Trying to reproduce with a test first. I guess you are using ``private`` dependencies somewhere in the graph, aren't you?
Nope @memsharded, we don't use that kind of magic.
|
[
{
"body": "Conan crashing after enabled package_revision_mode and tried to rebuild all of our projects.\r\nstacktrace attached.\r\n\r\n### Environment Details (include every applicable attribute)\r\n * Operating System+version: rh7\r\n * Compiler+version: gcc8\r\n * Conan version: 1.24.0\r\n * Python version: 3.6.10\r\n\r\n### Steps to reproduce (Include if Applicable)\r\nHave a complex graph tree.\r\nI dont have mve at the moment, but i'll try to pinpoint something.\r\n\r\nGlobal package_mode would be set to 'package_revision_mode'.\r\nIn some of the recipes we are using default versioning_schema.\r\nIn some of the recipes we are specifying use semver_mode for some of its dependencies.\r\nIn some of the recipes we are specifying use of full_package_mode for some of its dependencies.\r\n\r\n### Logs stacktrace.\r\ninvoked: conan create . --build missing\r\n```\r\n30-Apr-2020 16:37:19 xyz/abc: Unknown binary for xyz/abc, computing updated ID\r\n30-Apr-2020 16:37:19 Traceback (most recent call last):\r\n30-Apr-2020 16:37:19 File \"conan/conans/client/command.py\", line 2002, in run\r\n30-Apr-2020 16:37:19 File \"conan/conans/client/command.py\", line 369, in create\r\n30-Apr-2020 16:37:19 File \"conan/conans/client/conan_api.py\", line 89, in wrapper\r\n30-Apr-2020 16:37:19 File \"conan/conans/client/conan_api.py\", line 368, in create\r\n30-Apr-2020 16:37:19 File \"conan/conans/client/cmd/create.py\", line 57, in create\r\n30-Apr-2020 16:37:19 File \"conan/conans/client/manager.py\", line 75, in deps_install\r\n30-Apr-2020 16:37:19 File \"conan/conans/client/installer.py\", line 309, in install\r\n30-Apr-2020 16:37:19 File \"conan/conans/client/installer.py\", line 404, in _build\r\n30-Apr-2020 16:37:19 File \"conan/conans/client/graph/graph_binaries.py\", line 347, in reevaluate_node\r\n30-Apr-2020 16:37:19 File \"conan/conans/client/graph/graph_binaries.py\", line 319, in _compute_package_id\r\n30-Apr-2020 16:37:19 File \"conan/conans/model/info.py\", line 540, in package_id\r\n30-Apr-2020 16:37:19 File \"conan/conans/model/info.py\", line 216, in sha\r\n30-Apr-2020 16:37:19 TypeError: '<' not supported between instances of 'NoneType' and 'str'\r\n```\r\n",
"number": 6942,
"title": "[bug] unable to build packages with package_revision_mode enabled."
}
] |
56d1bbd05546733dcb927354dad742043a8219f7
|
{
"head_commit": "2851c1f91e2204abbd63aca63179de503f748abc",
"head_commit_message": "moved inside propagate_info()",
"patch_to_review": "diff --git a/conans/client/graph/graph.py b/conans/client/graph/graph.py\nindex f4f8ccaf391..9e2afdfb46c 100644\n--- a/conans/client/graph/graph.py\n+++ b/conans/client/graph/graph.py\n@@ -86,6 +86,24 @@ def __init__(self, ref, conanfile, context, recipe=None, path=None):\n self._ancestors = _NodeOrderedDict() # set{ref.name}\n self._id = None # Unique ID (uuid at the moment) of a node in the graph\n self.graph_lock_node = None # the locking information can be None\n+ self.id_direct_prefs = None\n+ self.id_indirect_prefs = None\n+\n+ def package_id_transitive_reqs(self):\n+ \"\"\"\n+ accumulate the direct and transitive requirements prefs necessary to compute the\n+ package_id\n+ :return: set(prefs) of direct deps, set(prefs) of transitive deps\n+ \"\"\"\n+ self.id_direct_prefs = set() # of PackageReference\n+ self.id_indirect_prefs = set() # of PackageReference, avoid duplicates\n+ for neighbor in self.neighbors():\n+ self.id_direct_prefs.add(neighbor.pref)\n+ self.id_indirect_prefs.update(neighbor.id_direct_prefs)\n+ self.id_indirect_prefs.update(neighbor.id_indirect_prefs)\n+ # Make sure not duplicated, totally necessary\n+ self.id_indirect_prefs.difference_update(self.id_direct_prefs)\n+ return self.id_direct_prefs, self.id_indirect_prefs\n \n @property\n def id(self):\ndiff --git a/conans/client/graph/graph_binaries.py b/conans/client/graph/graph_binaries.py\nindex 16de60e89af..ef33c8d918d 100644\n--- a/conans/client/graph/graph_binaries.py\n+++ b/conans/client/graph/graph_binaries.py\n@@ -283,16 +283,7 @@ def _compute_package_id(self, node, default_package_id_mode, default_python_requ\n # Make sure not duplicated\n indirect_reqs.difference_update(direct_reqs)\n else:\n- node.id_direct_prefs = set() # of PackageReference\n- node.id_indirect_prefs = set() # of PackageReference, avoid duplicates\n- for neighbor in neighbors:\n- node.id_direct_prefs.add(neighbor.pref)\n- node.id_indirect_prefs.update(neighbor.id_direct_prefs)\n- node.id_indirect_prefs.update(neighbor.id_indirect_prefs)\n- # Make sure not duplicated, totally necessary\n- node.id_indirect_prefs.difference_update(node.id_direct_prefs)\n- direct_reqs = node.id_direct_prefs\n- indirect_reqs = node.id_indirect_prefs\n+ direct_reqs, indirect_reqs = node.package_id_transitive_reqs()\n \n python_requires = getattr(conanfile, \"python_requires\", None)\n if python_requires:\ndiff --git a/conans/client/installer.py b/conans/client/installer.py\nindex 4772b5a79ed..a1ae3f6386a 100644\n--- a/conans/client/installer.py\n+++ b/conans/client/installer.py\n@@ -395,13 +395,14 @@ def _build(self, nodes_by_level, keep_build, root_node, graph_info, remotes, bui\n self._raise_missing(missing)\n processed_package_refs = set()\n self._download(downloads, processed_package_refs)\n+ fix_package_id = self._cache.config.full_transitive_package_id\n \n for level in nodes_by_level:\n for node in level:\n ref, conan_file = node.ref, node.conanfile\n output = conan_file.output\n \n- self._propagate_info(node, using_build_profile)\n+ self._propagate_info(node, using_build_profile, fix_package_id)\n if node.binary == BINARY_EDITABLE:\n self._handle_node_editable(node, graph_info)\n # Need a temporary package revision for package_revision_mode\n@@ -417,7 +418,7 @@ def _build(self, nodes_by_level, keep_build, root_node, graph_info, remotes, bui\n self._handle_node_cache(node, keep_build, processed_package_refs, remotes)\n \n # Finally, propagate information to root node (ref=None)\n- self._propagate_info(root_node, 
using_build_profile)\n+ self._propagate_info(root_node, using_build_profile, fix_package_id)\n \n def _handle_node_editable(self, node, graph_info):\n # Get source of information\n@@ -507,7 +508,13 @@ def _build_package(self, node, output, keep_build, remotes):\n return pref\n \n @staticmethod\n- def _propagate_info(node, using_build_profile):\n+ def _propagate_info(node, using_build_profile, fixed_package_id):\n+ if fixed_package_id:\n+ # if using config.full_transitive_package_id, it is necessary to recompute\n+ # the node transitive information necessary to compute the package_id\n+ # as it will be used by reevaluate_node() when package_revision_mode is used and\n+ # PACKAGE_ID_UNKNOWN happens due to unknown revisions\n+ node.package_id_transitive_reqs()\n # Get deps_cpp_info from upstream nodes\n node_order = [n for n in node.public_closure if n.binary != BINARY_SKIP]\n # List sort is stable, will keep the original order of the closure, but prioritize levels\ndiff --git a/conans/test/functional/package_id/package_id_requires_modes_test.py b/conans/test/functional/package_id/package_id_requires_modes_test.py\nindex b4fd4db2b00..e722086e75c 100644\n--- a/conans/test/functional/package_id/package_id_requires_modes_test.py\n+++ b/conans/test/functional/package_id/package_id_requires_modes_test.py\n@@ -513,6 +513,25 @@ def transitive_multi_mode_package_id_test(self):\n client.run('create . consumer/1.0@user/testing --build')\n self.assertIn(\"consumer/1.0@user/testing: Created\", client.out)\n \n+ def transitive_multi_mode2_package_id_test(self):\n+ # https://github.com/conan-io/conan/issues/6942\n+ client = TestClient()\n+ client.run(\"config set general.default_package_id_mode=package_revision_mode\")\n+ # This is mandatory, otherwise it doesn't work\n+ client.run(\"config set general.full_transitive_package_id=True\")\n+\n+ client.save({\"conanfile.py\": GenConanfile()})\n+ client.run(\"export . dep1/1.0@user/testing\")\n+\n+ pkg_revision_mode = \"self.info.requires.full_version_mode()\"\n+ client.save({\"conanfile.py\": GenConanfile().with_require_plain(\"dep1/1.0@user/testing\")\n+ .with_package_id(pkg_revision_mode)})\n+ client.run(\"export . dep2/1.0@user/testing\")\n+\n+ client.save({\"conanfile.py\": GenConanfile().with_require_plain(\"dep2/1.0@user/testing\")})\n+ client.run('create . consumer/1.0@user/testing --build')\n+ self.assertIn(\"consumer/1.0@user/testing: Created\", client.out)\n+\n def package_revision_mode_editable_test(self):\n # Package revision mode crash when using editables\n client = TestClient()\n"
}
|
[
{
"diff_hunk": "@@ -507,7 +508,13 @@ def _build_package(self, node, output, keep_build, remotes):\n return pref\n \n @staticmethod\n- def _propagate_info(node, using_build_profile):\n+ def _propagate_info(node, using_build_profile, fixed_package_id):\n+ if fixed_package_id:\n+ # if using config.full_transitive_package_id, it is necessary to recompute\n+ # the node transitive information necessary to compute the package_id\n+ # as it will be used by reevaluate_node() when package_revision_mode is used and\n+ # PACKAGE_ID_UNKNOWN happens due to unknown revisions\n+ node.package_id_transitive_reqs()",
"line": null,
"original_line": 517,
"original_start_line": null,
"path": "conans/client/installer.py",
"start_line": null,
"text": "@user1:\nIs there a reason to do this only when `full_transitive_package_id` is activated?\n\n@user1:\n`reevaluate_node` will run `_compute_package_id` that calls this same function. Why is it needed to update this information before doing those calls?\n\n@author:\nBecause ``reevaluate_node`` only runs for those nodes that has PACKAGE_ID_UNKNOWN, but the information that needs to be re-computed after being built are the one of the dependencies of the node with PACKAGE_ID_UNKNOWN.\r\n\r\nOnly ``full_transitive_package_id`` is causing the issue in this case, because if not defined, a different computation is done, based on ``conanfile.requires`` (which is updated).\r\n\r\nI think the root cause of this is the immutability of the reference objects, that produces that there are several copies of ConanFileReference and PackageReference. I think for the future we should aim for a central definition of these for each node, in a way that an update to them automatically is seen by everyone else, because they contain a reference to this object.\n\n@user1:\n> Because `reevaluate_node` only runs for those nodes that has PACKAGE_ID_UNKNOWN, but the information that needs to be re-computed after being built are the one of the dependencies of the node with PACKAGE_ID_UNKNOWN.\r\n\r\nHere you mean, the CONSUMERS of the node with `PACKAGE_ID_UNKNOWN`, right?\r\n\r\n----\r\n\r\n> Only full_transitive_package_id is causing the issue in this case, because if not defined, a different computation is done, based on conanfile.requires (which is updated).\r\n\r\nHaving a quick look to `conanfile.requires`, it is built using the _direct_ and the _indirect_ prefs too, so maybe we can simplify all of this a lot moving logic into ConanInfo (not this PR). \n\n@author:\nPkgA->PkgB->PkgC\r\n\r\nPkgA has PACKAGE_ID_UNKNOWN, because we still need to build PkgC or PkgB and awe are in ``package_revision_mode``\r\n\r\n``reevaluate_node`` will only be executed for ``PkgA``. But the computation of transitive updated ``PackageReferences`` need to be computed for both ``PkgB`` and ``PkgC``.\r\n\r\nDoes this clarify the issue a bit?\n\n@user1:\nOk, we are running them to populate `Node::id_direct_prefs` and `Node::id_indirect_prefs` with the new values (are we doing this twice for `pkgC`?)\n\n@author:\nYes, it is being called twice. That is the key of the issue, we need to run this 2 times:\r\n- One to compute the first package_id, while analyzing the graph binaries\r\n- A second one, while installing the binaries, to update for possible changes upstream that will be used in reevaluate_node to compute the package_id of consumers with PACKAGE_ID_UNKNOWN.\r\n\r\nIt seems it would be very challenging to skip running this for leaf nodes that wouldn't need to run this."
},
{
"diff_hunk": "@@ -283,16 +283,7 @@ def _compute_package_id(self, node, default_package_id_mode, default_python_requ\n # Make sure not duplicated\n indirect_reqs.difference_update(direct_reqs)\n else:\n- node.id_direct_prefs = set() # of PackageReference\n- node.id_indirect_prefs = set() # of PackageReference, avoid duplicates\n- for neighbor in neighbors:\n- node.id_direct_prefs.add(neighbor.pref)\n- node.id_indirect_prefs.update(neighbor.id_direct_prefs)\n- node.id_indirect_prefs.update(neighbor.id_indirect_prefs)\n- # Make sure not duplicated, totally necessary\n- node.id_indirect_prefs.difference_update(node.id_direct_prefs)\n- direct_reqs = node.id_direct_prefs\n- indirect_reqs = node.id_indirect_prefs\n+ direct_reqs, indirect_reqs = node.package_id_transitive_reqs()",
"line": null,
"original_line": 286,
"original_start_line": null,
"path": "conans/client/graph/graph_binaries.py",
"start_line": null,
"text": "@user1:\nProbably if we are moving only one branch this function doesn't belong to `node`, but to `self`, or we should move both branches.\n\n@author:\nI put it in the ``node`` because now it needs to be called from 2 different locations: ``graph_binaries.py`` and ``installer.py``. I could leave it here in the ``graph_binaries.py`` as a public static method that receives a single ``node`` as argument, no problem."
}
] |
ff5d8bba17f83ae12258de9f9ca1807785342147
|
diff --git a/conans/client/graph/graph.py b/conans/client/graph/graph.py
index f4f8ccaf391..1308eb963ec 100644
--- a/conans/client/graph/graph.py
+++ b/conans/client/graph/graph.py
@@ -86,6 +86,8 @@ def __init__(self, ref, conanfile, context, recipe=None, path=None):
self._ancestors = _NodeOrderedDict() # set{ref.name}
self._id = None # Unique ID (uuid at the moment) of a node in the graph
self.graph_lock_node = None # the locking information can be None
+ self.id_direct_prefs = None
+ self.id_indirect_prefs = None
@property
def id(self):
diff --git a/conans/client/graph/graph_binaries.py b/conans/client/graph/graph_binaries.py
index 16de60e89af..3c915c2ac4b 100644
--- a/conans/client/graph/graph_binaries.py
+++ b/conans/client/graph/graph_binaries.py
@@ -263,6 +263,24 @@ def _propagate_options(node):
conanfile.options.clear_unused(transitive_reqs)
conanfile.options.freeze()
+ @staticmethod
+ def package_id_transitive_reqs(node):
+ """
+ accumulate the direct and transitive requirements prefs necessary to compute the
+ package_id
+ :return: set(prefs) of direct deps, set(prefs) of transitive deps
+ """
+ node.id_direct_prefs = set() # of PackageReference
+ node.id_indirect_prefs = set() # of PackageReference, avoid duplicates
+ neighbors = [d.dst for d in node.dependencies if not d.build_require]
+ for neighbor in neighbors:
+ node.id_direct_prefs.add(neighbor.pref)
+ node.id_indirect_prefs.update(neighbor.id_direct_prefs)
+ node.id_indirect_prefs.update(neighbor.id_indirect_prefs)
+ # Make sure not duplicated, totally necessary
+ node.id_indirect_prefs.difference_update(node.id_direct_prefs)
+ return node.id_direct_prefs, node.id_indirect_prefs
+
def _compute_package_id(self, node, default_package_id_mode, default_python_requires_id_mode):
"""
Compute the binary package ID of this node
@@ -283,16 +301,7 @@ def _compute_package_id(self, node, default_package_id_mode, default_python_requ
# Make sure not duplicated
indirect_reqs.difference_update(direct_reqs)
else:
- node.id_direct_prefs = set() # of PackageReference
- node.id_indirect_prefs = set() # of PackageReference, avoid duplicates
- for neighbor in neighbors:
- node.id_direct_prefs.add(neighbor.pref)
- node.id_indirect_prefs.update(neighbor.id_direct_prefs)
- node.id_indirect_prefs.update(neighbor.id_indirect_prefs)
- # Make sure not duplicated, totally necessary
- node.id_indirect_prefs.difference_update(node.id_direct_prefs)
- direct_reqs = node.id_direct_prefs
- indirect_reqs = node.id_indirect_prefs
+ direct_reqs, indirect_reqs = self.package_id_transitive_reqs(node)
python_requires = getattr(conanfile, "python_requires", None)
if python_requires:
diff --git a/conans/client/installer.py b/conans/client/installer.py
index 4772b5a79ed..5469d0afa42 100644
--- a/conans/client/installer.py
+++ b/conans/client/installer.py
@@ -395,13 +395,14 @@ def _build(self, nodes_by_level, keep_build, root_node, graph_info, remotes, bui
self._raise_missing(missing)
processed_package_refs = set()
self._download(downloads, processed_package_refs)
+ fix_package_id = self._cache.config.full_transitive_package_id
for level in nodes_by_level:
for node in level:
ref, conan_file = node.ref, node.conanfile
output = conan_file.output
- self._propagate_info(node, using_build_profile)
+ self._propagate_info(node, using_build_profile, fix_package_id)
if node.binary == BINARY_EDITABLE:
self._handle_node_editable(node, graph_info)
# Need a temporary package revision for package_revision_mode
@@ -417,7 +418,7 @@ def _build(self, nodes_by_level, keep_build, root_node, graph_info, remotes, bui
self._handle_node_cache(node, keep_build, processed_package_refs, remotes)
# Finally, propagate information to root node (ref=None)
- self._propagate_info(root_node, using_build_profile)
+ self._propagate_info(root_node, using_build_profile, fix_package_id)
def _handle_node_editable(self, node, graph_info):
# Get source of information
@@ -506,13 +507,19 @@ def _build_package(self, node, output, keep_build, remotes):
node.graph_lock_node.modified = GraphLockNode.MODIFIED_BUILT
return pref
- @staticmethod
- def _propagate_info(node, using_build_profile):
+ def _propagate_info(self, node, using_build_profile, fixed_package_id):
+ if fixed_package_id:
+ # if using config.full_transitive_package_id, it is necessary to recompute
+ # the node transitive information necessary to compute the package_id
+ # as it will be used by reevaluate_node() when package_revision_mode is used and
+ # PACKAGE_ID_UNKNOWN happens due to unknown revisions
+ self._binaries_analyzer.package_id_transitive_reqs(node)
# Get deps_cpp_info from upstream nodes
node_order = [n for n in node.public_closure if n.binary != BINARY_SKIP]
# List sort is stable, will keep the original order of the closure, but prioritize levels
conan_file = node.conanfile
- conan_file._conan_using_build_profile = using_build_profile # FIXME: Not the best place to assign it
+ # FIXME: Not the best place to assign the _conan_using_build_profile
+ conan_file._conan_using_build_profile = using_build_profile
transitive = [it for it in node.transitive_closure.values()]
br_host = []
diff --git a/conans/test/functional/package_id/package_id_requires_modes_test.py b/conans/test/functional/package_id/package_id_requires_modes_test.py
index b4fd4db2b00..bb6e0959b6d 100644
--- a/conans/test/functional/package_id/package_id_requires_modes_test.py
+++ b/conans/test/functional/package_id/package_id_requires_modes_test.py
@@ -1,4 +1,5 @@
import os
+import textwrap
import unittest
from conans.model.info import ConanInfo
@@ -513,6 +514,68 @@ def transitive_multi_mode_package_id_test(self):
client.run('create . consumer/1.0@user/testing --build')
self.assertIn("consumer/1.0@user/testing: Created", client.out)
+ def transitive_multi_mode2_package_id_test(self):
+ # https://github.com/conan-io/conan/issues/6942
+ client = TestClient()
+ client.run("config set general.default_package_id_mode=package_revision_mode")
+ # This is mandatory, otherwise it doesn't work
+ client.run("config set general.full_transitive_package_id=True")
+
+ client.save({"conanfile.py": GenConanfile()})
+ client.run("export . dep1/1.0@user/testing")
+
+ pkg_revision_mode = "self.info.requires.full_version_mode()"
+ package_id_print = "self.output.info('PkgNames: %s' % sorted(self.info.requires.pkg_names))"
+ client.save({"conanfile.py": GenConanfile().with_require_plain("dep1/1.0@user/testing")
+ .with_package_id(pkg_revision_mode)
+ .with_package_id(package_id_print)})
+ client.run("export . dep2/1.0@user/testing")
+
+ consumer = textwrap.dedent("""
+ from conans import ConanFile
+ class Consumer(ConanFile):
+ requires = "dep2/1.0@user/testing"
+ def package_id(self):
+ self.output.info("PKGNAMES: %s" % sorted(self.info.requires.pkg_names))
+ """)
+ client.save({"conanfile.py": consumer})
+ client.run('create . consumer/1.0@user/testing --build')
+ self.assertIn("dep2/1.0@user/testing: PkgNames: ['dep1']", client.out)
+ self.assertIn("consumer/1.0@user/testing: PKGNAMES: ['dep1', 'dep2']", client.out)
+ self.assertIn("consumer/1.0@user/testing: Created", client.out)
+
+ def transitive_multi_mode_build_requires_test(self):
+ # https://github.com/conan-io/conan/issues/6942
+ client = TestClient()
+ client.run("config set general.default_package_id_mode=package_revision_mode")
+ client.run("config set general.full_transitive_package_id=True")
+
+ client.save({"conanfile.py": GenConanfile()})
+ client.run("export . dep1/1.0@user/testing")
+ client.run("create . tool/1.0@user/testing")
+
+ pkg_revision_mode = "self.info.requires.full_version_mode()"
+ package_id_print = "self.output.info('PkgNames: %s' % sorted(self.info.requires.pkg_names))"
+ client.save({"conanfile.py": GenConanfile().with_require_plain("dep1/1.0@user/testing")
+ .with_build_require_plain("tool/1.0@user/testing")
+ .with_package_id(pkg_revision_mode)
+ .with_package_id(package_id_print)})
+ client.run("export . dep2/1.0@user/testing")
+
+ consumer = textwrap.dedent("""
+ from conans import ConanFile
+ class Consumer(ConanFile):
+ requires = "dep2/1.0@user/testing"
+ build_requires = "tool/1.0@user/testing"
+ def package_id(self):
+ self.output.info("PKGNAMES: %s" % sorted(self.info.requires.pkg_names))
+ """)
+ client.save({"conanfile.py": consumer})
+ client.run('create . consumer/1.0@user/testing --build')
+ self.assertIn("dep2/1.0@user/testing: PkgNames: ['dep1']", client.out)
+ self.assertIn("consumer/1.0@user/testing: PKGNAMES: ['dep1', 'dep2']", client.out)
+ self.assertIn("consumer/1.0@user/testing: Created", client.out)
+
def package_revision_mode_editable_test(self):
# Package revision mode crash when using editables
client = TestClient()
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-6451@a17f880
|
conan-io/conan
|
Python
| 6,451
|
explore if this fixes header-only package-id
|
Changelog: Feature: Implement a new package-ID computation that includes transitive dependencies even when the direct dependencies have removed them, for example when depending on a header-only library that depends on a static library.
Docs: https://github.com/conan-io/docs/pull/1575
Close https://github.com/conan-io/conan/issues/6450
#tags: slow
#revisions: 1
|
2020-01-31T00:18:22Z
|
[bug] conan is too lenient in its specification of header_only libraries.
~~Note: the title seems to indicate that this affects only header_only, but that's not the case.~~
To make matters clear, I am using:
`default_package_id_mode = full_version_mode` for this example.
Consider four projects:
```
#liba:
class Conan(ConanFile):
    name = "liba"
    pass
#libb:
class Conan(ConanFile):
    name = "libb"
    requires = "liba/1.0.0"

    def package_id(self):
        self.info.header_only()

#libc:
class Conan(ConanFile):
    name = "libc"
    requires = "libb/1.0.0"
#libd:
class Conan(ConanFile):
    name = "libd"
    requires = "libc/1.0.0", "liba/1.0.0"
# build initial versions of libs:
cd liba
conan create . 1.0.0@_/_
conan create . 2.0.0@_/_
cd ../libb
conan create . 1.0.0@_/_
conan create . 2.0.0@_/_
cd ../libc
conan create . 1.0.0@_/_
conan create . 2.0.0@_/_
cd ../libd
conan create . 1.0.0@_/_
```
Okay, now I have the following tree built:
The `libd` build doesn't matter here because it will be our test case.
`libc` built with versions 1.0.0 and 2.0.0, in both cases depending on `libb` 1.0.0
`libb` built with versions 1.0.0 and 2.0.0, in both cases depending on `liba` 1.0.0 - `libb` is header_only!
`liba` built with versions 1.0.0 and 2.0.0

All's good so far; now, what would you expect if we overrode `liba` in `libd`?
Well, the only things that have a listed dependency on `liba` in their conanfiles are `libb` and `libd`.
Okay - but `libb` is header-only - by extension, if `libb` uses any library, then that usage is baked into its consumer `libc`.

Let's now tell `libd` to override the dependency and use `liba` 2.0.0:
```
# edit libd's conanfile to point to liba 2.0.0
conan create . 2.0.0@_/_
```
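For clarity, the edited `libd` recipe mentioned in the comment above would look roughly like this (a sketch; only the `liba` version changes):
```python
#libd (after the edit):
from conans import ConanFile

class Conan(ConanFile):
    name = "libd"
    requires = "libc/1.0.0", "liba/2.0.0"
```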
`libd` builds without any issues.
Conan does not require a rebuild of `libc` - even though it definitely depends on `liba` by proxy. The author of `libc` may not even be aware that `liba` exists, but calls to it will be baked into `libc`'s code.

What are users expected to do in this case?
Are they supposed to investigate all the dependencies of header-only libraries and copy them as-is?

This is actually an extreme example of a choice I see as a problem that Conan made - dependencies are transitively inherited, but their versioning_schema is not.
This may make sense in some cases - for example, when you don't expose any types from your dependency in header files - but that is not something achievable, or done, in most libraries.
Conan relies on the user's ability to detect these cases, rather than providing a solution.

It is an extreme example because I am using 'header_only' here - but because of the way C++ works (include files, ...), things that you depend on usually automatically pollute things that depend on you.
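The scenario above can be written compactly with Conan's own test helpers. This sketch is adapted from the test added in this PR; it assumes the `general.full_transitive_package_id` option introduced alongside this change is enabled, and with it the override of `liba` in `libd` makes the existing `libc` binary come out as Missing, so it has to be rebuilt:
```python
from conans.test.utils.tools import TestClient, GenConanfile

client = TestClient()
client.run("config set general.default_package_id_mode=full_version_mode")
client.run("config set general.full_transitive_package_id=True")

client.save({"conanfile.py": GenConanfile()})
client.run("create . liba/1.0@")
client.run("create . liba/2.0@")

# libb is header-only and requires liba/1.0
client.save({"conanfile.py": GenConanfile().with_require_plain("liba/1.0")
                                            .with_package_id("self.info.header_only()")})
client.run("create . libb/1.0@")

# libc requires libb/1.0 and therefore, transitively, liba
client.save({"conanfile.py": GenConanfile().with_require_plain("libb/1.0")})
client.run("create . libc/1.0@")

# libd overrides liba to 2.0: the prebuilt libc binary no longer matches
client.save({"conanfile.py": GenConanfile().with_require_plain("libc/1.0")
                                            .with_require_plain("liba/2.0")})
client.run("create . libd/1.0@", assert_error=True)
assert "Missing" in client.out  # the libc/1.0 binary is reported as Missing
```
Without that option, the same sequence reuses the cached `libc` binary, which is the behaviour reported above.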
|
[
{
"body": "~~Note: title seems to indicate that this affects only header_only but thats not the case.~~\r\n\r\nTo make matters clear I am using:\r\n`default_package_id_mode = full_version_mode` for this example.\r\n\r\nConsider four projects:\r\n```\r\n#liba:\r\nclass Conan(ConanFile):\r\n name = \"liba\"\r\n pass\r\n#libb:\r\nclass Conan(ConanFile):\r\n name = \"libb\"\r\n requires=\"liba/1.0.0\"\r\n\r\n def package_id(self):\r\n self.info.header_only()\r\n\r\n#libc:\r\nclass Conan(ConanFile):\r\n name=\"libc\"\r\n requires=\"libb/1.0.0\"\r\n#libd:\r\nclass Conan(ConanFile):\r\n name=\"libd\"\r\n requires=\"libc/1.0.0\", \"liba/1.0.0\"\r\n\r\n# build initial versions of libs:\r\ncd liba\r\nconan create . 1.0.0@_/_\r\nconan create . 2.0.0@_/_\r\ncd ../libb\r\nconan create . 1.0.0@_/_\r\nconan create . 2.0.0@_/_\r\ncd ../libc\r\nconan create . 1.0.0@_/_\r\nconan create . 2.0.0@_/_\r\ncd ../libd\r\nconan create . 1.0.0@_/_\r\n```\r\nOkay Now i have following tree built: \r\n`libd` build doesnt matter here because it will be our testcase. \r\n`libc` built with version 1.0.0 and 2.0.0 and in both depending on `libb` 1.0.0 \r\n`libb` built with version 1.0.0 and 2.0.0 and in both depending on `liba` 1.0.0 - `libb` is header_only! \r\n`liba` built with version 1.0.0 and 2.0.0 \r\n\r\nAlls good so far, now what would you expect if we overrode `liba` in `libd`? \r\nWell, the only things that have listed dependency on `liba` in conanfiles are `libb` and `libd`. \r\nOkay - but `libb` is header only - by extension - if `libb` uses any library - then that usage is baked into its `libc`.\r\n\r\nLets now tell `libd` to override the dependency and use `liba 2.0.0` \r\n```\r\n# edit libd' conanfile to point to liba 2.0.0\r\nconan create . 2.0.0@_/_\r\n```\r\n\r\n`libd` builds without any issues. \r\n\r\nConan does not require rebuild of the libc - even though it is definite that it depends on liba by proxy - author of `libc` may not even be aware that `liba` exists - but calls will be baked into its code.\r\n\r\nWhat are the users expected to do in this case?\r\nAre they to investigate all dependencies of header only libraries and copy them as is?\r\n\r\nThis is actually an extreme example of a choice I see as a problem that conan made - that dependencies are transivitely inherited, but their versioning_schema is not. \r\nThis may make sense in some cases - for example when you dont expose in header files any types from your dependency, but its not something thats achievable and done in most of the libraries.\r\nConan relies on users ability to detect these, rather than providing solution. \r\n\r\nIt is an extreme example because I am using 'header_only' here - but because of the way C++ works ( include files (... )) things that you depend on, usually automatically pollute things that depend on you.\r\n\r\n\r\n\r\n",
"number": 6450,
"title": "[bug] conan is too lenient in its specification of header_only libraries."
}
] |
bd5920a40d9707001b31966adb0dfa7d6af00eea
|
{
"head_commit": "a17f8807a4dd53ab20c37420d7fc98a3103051da",
"head_commit_message": "explore if this fix header-only package-id",
"patch_to_review": "diff --git a/conans/client/graph/graph_binaries.py b/conans/client/graph/graph_binaries.py\nindex f87d9115a17..06a1dbea58c 100644\n--- a/conans/client/graph/graph_binaries.py\n+++ b/conans/client/graph/graph_binaries.py\n@@ -274,15 +274,15 @@ def _compute_package_id(node, default_package_id_mode, default_python_requires_i\n # A bit risky to be done now\n conanfile = node.conanfile\n neighbors = node.neighbors()\n- direct_reqs = [] # of PackageReference\n- indirect_reqs = set() # of PackageReference, avoid duplicates\n+ node.id_direct_prefs = set() # of PackageReference\n+ node.id_indirect_prefs = set() # of PackageReference, avoid duplicates\n for neighbor in neighbors:\n- ref, nconan = neighbor.ref, neighbor.conanfile\n- direct_reqs.append(neighbor.pref)\n- indirect_reqs.update(nconan.info.requires.refs())\n+ node.id_direct_prefs.add(neighbor.pref)\n+ node.id_indirect_prefs.update(neighbor.id_direct_prefs)\n+ node.id_indirect_prefs.update(neighbor.id_indirect_prefs)\n \n # Make sure not duplicated\n- indirect_reqs.difference_update(direct_reqs)\n+ node.id_indirect_prefs.difference_update(node.id_direct_prefs)\n python_requires = getattr(conanfile, \"python_requires\", None)\n if python_requires:\n if isinstance(python_requires, dict):\n@@ -291,8 +291,8 @@ def _compute_package_id(node, default_package_id_mode, default_python_requires_i\n python_requires = python_requires.all_refs()\n conanfile.info = ConanInfo.create(conanfile.settings.values,\n conanfile.options.values,\n- direct_reqs,\n- indirect_reqs,\n+ node.id_direct_prefs,\n+ node.id_indirect_prefs,\n default_package_id_mode=default_package_id_mode,\n python_requires=python_requires,\n default_python_requires_id_mode=\ndiff --git a/conans/test/functional/package_id/transitive_header_only_test.py b/conans/test/functional/package_id/transitive_header_only_test.py\nnew file mode 100644\nindex 00000000000..344234d056b\n--- /dev/null\n+++ b/conans/test/functional/package_id/transitive_header_only_test.py\n@@ -0,0 +1,30 @@\n+import textwrap\n+import time\n+import unittest\n+\n+from conans.model.ref import ConanFileReference\n+from conans.test.utils.tools import TestClient, GenConanfile\n+\n+\n+class TransitiveHeaderOnlyTest(unittest.TestCase):\n+\n+ def transitive_header_only_test(self):\n+ # https://github.com/conan-io/conan/issues/6450\n+ client = TestClient()\n+ client.run(\"config set general.default_package_id_mode=full_version_mode\")\n+ client.save({\"conanfile.py\": GenConanfile()})\n+ client.run(\"create . liba/1.0@\")\n+ client.run(\"create . liba/2.0@\")\n+ client.save({\"conanfile.py\": GenConanfile().with_require_plain(\"liba/1.0\")\n+ .with_package_id(\"self.info.header_only()\")})\n+ client.run(\"create . libb/1.0@\")\n+ client.save({\"conanfile.py\": GenConanfile().with_require_plain(\"libb/1.0\")})\n+ client.run(\"create . libc/1.0@\")\n+ client.save({\"conanfile.py\": GenConanfile().with_require_plain(\"libc/1.0\")\n+ .with_require_plain(\"liba/1.0\")})\n+ client.run(\"create . libd/1.0@\")\n+ client.save({\"conanfile.py\": GenConanfile().with_require_plain(\"libc/1.0\")\n+ .with_require_plain(\"liba/2.0\")})\n+ client.run(\"create . libd/1.0@\", assert_error=True)\n+ self.assertIn(\"libc/1.0:bfa6c8f046896806f65c8fe554bd57f235b101e8 - Missing\", client.out)\n+\n"
}
|
[
{
"diff_hunk": "@@ -274,15 +274,15 @@ def _compute_package_id(node, default_package_id_mode, default_python_requires_i\n # A bit risky to be done now\n conanfile = node.conanfile\n neighbors = node.neighbors()\n- direct_reqs = [] # of PackageReference\n- indirect_reqs = set() # of PackageReference, avoid duplicates\n+ node.id_direct_prefs = set() # of PackageReference\n+ node.id_indirect_prefs = set() # of PackageReference, avoid duplicates\n for neighbor in neighbors:\n- ref, nconan = neighbor.ref, neighbor.conanfile\n- direct_reqs.append(neighbor.pref)\n- indirect_reqs.update(nconan.info.requires.refs())\n+ node.id_direct_prefs.add(neighbor.pref)\n+ node.id_indirect_prefs.update(neighbor.id_direct_prefs)\n+ node.id_indirect_prefs.update(neighbor.id_indirect_prefs)\n \n # Make sure not duplicated\n- indirect_reqs.difference_update(direct_reqs)\n+ node.id_indirect_prefs.difference_update(node.id_direct_prefs)",
"line": null,
"original_line": 285,
"original_start_line": null,
"path": "conans/client/graph/graph_binaries.py",
"start_line": null,
"text": "@user1:\nI think this line is not needed. The previous loop adds to the `node.indirect` both, the _direct_ and _indirect_ from its neighbors, so no need to have them all together in `indirect` (and the variable will contain only the indirect ones honoring its name).\n\n@author:\nTotally necessary. Added a check, it is being used. Some direct requirements can slip into the indirect as well."
}
] |
786d91aea43a3ffe629694e45cfc28609fee6e16
|
diff --git a/conans/client/conf/__init__.py b/conans/client/conf/__init__.py
index d4f97b6aaf6..03343800190 100644
--- a/conans/client/conf/__init__.py
+++ b/conans/client/conf/__init__.py
@@ -431,6 +431,14 @@ def default_python_requires_id_mode(self):
return "minor_mode"
return default_package_id_mode
+ @property
+ def full_transitive_package_id(self):
+ try:
+ fix_id = self.get_item("general.full_transitive_package_id")
+ return fix_id.lower() in ("1", "true")
+ except ConanException:
+ return None
+
@property
def short_paths_home(self):
short_paths_home = get_env("CONAN_USER_HOME_SHORT")
diff --git a/conans/client/graph/graph_binaries.py b/conans/client/graph/graph_binaries.py
index 5f79f2389a3..8402bb97b0d 100644
--- a/conans/client/graph/graph_binaries.py
+++ b/conans/client/graph/graph_binaries.py
@@ -20,6 +20,7 @@ def __init__(self, cache, output, remote_manager):
self._remote_manager = remote_manager
# These are the nodes with pref (not including PREV) that have been evaluated
self._evaluated = {} # {pref: [nodes]}
+ self._fixed_package_id = cache.config.full_transitive_package_id
@staticmethod
def _check_update(upstream_manifest, package_folder, output, node):
@@ -262,8 +263,7 @@ def _propagate_options(node):
conanfile.options.clear_unused(transitive_reqs)
conanfile.options.freeze()
- @staticmethod
- def _compute_package_id(node, default_package_id_mode, default_python_requires_id_mode):
+ def _compute_package_id(self, node, default_package_id_mode, default_python_requires_id_mode):
"""
Compute the binary package ID of this node
:param node: the node to compute the package-ID
@@ -273,15 +273,27 @@ def _compute_package_id(node, default_package_id_mode, default_python_requires_i
# A bit risky to be done now
conanfile = node.conanfile
neighbors = node.neighbors()
- direct_reqs = [] # of PackageReference
- indirect_reqs = set() # of PackageReference, avoid duplicates
- for neighbor in neighbors:
- ref, nconan = neighbor.ref, neighbor.conanfile
- direct_reqs.append(neighbor.pref)
- indirect_reqs.update(nconan.info.requires.refs())
+ if not self._fixed_package_id:
+ direct_reqs = [] # of PackageReference
+ indirect_reqs = set() # of PackageReference, avoid duplicates
+ for neighbor in neighbors:
+ ref, nconan = neighbor.ref, neighbor.conanfile
+ direct_reqs.append(neighbor.pref)
+ indirect_reqs.update(nconan.info.requires.refs())
+ # Make sure not duplicated
+ indirect_reqs.difference_update(direct_reqs)
+ else:
+ node.id_direct_prefs = set() # of PackageReference
+ node.id_indirect_prefs = set() # of PackageReference, avoid duplicates
+ for neighbor in neighbors:
+ node.id_direct_prefs.add(neighbor.pref)
+ node.id_indirect_prefs.update(neighbor.id_direct_prefs)
+ node.id_indirect_prefs.update(neighbor.id_indirect_prefs)
+ # Make sure not duplicated, totally necessary
+ node.id_indirect_prefs.difference_update(node.id_direct_prefs)
+ direct_reqs = node.id_direct_prefs
+ indirect_reqs = node.id_indirect_prefs
- # Make sure not duplicated
- indirect_reqs.difference_update(direct_reqs)
python_requires = getattr(conanfile, "python_requires", None)
if python_requires:
if isinstance(python_requires, dict):
diff --git a/conans/test/functional/package_id/transitive_header_only_test.py b/conans/test/functional/package_id/transitive_header_only_test.py
index 8145f04095d..b970ca80370 100644
--- a/conans/test/functional/package_id/transitive_header_only_test.py
+++ b/conans/test/functional/package_id/transitive_header_only_test.py
@@ -65,3 +65,93 @@ def transitive_major_mode_test(self):
# But LibD package ID changes and is missing, because it depends transitively on LibA
self.assertIn("libd/1.0:39906c34335d9ad465711e847688c4a27894af0f - Missing", client.out)
self.assertIn("libe/1.0:204261ad030cca3acf07c7a58b169e4257056ba1 - Build", client.out)
+
+ def transitive_unrelated_test(self):
+ # https://github.com/conan-io/conan/issues/6450
+ client = TestClient()
+ client.run("config set general.default_package_id_mode=full_version_mode")
+ # LibA
+ client.save({"conanfile.py": GenConanfile()})
+ client.run("create . liba/1.0@")
+ client.run("create . liba/2.0@")
+ # libB -> LibA
+ client.save({"conanfile.py": GenConanfile().with_require_plain("liba/1.0")})
+ client.run("create . libb/1.0@")
+ # libC -> libB
+ unrelated = "self.info.requires['libb'].unrelated_mode()"
+ client.save({"conanfile.py": GenConanfile().with_require_plain("libb/1.0")
+ .with_package_id(unrelated)})
+ client.run("create . libc/1.0@")
+ # LibD -> LibC
+ client.save({"conanfile.py": GenConanfile().with_require_plain("libc/1.0")})
+ client.run("create . libd/1.0@")
+ # LibE -> LibD, LibA/2.0
+ client.save({"conanfile.py": GenConanfile().with_require_plain("libd/1.0")
+ .with_require_plain("liba/2.0")})
+ client.run("create . libe/1.0@", assert_error=True)
+ self.assertIn("liba/2.0:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Cache", client.out)
+ self.assertIn("libb/1.0:e71235a6f57633221a2b85f9b6aca14cda69e1fd - Missing", client.out)
+ self.assertIn("libc/1.0:e3884c6976eb7debb8ec57aada7c0c2beaabe8ac - Missing", client.out)
+ self.assertIn("libd/1.0:9b0b7b0905c9bc2cb9b7329f842b3b7c6663e8c3 - Missing", client.out)
+
+ def transitive_second_level_header_only_test(self):
+ # https://github.com/conan-io/conan/issues/6450
+ client = TestClient()
+ client.run("config set general.default_package_id_mode=full_version_mode")
+ # LibA
+ client.save({"conanfile.py": GenConanfile()})
+ client.run("create . liba/1.0@")
+ client.run("create . liba/2.0@")
+ # libB -> LibA
+ client.save({"conanfile.py": GenConanfile().with_require_plain("liba/1.0")})
+ client.run("create . libb/1.0@")
+ # libC -> libB
+
+ unrelated = "self.info.header_only()"
+ client.save({"conanfile.py": GenConanfile().with_require_plain("libb/1.0")
+ .with_package_id(unrelated)})
+ client.run("create . libc/1.0@")
+ # LibD -> LibC
+ client.save({"conanfile.py": GenConanfile().with_require_plain("libc/1.0")})
+ client.run("create . libd/1.0@")
+ self.assertIn("libc/1.0:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Cache", client.out)
+
+ # LibE -> LibD, LibA/2.0
+ client.save({"conanfile.py": GenConanfile().with_require_plain("libd/1.0")
+ .with_require_plain("liba/2.0")})
+ client.run("create . libe/1.0@", assert_error=True) # LibD is NOT missing!
+ self.assertIn("libd/1.0:119e0b2903330cef59977f8976cb82a665b510c1 - Cache", client.out)
+ # USE THE NEW FIXED PACKAGE_ID
+ client.run("config set general.full_transitive_package_id=1")
+ client.run("create . libe/1.0@", assert_error=True)
+ self.assertIn("liba/2.0:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Cache", client.out)
+ self.assertIn("libb/1.0:e71235a6f57633221a2b85f9b6aca14cda69e1fd - Missing", client.out)
+ self.assertIn("libc/1.0:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Cache", client.out)
+ self.assertIn("libd/1.0:95b14a919aa70f9a7e24afbf48d1101cff344a67 - Missing", client.out)
+
+ def transitive_header_only_test(self):
+ # https://github.com/conan-io/conan/issues/6450
+ client = TestClient()
+ client.run("config set general.default_package_id_mode=full_version_mode")
+ client.save({"conanfile.py": GenConanfile()})
+ client.run("create . liba/1.0@")
+ client.run("create . liba/2.0@")
+ client.save({"conanfile.py": GenConanfile().with_require_plain("liba/1.0")
+ .with_package_id("self.info.header_only()")})
+ client.run("create . libb/1.0@")
+ client.save({"conanfile.py": GenConanfile().with_require_plain("libb/1.0")})
+ client.run("create . libc/1.0@")
+ client.save({"conanfile.py": GenConanfile().with_require_plain("libc/1.0")
+ .with_require_plain("liba/1.0")})
+ client.run("create . libd/1.0@")
+ client.save({"conanfile.py": GenConanfile().with_require_plain("libc/1.0")
+ .with_require_plain("liba/2.0")})
+
+ client.run("create . libd/1.0@") # Doesn't complain it is missing a binary!
+ self.assertIn(" libc/1.0:fd60a00caf13b07bfce8690315c9e953aafd664b - Cache", client.out)
+ # USE THE NEW FIXED PACKAGE_ID
+ client.run("config set general.full_transitive_package_id=1")
+ client.run("create . libd/1.0@", assert_error=True)
+ self.assertIn("liba/2.0:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Cache", client.out)
+ self.assertIn("libb/1.0:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Cache", client.out)
+ self.assertIn("libc/1.0:bfa6c8f046896806f65c8fe554bd57f235b101e8 - Missing", client.out)
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
}
|
|
conan-io__conan-7167@b4fccf6
|
conan-io/conan
|
Python
| 7,167
|
Improve '--update' docstring
|
Changelog: Feature: More detailed description for `--update` argument.
Docs: https://github.com/conan-io/docs/pull/1778
This flag appears in many commands: `test`, `create`, `install`, `info`, `workspace`, `graph lock`
close https://github.com/conan-io/conan/issues/7146
|
2020-06-09T08:18:40Z
|
Improve docs for '--update' argument in commands
I got the following problem:
In my project, I normally build my packages with `conan create`.
Now I wanted to use `conan graph lock` to pick up a previous build from Artifactory.
Now there is a different behavior between both commands.
I use semver versioning and also the `include_prerelease` option.
Unluckily I still got a library which requires a library without `include_prerelease`.
So I got:
**ProjA** requires **LibB** & **LibC**
**LibB** requires **LibC**.
**ProjA** has the following requirement for **LibC**: `>=1.6.5-dev <2.0.0-dev, include_prerelease=True`
**LibB** this one for **LibC**: `^1.3.0-dev`
**LibC** exists on the Artifactory in two versions:
* `LibC/1.6.6+master.g34a1cfb@demo/testing` -> stable version
* `LibC/1.6.7-dev.0+master.g1fea313@demo/testing` -> prerelease version
With `conan create` it uses:
* `LibC/1.6.7-dev.0+master.g1fea313@demo/testing` -> prerelease version
and with `conan graph lock`:
* `LibC/1.6.6+master.g34a1cfb@demo/testing` -> stable version
The question is: what is the correct result?
Anyway, it should be the same for both commands.
### Environment Details (include every applicable attribute)
* Operating System+version: Ubuntu LTS 18.04 and Win10
* Compiler+version: GCC8
* Conan version: 1.25.2
* Python version: 3.6.9 and 3.7.5
### Steps to reproduce (Include if Applicable)
|
Having a look at this issue, I've created a repo here (https://github.com/jgsogo/issue-7146) to reproduce the scenario with minimal recipes.
First of all, as it is described, I get an error:
```bash
⇒ conan create projA.py
...
ERROR: Version range '^1.3.0-dev' required by 'LibB/1.0@demo/testing' not valid for downstream requirement 'LibC/1.6.7-dev.0+master.g1fea313@demo/testing'
```
```bash
⇒ conan graph lock projA.py
...
ERROR: Version range '^1.3.0-dev' required by 'LibB/1.0@demo/testing' not valid for downstream requirement 'LibC/1.6.7-dev.0+master.g1fea313@demo/testing'
```
---
I've modified `LibB` to include prereleases:
```
requires = "LibC/[^1.3.0-dev, include_prerelease=True]@demo/testing"
```
then:
```bash
⇒ conan create projA.py
...
WARN: LibB/1.0@demo/testing: requirement LibC/[^1.3.0-dev, include_prerelease=True]@demo/testing overridden by ProjA/1.0 to LibC/1.6.7-dev.0+master.g1fea313@demo/testing
...
Version ranges solved
Version range '>=1.6.5-dev <2.0.0-dev, include_prerelease=True' required by 'ProjA/1.0' resolved to 'LibC/1.6.7-dev.0+master.g1fea313@demo/testing' in local cache
Version range '^1.3.0-dev, include_prerelease=True' required by 'LibB/1.0@demo/testing' valid for downstream requirement 'LibC/1.6.7-dev.0+master.g1fea313@demo/testing'
Installing package: ProjA/1.0
Requirements
LibB/1.0@demo/testing from local cache - Cache
LibC/1.6.7-dev.0+master.g1fea313@demo/testing from local cache - Cache
ProjA/1.0 from local cache - Cache
Packages
LibB/1.0@demo/testing:f065f71c9f26013ecf404d3b8a378899272d5edf - Missing
LibC/1.6.7-dev.0+master.g1fea313@demo/testing:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Cache
ProjA/1.0:48b788393343665082ca528370945905800b2c44 - Build
```
and
```bash
⇒ conan graph lock projA.py
{'1.6.6+master.g34a1cfb': LibC/1.6.6+master.g34a1cfb@demo/testing, '1.6.7-dev.0+master.g1fea313': LibC/1.6.7-dev.0+master.g1fea313@demo/testing}
1.6.7-dev.0+master.g1fea313
WARN: LibB/1.0@demo/testing: requirement LibC/[^1.3.0-dev, include_prerelease=True]@demo/testing overridden by ProjA/1.0 to LibC/1.6.7-dev.0+master.g1fea313@demo/testing
Version ranges solved
Version range '>=1.6.5-dev <2.0.0-dev, include_prerelease=True' required by 'projA.py (ProjA/1.0)' resolved to 'LibC/1.6.7-dev.0+master.g1fea313@demo/testing' in local cache
Version range '^1.3.0-dev, include_prerelease=True' required by 'LibB/1.0@demo/testing' valid for downstream requirement 'LibC/1.6.7-dev.0+master.g1fea313@demo/testing'
Requirements
LibB/1.0@demo/testing from local cache - Cache
LibC/1.6.7-dev.0+master.g1fea313@demo/testing from local cache - Cache
Packages
LibB/1.0@demo/testing:f065f71c9f26013ecf404d3b8a378899272d5edf - Missing
LibC/1.6.7-dev.0+master.g1fea313@demo/testing:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Cache
```
---
I'm getting the same version with either command. Can you try the recipes from the repo? Do you get the same output? Is there any difference that should be taken into account to reproduce your scenario?
About resolution order, Conan should always build the graph in the same order, but when there is a diamond Conan resolves the branch that is computed first and then checks that the requirements satisfy the restrictions coming from the different branches (if not, it raises an error).
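For reference, a minimal sketch of the diamond being discussed (each recipe would live in its own conanfile.py; the class names are assumed, the references come from the reproduction above):
```python
from conans import ConanFile

# libB.py (sketch): LibB also opts into prereleases so both branches can agree
class LibB(ConanFile):
    name = "LibB"
    version = "1.0"
    requires = "LibC/[^1.3.0-dev, include_prerelease=True]@demo/testing"

# projA.py (sketch): ProjA requires LibB and a prerelease-enabled range of LibC
class ProjA(ConanFile):
    name = "ProjA"
    version = "1.0"
    requires = ("LibB/1.0@demo/testing",
                "LibC/[>=1.6.5-dev <2.0.0-dev, include_prerelease=True]@demo/testing")
```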
Ok, sorry for the hassle. But it was again the update (this time at `conan graph lock`, see #6294) where I stumbled.
After removing the conan cache everything works fine.
Maybe the documentation of the misleading "update" should be improved?
I agree `--update` is confusing. Let's improve the docs and keep the other issue for Conan v2.0 if a new behavior is needed.
Do you have any suggestion for the help string?
Thanks!
"Check updates exist for the current `path_or_reference` from upstream remotes.
It's not checking for updates its dependencies! Clean your local cache instead, when you want to update also the dependencies."
|
[
{
"body": "I got the following problem:\r\nIn my project, I normally build my packages with `conan create`.\r\nNow I wanted to use `conan graph lock` to pickup up a previous build from Artifactory.\r\n\r\nNow there is a different behavior between both commands.\r\nI use semver versioning and also the `include_prerelease` option. \r\nUnluckily I still got a library, which require a library without `include_prerelease`.\r\n\r\nSo I got:\r\n\r\n**ProjA** require **LibB** & **LibC**\r\n**LibB** require **LibC**.\r\n\r\n**ProjA** has the following requirement for **LibC**: `>=1.6.5-dev <2.0.0-dev, include_prerelease=True`\r\n**LibB** this one for **LibC**: `^1.3.0-dev`\r\n\r\n**LibC** exits on the Artifactory in two versions:\r\n * `LibC/1.6.6+master.g34a1cfb@demo/testing` -> stable version\r\n * `LibC/1.6.7-dev.0+master.g1fea313@demo/testing` -> prerelease version\r\n\r\nWith `conan create` is uses:\r\n * `LibC/1.6.7-dev.0+master.g1fea313@demo/testing` -> prerelease version \r\n\r\nand with `conan graph lock`:\r\n * `LibC/1.6.6+master.g34a1cfb@demo/testing` -> stable version\r\n\r\nIt's the question, what is the correct result. \r\nAnyway it should be for both commands the same. \r\n\r\n\r\n### Environment Details (include every applicable attribute)\r\n * Operating System+version: Ubuntu LTS 18.04 and Win10\r\n * Compiler+version: GCC8\r\n * Conan version: 1.25.2\r\n * Python version: 3.6.9 and 3.7.5\r\n\r\n### Steps to reproduce (Include if Applicable)\r\n\r\n\r\n\r\n",
"number": 7146,
"title": "Improve docs for '--update' argument in commands"
}
] |
b6223181588b2abbaa549819b6536ed70fe02756
|
{
"head_commit": "b4fccf698c5de3ac1ed60689a9534c3ed78bd2cf",
"head_commit_message": "update --update docstring",
"patch_to_review": "diff --git a/conans/client/command.py b/conans/client/command.py\nindex c055e641936..0d60de6de8e 100644\n--- a/conans/client/command.py\n+++ b/conans/client/command.py\n@@ -2080,7 +2080,9 @@ def _add_common_install_arguments(parser, build_help, lockfile=True):\n parser.add_argument(\"-r\", \"--remote\", action=OnceArgument,\n help='Look in the specified remote server')\n parser.add_argument(\"-u\", \"--update\", action='store_true', default=False,\n- help=\"Check updates exist from upstream remotes\")\n+ help=\"Check updates exist for the current reference from upstream remotes.\"\n+ \" It doesn't check for updates for the dependencies, clean the local\"\n+ \" cache to update also dependencies.\")\n if lockfile:\n parser.add_argument(\"-l\", \"--lockfile\", action=OnceArgument, nargs='?', const=\".\",\n help=\"Path to a lockfile or folder containing 'conan.lock' file. \"\n"
}
|
[
{
"diff_hunk": "@@ -2080,7 +2080,9 @@ def _add_common_install_arguments(parser, build_help, lockfile=True):\n parser.add_argument(\"-r\", \"--remote\", action=OnceArgument,\n help='Look in the specified remote server')\n parser.add_argument(\"-u\", \"--update\", action='store_true', default=False,\n- help=\"Check updates exist from upstream remotes\")\n+ help=\"Check updates exist for the current reference from upstream remotes.\"\n+ \" It doesn't check for updates for the dependencies, clean the local\"",
"line": null,
"original_line": 2084,
"original_start_line": null,
"path": "conans/client/command.py",
"start_line": null,
"text": "@user1:\nbut --update checks for updates in dependencies too.\n\n@author:\nOk. We need to improve this message, noone knows how it works, there are lots of issues around this flag. Let's write it.\r\n\r\nIt appears on many commands: `test`, `create`, `install`, `info`, `workspace`, `graph lock`. Always the same behavior? What is that behavior?"
}
] |
a34d4873db618a405564cce3ca66496833c8cf69
|
diff --git a/conans/client/command.py b/conans/client/command.py
index c7929f2e873..428bd6d4246 100644
--- a/conans/client/command.py
+++ b/conans/client/command.py
@@ -661,7 +661,10 @@ def info(self, *args):
build_help = ("Given a build policy, return an ordered list of packages that would be built"
" from sources during the install command")
- _add_common_install_arguments(parser, build_help=build_help)
+ update_help = "Will check if updates of the dependencies exist in the remotes " \
+ "(a new version that satisfies a version range, a new revision or a newer " \
+ "recipe if not using revisions)."
+ _add_common_install_arguments(parser, update_help=update_help, build_help=build_help)
args = parser.parse_args(*args)
profile_build = ProfileData(profiles=args.profile_build, settings=args.settings_build,
@@ -2082,14 +2085,22 @@ def _add_manifests_arguments(parser):
action=OnceArgument)
-def _add_common_install_arguments(parser, build_help, lockfile=True):
+def _add_common_install_arguments(parser, build_help, update_help=None, lockfile=True):
if build_help:
parser.add_argument("-b", "--build", action=Extender, nargs="?", help=build_help)
parser.add_argument("-r", "--remote", action=OnceArgument,
help='Look in the specified remote server')
+
+ if not update_help:
+ update_help = "Will check the remote and in case a newer version and/or revision of " \
+ "the dependencies exists there, it will install those in the local cache. " \
+ "When using version ranges, it will install the latest version that satisfies " \
+ "the range. Also, if using revisions, it will update to the latest revision " \
+ "for the resolved version range."
+
parser.add_argument("-u", "--update", action='store_true', default=False,
- help="Check updates exist from upstream remotes")
+ help=update_help)
if lockfile:
parser.add_argument("-l", "--lockfile", action=OnceArgument, nargs='?', const=".",
help="Path to a lockfile or folder containing 'conan.lock' file. "
|
{
"difficulty": "low",
"estimated_review_effort": 1,
"problem_domain": "Documentation Updates"
}
|
conan-io__conan-6138@62ae836
|
conan-io/conan
|
Python
| 6,138
|
Fix several issues with download command and revisions
|
Changelog: Bugfix: Fix different problems when using `conan download` with revisions.
Docs: Omit
This PR fixes the following problems when using revisions with the `conan download` command:
- If you had revisions disabled and added a revision to the download command, it did not fail and even created the revision in the cache.
- With revisions enabled but without user and channel, the command failed.
- With revisions enabled and a recipe revision specified, it did not download that recipe revision but the latest one.
- With revisions enabled and a package revision specified, it did not download the correct package revision.
It also fixes a failure when using `create` in the `TurboTestClient` without user and channel.
Closes #6106
- [X] Refer to the issue that supports this Pull Request.
- [X] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.
- [X] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [X] I've followed the PEP8 style guides for Python code.
- [ ] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.
<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>
|
2019-11-27T10:53:52Z
|
[bug] conan download pkg/version@user/channel#randomrev downloads latest
Latest 1.20, Windows 10, python 3.7
With revisions disabled this doesn't fail (it should). And worse, it creates an entry with revision ``randomrev`` in the cache.
Also, it seems that it is not possible to ``conan download`` a package with a given package revision.
|
It is important to follow up here; this could be problematic or reveal underlying problems, for example when using lockfiles, where the locked version is not the one being retrieved.
|
[
{
"body": "Latest 1.20, Windows 10, python 3.7\r\n\r\nWith revisions disable this doesn't fail (it should). And worse, it creates an entry with revision ``randomrev`` in the cache.\r\n\r\nAlso, it seems that it is not possible to ``conan download`` a package with a given package revision.",
"number": 6106,
"title": "[bug] conan download pkg/version@user/channel#randomrev downloads latest"
}
] |
d42ec055d459489c184b160cafbd3f200ceb6d41
|
{
"head_commit": "62ae836c07efeaf0f0b126d1baee6309cf58bceb",
"head_commit_message": "add test without user channel",
"patch_to_review": "diff --git a/conans/client/command.py b/conans/client/command.py\nindex e0cc756b3cf..687a1307f62 100644\n--- a/conans/client/command.py\n+++ b/conans/client/command.py\n@@ -403,8 +403,12 @@ def download(self, *args):\n else:\n reference = repr(pref.ref)\n if pref.ref.user is None:\n- reference += \"@\"\n- packages_list = [pref.id]\n+ if pref.ref.revision:\n+ reference = \"%s/%s@#%s\" % (pref.ref.name, pref.ref.version, pref.ref.revision)\n+ else:\n+ reference += \"@\"\n+ pkgref = \"{}#{}\".format(pref.id, pref.revision) if pref.revision else \"{}\".format(pref.id)\n+ packages_list = [pkgref]\n if args.package:\n raise ConanException(\"Use a full package reference (preferred) or the `--package`\"\n \" command argument, but not both.\")\ndiff --git a/conans/client/conan_api.py b/conans/client/conan_api.py\nindex 2d3df0d106a..866e73206f4 100644\n--- a/conans/client/conan_api.py\n+++ b/conans/client/conan_api.py\n@@ -430,6 +430,9 @@ def download(self, reference, remote_name=None, packages=None, recipe=False):\n # Install packages without settings (fixed ids or all)\n if check_valid_ref(reference):\n ref = ConanFileReference.loads(reference)\n+ if ref.revision and not self.app.config.revisions_enabled:\n+ raise ConanException(\"Revisions not enabled in the client, specify a \"\n+ \"reference without revision\")\n if packages and ref.revision is None:\n for package_id in packages:\n if \"#\" in package_id:\ndiff --git a/conans/test/functional/command/download_test.py b/conans/test/functional/command/download_test.py\nindex d133bf6404e..31dc33ef8d7 100644\n--- a/conans/test/functional/command/download_test.py\n+++ b/conans/test/functional/command/download_test.py\n@@ -220,3 +220,57 @@ def no_user_channel_test(self):\n client.run(\"download pkg/1.0@\")\n self.assertIn(\"pkg/1.0: Downloading pkg/1.0:%s\" % NO_SETTINGS_PACKAGE_ID, client.out)\n self.assertIn(\"pkg/1.0: Package installed %s\" % NO_SETTINGS_PACKAGE_ID, client.out)\n+\n+ def download_revs_disabled_with_rrev_test(self):\n+ # https://github.com/conan-io/conan/issues/6106\n+ client = TestClient(default_server_user=True, revisions_enabled=False)\n+ client.save({\"conanfile.py\": GenConanfile()})\n+ client.run(\"create . 
pkg/1.0@user/channel\")\n+ client.run(\"upload * --all --confirm\")\n+ client.run(\"remove * -f\")\n+ client.run(\"download pkg/1.0@user/channel#fakerevision\", assert_error=True)\n+ self.assertIn(\n+ \"ERROR: Revisions not enabled in the client, specify a reference without revision\",\n+ client.out)\n+\n+ def download_revs_enabled_with_rrev_test(self):\n+ ref = ConanFileReference.loads(\"pkg/1.0@user/channel\")\n+ client = TurboTestClient(default_server_user=True, revisions_enabled=True)\n+ pref = client.create(ref, conanfile=GenConanfile())\n+ client.run(\"upload * --all --confirm\")\n+ client.run(\"remove * -f\")\n+ client.run(\"download pkg/1.0@user/channel#{}\".format(pref.ref.revision))\n+ self.assertIn(\"pkg/1.0@user/channel: Package installed {}\".format(pref.id), client.out)\n+ search_result = client.search(\"pkg/1.0@user/channel --revisions\")[0]\n+ self.assertIn(pref.ref.revision, search_result[\"revision\"])\n+\n+ ref = ConanFileReference.loads(\"pkg/1.0@\")\n+ servers = {\"default\": TestServer([(\"*/*@*/*\", \"*\")], [(\"*/*@*/*\", \"*\")],\n+ users={\"user\": \"password\"})}\n+ client = TurboTestClient(servers=servers, revisions_enabled=True,\n+ users={\"default\": [(\"user\", \"password\")]})\n+ pref = client.create(ref, conanfile=GenConanfile())\n+ client.run(\"upload * --all --confirm\")\n+ client.run(\"remove * -f\")\n+ client.run(\"download pkg/1.0@#{}\".format(pref.ref.revision))\n+ self.assertIn(\"pkg/1.0: Package installed {}\".format(pref.id), client.out)\n+ search_result = client.search(\"pkg/1.0@ --revisions\")[0]\n+ self.assertIn(pref.ref.revision, search_result[\"revision\"])\n+\n+\n+ def download_revs_enabled_with_prev_test(self):\n+ # https://github.com/conan-io/conan/issues/6106\n+ ref = ConanFileReference.loads(\"pkg/1.0@user/channel\")\n+ client = TurboTestClient(default_server_user=True, revisions_enabled=True)\n+ pref = client.create(ref, conanfile=GenConanfile())\n+ client.run(\"upload * --all --confirm\")\n+ client.run(\"remove * -f\")\n+ client.run(\"download pkg/1.0@user/channel#{}:{}#{}\".format(pref.ref.revision,\n+ pref.id,\n+ pref.revision))\n+ self.assertIn(\"pkg/1.0@user/channel: Package installed {}\".format(pref.id), client.out)\n+ search_result = client.search(\"pkg/1.0@user/channel --revisions\")[0]\n+ self.assertIn(pref.ref.revision, search_result[\"revision\"])\n+ search_result = client.search(\n+ \"pkg/1.0@user/channel#{}:{} --revisions\".format(pref.ref.revision, pref.id))[0]\n+ self.assertIn(pref.revision, search_result[\"revision\"])\ndiff --git a/conans/test/utils/tools.py b/conans/test/utils/tools.py\nindex aa9e3f82ecb..65fdd689907 100644\n--- a/conans/test/utils/tools.py\n+++ b/conans/test/utils/tools.py\n@@ -1236,7 +1236,8 @@ def export(self, ref, conanfile=GenConanfile(), args=None, assert_error=False):\n def create(self, ref, conanfile=GenConanfile(), args=None, assert_error=False):\n if conanfile:\n self.save({\"conanfile.py\": conanfile})\n- self.run(\"create . {} {} --json {}\".format(ref.full_str(),\n+ full_str = \"{}@\".format(ref.full_str()) if not ref.user else ref.full_str()\n+ self.run(\"create . {} {} --json {}\".format(full_str,\n args or \"\", self.tmp_json_name),\n assert_error=assert_error)\n rrev = self.cache.package_layout(ref).recipe_revision()\n"
}
|
[
{
"diff_hunk": "@@ -220,3 +220,57 @@ def no_user_channel_test(self):\n client.run(\"download pkg/1.0@\")\n self.assertIn(\"pkg/1.0: Downloading pkg/1.0:%s\" % NO_SETTINGS_PACKAGE_ID, client.out)\n self.assertIn(\"pkg/1.0: Package installed %s\" % NO_SETTINGS_PACKAGE_ID, client.out)\n+\n+ def download_revs_disabled_with_rrev_test(self):\n+ # https://github.com/conan-io/conan/issues/6106\n+ client = TestClient(default_server_user=True, revisions_enabled=False)\n+ client.save({\"conanfile.py\": GenConanfile()})\n+ client.run(\"create . pkg/1.0@user/channel\")\n+ client.run(\"upload * --all --confirm\")\n+ client.run(\"remove * -f\")\n+ client.run(\"download pkg/1.0@user/channel#fakerevision\", assert_error=True)",
"line": 241,
"original_line": 231,
"original_start_line": null,
"path": "conans/test/functional/command/download_test.py",
"start_line": null,
"text": "@user1:\nYou probably can write this test without server and uploading."
},
{
"diff_hunk": "@@ -220,3 +220,57 @@ def no_user_channel_test(self):\n client.run(\"download pkg/1.0@\")\n self.assertIn(\"pkg/1.0: Downloading pkg/1.0:%s\" % NO_SETTINGS_PACKAGE_ID, client.out)\n self.assertIn(\"pkg/1.0: Package installed %s\" % NO_SETTINGS_PACKAGE_ID, client.out)\n+\n+ def download_revs_disabled_with_rrev_test(self):\n+ # https://github.com/conan-io/conan/issues/6106\n+ client = TestClient(default_server_user=True, revisions_enabled=False)",
"line": null,
"original_line": 226,
"original_start_line": null,
"path": "conans/test/functional/command/download_test.py",
"start_line": null,
"text": "@user2:\nTests that hardcode revisions should use something like:\r\n\r\n```python\r\[email protected](get_env(\"TESTING_REVISIONS_ENABLED\", False), \"This test is insane to be \"\r\n \"tested with revisions, in \"\r\n \"general all the module\")\r\n```\r\nSo they are only run in the configuration they want to run, not twice."
},
{
"diff_hunk": "@@ -403,8 +403,12 @@ def download(self, *args):\n else:\n reference = repr(pref.ref)\n if pref.ref.user is None:\n- reference += \"@\"\n- packages_list = [pref.id]\n+ if pref.ref.revision:\n+ reference = \"%s/%s@#%s\" % (pref.ref.name, pref.ref.version, pref.ref.revision)\n+ else:\n+ reference += \"@\"\n+ pkgref = \"{}#{}\".format(pref.id, pref.revision) if pref.revision else \"{}\".format(pref.id)",
"line": null,
"original_line": 410,
"original_start_line": null,
"path": "conans/client/command.py",
"start_line": null,
"text": "@user1:\nwhy ``\"{}\".format(pref.id)`` instead of just ``pref.id``?"
}
] |
187d395a601d4c9969306cc19f67c1d170c13d15
|
diff --git a/conans/client/command.py b/conans/client/command.py
index e0cc756b3cf..fd13ef6552c 100644
--- a/conans/client/command.py
+++ b/conans/client/command.py
@@ -403,8 +403,12 @@ def download(self, *args):
else:
reference = repr(pref.ref)
if pref.ref.user is None:
- reference += "@"
- packages_list = [pref.id]
+ if pref.ref.revision:
+ reference = "%s/%s@#%s" % (pref.ref.name, pref.ref.version, pref.ref.revision)
+ else:
+ reference += "@"
+ pkgref = "{}#{}".format(pref.id, pref.revision) if pref.revision else pref.id
+ packages_list = [pkgref]
if args.package:
raise ConanException("Use a full package reference (preferred) or the `--package`"
" command argument, but not both.")
diff --git a/conans/client/conan_api.py b/conans/client/conan_api.py
index 2d3df0d106a..866e73206f4 100644
--- a/conans/client/conan_api.py
+++ b/conans/client/conan_api.py
@@ -430,6 +430,9 @@ def download(self, reference, remote_name=None, packages=None, recipe=False):
# Install packages without settings (fixed ids or all)
if check_valid_ref(reference):
ref = ConanFileReference.loads(reference)
+ if ref.revision and not self.app.config.revisions_enabled:
+ raise ConanException("Revisions not enabled in the client, specify a "
+ "reference without revision")
if packages and ref.revision is None:
for package_id in packages:
if "#" in package_id:
diff --git a/conans/test/functional/command/download_test.py b/conans/test/functional/command/download_test.py
index d133bf6404e..423a16bc61d 100644
--- a/conans/test/functional/command/download_test.py
+++ b/conans/test/functional/command/download_test.py
@@ -5,6 +5,7 @@
from conans.model.ref import ConanFileReference
from conans.test.utils.tools import (TestClient, TestServer, NO_SETTINGS_PACKAGE_ID, TurboTestClient,
GenConanfile)
+from conans.util.env_reader import get_env
from conans.util.files import load
@@ -220,3 +221,75 @@ def no_user_channel_test(self):
client.run("download pkg/1.0@")
self.assertIn("pkg/1.0: Downloading pkg/1.0:%s" % NO_SETTINGS_PACKAGE_ID, client.out)
self.assertIn("pkg/1.0: Package installed %s" % NO_SETTINGS_PACKAGE_ID, client.out)
+
+ @unittest.skipIf(get_env("TESTING_REVISIONS_ENABLED", False), "No sense with revs")
+ def download_revs_disabled_with_rrev_test(self):
+ # https://github.com/conan-io/conan/issues/6106
+ client = TestClient(revisions_enabled=False)
+ client.run("download pkg/1.0@user/channel#fakerevision", assert_error=True)
+ self.assertIn(
+ "ERROR: Revisions not enabled in the client, specify a reference without revision",
+ client.out)
+
+ @unittest.skipUnless(get_env("TESTING_REVISIONS_ENABLED", False), "Only revisions")
+ def download_revs_enabled_with_fake_rrev_test(self):
+ client = TestClient(default_server_user=True, revisions_enabled=True)
+ client.save({"conanfile.py": GenConanfile()})
+ client.run("create . pkg/1.0@user/channel")
+ client.run("upload * --all --confirm")
+ client.run("remove * -f")
+ client.run("download pkg/1.0@user/channel#fakerevision", assert_error=True)
+ self.assertIn("ERROR: Recipe not found: 'pkg/1.0@user/channel'", client.out)
+
+ @unittest.skipUnless(get_env("TESTING_REVISIONS_ENABLED", False), "Only revisions")
+ def download_revs_enabled_with_rrev_test(self):
+ ref = ConanFileReference.loads("pkg/1.0@user/channel")
+ client = TurboTestClient(default_server_user=True, revisions_enabled=True)
+ pref = client.create(ref, conanfile=GenConanfile())
+ client.run("upload pkg/1.0@user/channel --all --confirm")
+ # create new revision from recipe
+ client.create(ref, conanfile=GenConanfile().with_build_msg("new revision"))
+ client.run("upload pkg/1.0@user/channel --all --confirm")
+ client.run("remove * -f")
+ client.run("download pkg/1.0@user/channel#{}".format(pref.ref.revision))
+ self.assertIn("pkg/1.0@user/channel: Package installed {}".format(pref.id), client.out)
+ search_result = client.search("pkg/1.0@user/channel --revisions")[0]
+ self.assertIn(pref.ref.revision, search_result["revision"])
+
+ @unittest.skipUnless(get_env("TESTING_REVISIONS_ENABLED", False), "Only revisions")
+ def download_revs_enabled_with_rrev_no_user_channel_test(self):
+ ref = ConanFileReference.loads("pkg/1.0@")
+ servers = {"default": TestServer([("*/*@*/*", "*")], [("*/*@*/*", "*")],
+ users={"user": "password"})}
+ client = TurboTestClient(servers=servers, revisions_enabled=True,
+ users={"default": [("user", "password")]})
+ pref = client.create(ref, conanfile=GenConanfile())
+ client.run("upload pkg/1.0@ --all --confirm")
+ # create new revision from recipe
+ client.create(ref, conanfile=GenConanfile().with_build_msg("new revision"))
+ client.run("upload pkg/1.0@ --all --confirm")
+ client.run("remove * -f")
+ client.run("download pkg/1.0@#{}".format(pref.ref.revision))
+ self.assertIn("pkg/1.0: Package installed {}".format(pref.id), client.out)
+ search_result = client.search("pkg/1.0@ --revisions")[0]
+ self.assertIn(pref.ref.revision, search_result["revision"])
+
+ @unittest.skipUnless(get_env("TESTING_REVISIONS_ENABLED", False), "Only revisions")
+ def download_revs_enabled_with_prev_test(self):
+ # https://github.com/conan-io/conan/issues/6106
+ ref = ConanFileReference.loads("pkg/1.0@user/channel")
+ client = TurboTestClient(default_server_user=True, revisions_enabled=True)
+ pref = client.create(ref, conanfile=GenConanfile())
+ client.run("upload pkg/1.0@user/channel --all --confirm")
+ client.create(ref, conanfile=GenConanfile().with_build_msg("new revision"))
+ client.run("upload pkg/1.0@user/channel --all --confirm")
+ client.run("remove * -f")
+ client.run("download pkg/1.0@user/channel#{}:{}#{}".format(pref.ref.revision,
+ pref.id,
+ pref.revision))
+ self.assertIn("pkg/1.0@user/channel: Package installed {}".format(pref.id), client.out)
+ search_result = client.search("pkg/1.0@user/channel --revisions")[0]
+ self.assertIn(pref.ref.revision, search_result["revision"])
+ search_result = client.search(
+ "pkg/1.0@user/channel#{}:{} --revisions".format(pref.ref.revision, pref.id))[0]
+ self.assertIn(pref.revision, search_result["revision"])
diff --git a/conans/test/utils/tools.py b/conans/test/utils/tools.py
index aa9e3f82ecb..65fdd689907 100644
--- a/conans/test/utils/tools.py
+++ b/conans/test/utils/tools.py
@@ -1236,7 +1236,8 @@ def export(self, ref, conanfile=GenConanfile(), args=None, assert_error=False):
def create(self, ref, conanfile=GenConanfile(), args=None, assert_error=False):
if conanfile:
self.save({"conanfile.py": conanfile})
- self.run("create . {} {} --json {}".format(ref.full_str(),
+ full_str = "{}@".format(ref.full_str()) if not ref.user else ref.full_str()
+ self.run("create . {} {} --json {}".format(full_str,
args or "", self.tmp_json_name),
assert_error=assert_error)
rrev = self.cache.package_layout(ref).recipe_revision()
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-6251@b5a8a7c
|
conan-io/conan
|
Python
| 6,251
|
add unknown package-ID in build-order
|
Changelog: Bugfix: Include "Package ID Unknown" nodes in ``conan graph build-order``, as they need to be processed in that order.
Docs: Omit
Close #6232
|
2019-12-18T00:17:44Z
|
[bug] Wrong conan graph build-order in package_revision_mode
### Environment Details
* Operating System: Red Hat Enterprise Linux 7.4
* Compiler: GCC 4.8.5
* Conan version: 1.21.0
* Python version: 3.7.1
### Steps to reproduce
Assumptions:
- Cache is clean; no binary packages exist so the expectation is a full build on the CI server will be necessary.
- libA depends on libC
- libB has no dependencies
- libC has no dependencies
- appX depends on libA and libB
Steps:
1. conan export libA ...
2. conan export libB ...
3. conan export libC ...
4. conan export appX ...
5. conan graph lock appX --lockfile appX.lock
6. conan build-order --build missing appX.lock
Package libA shows up as Unknown
Package libB shows up as Missing
Package libC shows up as Missing
Package appX shows up as Unknown
The problem is that the conan graph build-order output excludes libA since it is "Unknown". I expect that of appX, I suppose, since it's the application whose dependencies I'm trying to determine need building, but not of libA.
Thanks
|
Hi @radonish
Sorry for the delay in responding this.
I have been having a look at this. First, a quick tip: at the moment you should also use ``--build=missing`` in the ``graph lock`` command, i.e. ``conan graph lock appX --build=missing``. Otherwise the lock might not work as expected.
Then, the behavior is correct. It is doing exactly what it is expected to do. "Unknown ID" nodes are not in the list of "to build" nodes because we don't know yet if they need to be built or not. It is possible that they don't. With ``package_revision_mode``, the dependency resolution is exact: if, after evaluating the package ID, a binary is found, then there is really no need to re-build it.
The way package_revision_mode and build-order are intended to work is incremental, which by the way can also be more resource efficient:
- Compute the build-order
- Take the first level, fire those in parallel
- The moment any of those parallel jobs finish and some computing resources are available, compute the updated lock and **recompute** the build-order. This recomputation will also compute the final package ID that will be used instead of the Unknown one.
- Take that build order and repeat, making sure not to relaunch jobs that are already running.
In this way, the utilization of resources is more effective. Imagine that the first level has 10 packages, 9 header-only libraries and one very heavy to build static library. Until all 10 finish, more builds cannot be launched, so 9 jobs will sit unused until the static library finishes. Packages depending on the header-only libraries could start as soon as those finish.
That said, I think it might be possible to add those to the output of build-order (without the package ID). I'll give it a try.
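To make the incremental flow described above concrete, here is a hedged sketch of a CI driver loop. The `conan graph build-order ... --json` invocation matches the CLI used in the tests of this record; `schedule_build` and `wait_for_any` are assumed CI-side helpers, not Conan APIs:
```python
import json
import subprocess

def compute_build_order(lockfile):
    # Writes the ordered levels of [node_id, package_reference] pairs to bo.json
    subprocess.check_call(["conan", "graph", "build-order", lockfile,
                           "--build=missing", "--json=bo.json"])
    with open("bo.json") as f:
        return json.load(f)

def incremental_ci(lockfile, schedule_build, wait_for_any):
    running = set()
    while True:
        levels = compute_build_order(lockfile)
        if not levels and not running:
            break  # nothing left to build and nothing in flight
        # Fire every node of the first level that is not already building
        for _node_id, pref in (levels[0] if levels else []):
            if pref not in running:
                running.add(pref)
                schedule_build(pref, lockfile)
        # When any job finishes, take its updated lockfile and recompute the order;
        # "Unknown" package IDs become concrete once their dependencies are built
        finished_pref, lockfile = wait_for_any(running)
        running.discard(finished_pref)
```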
|
[
{
"body": "### Environment Details\r\n * Operating System: Red Hat Enterprise Linux 7.4\r\n * Compiler: GCC 4.8.5\r\n * Conan version: 1.21.0\r\n * Python version: 3.7.1\r\n\r\n### Steps to reproduce\r\nAssumptions:\r\n- Cache is clean; no binary packages exist so the expectation is a full build on the CI server will be necessary.\r\n- libA depends on libC\r\n- libB has no dependencies\r\n- libC has no dependencies\r\n- appX depends on libA and libB\r\n\r\nSteps:\r\n1. conan export libA ...\r\n2. conan export libB ...\r\n3. conan export libC ...\r\n4. conan export appX ...\r\n5. conan graph lock appX --lockfile appX.lock\r\n6. conan build-order --build missing appX.lock\r\n\r\nPackage libA shows up as Unknown\r\nPackage libB shows up as Missing\r\nPackage libC shows up as Missing\r\nPackage appX shows up as Unknown\r\n\r\nThe problem is that the conan graph build-order output excludes libA since it is \"Unknown\". I expect that of appX, I suppose, since it's the application whose dependencies I'm trying to determine need building - but not of libA.\r\n\r\nThanks",
"number": 6232,
"title": "[bug] Wrong conan graph build-order in package_revision_mode"
}
] |
69bd779799f02797431b9f2c2acf2bccbae182bf
|
{
"head_commit": "b5a8a7ce0c7f0d21728b642510f490a0a5383e4e",
"head_commit_message": "add unknown in build-order",
"patch_to_review": "diff --git a/conans/client/graph/graph.py b/conans/client/graph/graph.py\nindex ce1da041a31..92593502ca9 100644\n--- a/conans/client/graph/graph.py\n+++ b/conans/client/graph/graph.py\n@@ -276,7 +276,7 @@ def new_build_order(self):\n for level in reversed(levels):\n new_level = []\n for n in level:\n- if n.binary == BINARY_BUILD and n.pref not in total_prefs:\n+ if n.binary in (BINARY_UNKNOWN, BINARY_BUILD) and n.pref not in total_prefs:\n new_level.append((n.id, n.pref.copy_clear_prev()))\n total_prefs.add(n.pref)\n if new_level:\ndiff --git a/conans/test/functional/graph_lock/graph_lock_test.py b/conans/test/functional/graph_lock/graph_lock_test.py\nindex 69db873fb81..03dce1b66c4 100644\n--- a/conans/test/functional/graph_lock/graph_lock_test.py\n+++ b/conans/test/functional/graph_lock/graph_lock_test.py\n@@ -619,6 +619,39 @@ def consumer_build_order_test(self):\n client.run(\"graph build-order conan.lock --build=missing\")\n self.assertIn(\"test4/0.1\", client.out)\n \n+ def package_revision_mode_build_order_test(self):\n+ # https://github.com/conan-io/conan/issues/6232\n+ client = TestClient()\n+ client.run(\"config set general.default_package_id_mode=package_revision_mode\")\n+ client.save({\"conanfile.py\": GenConanfile()})\n+ client.run(\"export . libb/0.1@\")\n+ client.run(\"export . libc/0.1@\")\n+ client.save({\"conanfile.py\": GenConanfile().with_require_plain(\"libc/0.1\")})\n+ client.run(\"export . liba/0.1@\")\n+ client.save({\"conanfile.py\": GenConanfile().with_require_plain(\"liba/0.1\")\n+ .with_require_plain(\"libb/0.1\")})\n+ client.run(\"export . app/0.1@\")\n+\n+ client.run(\"graph lock app/0.1@ --build=missing\")\n+ client.run(\"graph build-order . --build=missing --json=bo.json\")\n+ self.assertIn(\"app/0.1:Package_ID_unknown - Unknown\", client.out)\n+ self.assertIn(\"liba/0.1:Package_ID_unknown - Unknown\", client.out)\n+ self.assertIn(\"libb/0.1:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Build\", client.out)\n+ self.assertIn(\"libc/0.1:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Build\", client.out)\n+ bo = client.load(\"bo.json\")\n+ bo = json.loads(bo)\n+ libc_level = bo[0]\n+ self.assertEqual(\"3\", libc_level[0][0])\n+ self.assertIn(\"libc/0.1\", libc_level[0][1])\n+ liba_level = bo[1]\n+ self.assertEqual(\"2\", liba_level[0][0])\n+ self.assertIn(\"liba/0.1\", liba_level[0][1])\n+ self.assertEqual(\"4\", liba_level[1][0])\n+ self.assertIn(\"libb/0.1\", liba_level[1][1])\n+ libc_level = bo[2]\n+ self.assertEqual(\"1\", libc_level[0][0])\n+ self.assertIn(\"app/0.1\", libc_level[0][1])\n+\n \n class GraphLockWarningsTestCase(unittest.TestCase):\n \n"
}
|
[
{
"diff_hunk": "@@ -619,6 +619,39 @@ def consumer_build_order_test(self):\n client.run(\"graph build-order conan.lock --build=missing\")\n self.assertIn(\"test4/0.1\", client.out)\n \n+ def package_revision_mode_build_order_test(self):\n+ # https://github.com/conan-io/conan/issues/6232\n+ client = TestClient()\n+ client.run(\"config set general.default_package_id_mode=package_revision_mode\")\n+ client.save({\"conanfile.py\": GenConanfile()})\n+ client.run(\"export . libb/0.1@\")\n+ client.run(\"export . libc/0.1@\")\n+ client.save({\"conanfile.py\": GenConanfile().with_require_plain(\"libc/0.1\")})\n+ client.run(\"export . liba/0.1@\")\n+ client.save({\"conanfile.py\": GenConanfile().with_require_plain(\"liba/0.1\")\n+ .with_require_plain(\"libb/0.1\")})\n+ client.run(\"export . app/0.1@\")\n+\n+ client.run(\"graph lock app/0.1@ --build=missing\")\n+ client.run(\"graph build-order . --build=missing --json=bo.json\")\n+ self.assertIn(\"app/0.1:Package_ID_unknown - Unknown\", client.out)\n+ self.assertIn(\"liba/0.1:Package_ID_unknown - Unknown\", client.out)\n+ self.assertIn(\"libb/0.1:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Build\", client.out)\n+ self.assertIn(\"libc/0.1:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Build\", client.out)\n+ bo = client.load(\"bo.json\")\n+ bo = json.loads(bo)",
"line": null,
"original_line": 642,
"original_start_line": null,
"path": "conans/test/functional/graph_lock/graph_lock_test.py",
"start_line": null,
"text": "@user1:\nIMO it would we easier to read if we compare the full JSON, otherwise, the following asserts are quite hard to understand.\r\n\r\nSomething like:\r\n\r\n```\r\nself.assertEqual(sorted(bo.items()), sorted(json.loads(\"\"\"\r\n[\r\n [\r\n [\"3\",\"libc/0.1#f3367e0e7d170aa12abccb175fee5f97:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\"]\r\n ], \r\n [\r\n [\"2\",\"liba/0.1#7086607aa6efbad8e2527748e3ee8237:Package_ID_unknown\"],\r\n [\"4\",\"libb/0.1#f3367e0e7d170aa12abccb175fee5f97:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9\"]\r\n ],\r\n [\r\n [\"1\",\"app/0.1#7742ee9e2f19af4f9ed7619f231ca871:Package_ID_unknown\"]\r\n ]\r\n]\r\n\"\"\")))\r\n```"
}
] |
71b319cbec1c68ae826cf9688b83da694da8d857
|
diff --git a/conans/client/graph/graph.py b/conans/client/graph/graph.py
index ce1da041a31..92593502ca9 100644
--- a/conans/client/graph/graph.py
+++ b/conans/client/graph/graph.py
@@ -276,7 +276,7 @@ def new_build_order(self):
for level in reversed(levels):
new_level = []
for n in level:
- if n.binary == BINARY_BUILD and n.pref not in total_prefs:
+ if n.binary in (BINARY_UNKNOWN, BINARY_BUILD) and n.pref not in total_prefs:
new_level.append((n.id, n.pref.copy_clear_prev()))
total_prefs.add(n.pref)
if new_level:
diff --git a/conans/test/functional/graph_lock/graph_lock_test.py b/conans/test/functional/graph_lock/graph_lock_test.py
index a323077c184..48c94ab3ad8 100644
--- a/conans/test/functional/graph_lock/graph_lock_test.py
+++ b/conans/test/functional/graph_lock/graph_lock_test.py
@@ -619,6 +619,40 @@ def consumer_build_order_test(self):
client.run("graph build-order conan.lock --build=missing")
self.assertIn("test4/0.1", client.out)
+ def package_revision_mode_build_order_test(self):
+ # https://github.com/conan-io/conan/issues/6232
+ client = TestClient()
+ client.run("config set general.default_package_id_mode=package_revision_mode")
+ client.save({"conanfile.py": GenConanfile()})
+ client.run("export . libb/0.1@")
+ client.run("export . libc/0.1@")
+ client.save({"conanfile.py": GenConanfile().with_require_plain("libc/0.1")})
+ client.run("export . liba/0.1@")
+ client.save({"conanfile.py": GenConanfile().with_require_plain("liba/0.1")
+ .with_require_plain("libb/0.1")})
+ client.run("export . app/0.1@")
+
+ client.run("graph lock app/0.1@ --build=missing")
+ client.run("graph build-order . --build=missing --json=bo.json")
+ self.assertIn("app/0.1:Package_ID_unknown - Unknown", client.out)
+ self.assertIn("liba/0.1:Package_ID_unknown - Unknown", client.out)
+ self.assertIn("libb/0.1:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Build", client.out)
+ self.assertIn("libc/0.1:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 - Build", client.out)
+ bo = client.load("bo.json")
+ build_order = json.loads(bo)
+ expected = [
+ # First level
+ [['3',
+ 'libc/0.1#f3367e0e7d170aa12abccb175fee5f97:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9']],
+ # second level
+ [['2', 'liba/0.1#7086607aa6efbad8e2527748e3ee8237:Package_ID_unknown'],
+ ['4',
+ 'libb/0.1#f3367e0e7d170aa12abccb175fee5f97:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9']],
+ # last level to build
+ [['1', 'app/0.1#7742ee9e2f19af4f9ed7619f231ca871:Package_ID_unknown']]
+ ]
+ self.assertEqual(build_order, expected)
+
class GraphLockWarningsTestCase(unittest.TestCase):
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-6059@34153bc
|
conan-io/conan
|
Python
| 6,059
|
Fixes #6044 Improve error with malformed settings yml
|
Changelog: Fix: Raise a meaningful error when the `settings.yml` file is invalid
Docs: omit
Fixes #6044
|
2019-11-12T08:50:13Z
|
[ux] [bug] Improve error raised when 'settings.yml' cannot be parsed
If the file `settings.yml` is invalid, Conan raises an ugly error with all the traceback.
---
How to reproduce:
* Edit `settings.yml` and make it an invalid YAML
* Run something like `conan create ....`:
```
⇒ conan create dep1
Traceback (most recent call last):
File "/Users/jgsogo/dev/conan/conan/conans/client/command.py", line 1911, in run
method(args[0][1:])
File "/Users/jgsogo/dev/conan/conan/conans/client/command.py", line 355, in create
lockfile=args.lockfile)
.... lots of traceback ...
yaml.scanner.ScannerError: while scanning a simple key
in "<unicode string>", line 14, column 1:
os dasf:ffs
^
could not find expected ':'
in "<unicode string>", line 15, column 12:
Windows:
^
ERROR: while scanning a simple key
in "<unicode string>", line 14, column 1:
os dasf:ffs
^
could not find expected ':'
in "<unicode string>", line 15, column 12:
Windows:
^
```
---
I would expect something easier like: `File 'settings.yml' has invalid YAML format / cannot be parsed / error reading....` and, if possible, include the line and column information.
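For illustration only (this is not the actual Conan code path, which is shown in the patch below), catching the parser error and surfacing the line and column could look roughly like this; PyYAML only attaches `problem_mark` to marked errors, hence the `getattr`:
```python
import yaml
from conans.errors import ConanException

def load_settings(text):
    try:
        return yaml.safe_load(text) or {}
    except yaml.YAMLError as exc:
        mark = getattr(exc, "problem_mark", None)
        where = " (line %d, column %d)" % (mark.line + 1, mark.column + 1) if mark else ""
        raise ConanException("Invalid settings.yml format%s: %s" % (where, exc))
```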
|
I'll take a look.
|
[
{
"body": "If the file `settings.yml` is invalid, Conan raises an ugly error with all the traceback.\r\n\r\n---\r\n\r\nHow to reproduce:\r\n * Edit `settings.yml` and make it an invalid YAML\r\n * Run something like `conan create ....`:\r\n\r\n```\r\n⇒ conan create dep1 \r\nTraceback (most recent call last):\r\n File \"/Users/jgsogo/dev/conan/conan/conans/client/command.py\", line 1911, in run\r\n method(args[0][1:])\r\n File \"/Users/jgsogo/dev/conan/conan/conans/client/command.py\", line 355, in create\r\n lockfile=args.lockfile)\r\n \r\n.... lots of traceback ...\r\n\r\nyaml.scanner.ScannerError: while scanning a simple key\r\n in \"<unicode string>\", line 14, column 1:\r\n os dasf:ffs \r\n ^\r\ncould not find expected ':'\r\n in \"<unicode string>\", line 15, column 12:\r\n Windows:\r\n ^\r\n\r\nERROR: while scanning a simple key\r\n in \"<unicode string>\", line 14, column 1:\r\n os dasf:ffs \r\n ^\r\ncould not find expected ':'\r\n in \"<unicode string>\", line 15, column 12:\r\n Windows:\r\n ^\r\n```\r\n\r\n---\r\n\r\nI would expect something easier like: `File 'settings.yml' has invalid YAML format/ cannot be parsed / error reading....` and, if possible, add the information with the line and column.",
"number": 6044,
"title": "[ux] [bug] Improve error raised when 'settings.yml' cannot be parsed"
}
] |
c7d09b6148edac5b55c564160406c2268bbb6d7b
|
{
"head_commit": "34153bcd641230c0db8cd7b0ef9f9808851b8a02",
"head_commit_message": "Fixes #6044 Improve error raised when 'settings.yml' cannot be parsed",
"patch_to_review": "diff --git a/conans/model/settings.py b/conans/model/settings.py\nindex 2815d96635d..0835d7a8381 100644\n--- a/conans/model/settings.py\n+++ b/conans/model/settings.py\n@@ -233,7 +233,10 @@ def copy_values(self):\n \n @staticmethod\n def loads(text):\n- return Settings(yaml.safe_load(text) or {})\n+ try:\n+ return Settings(yaml.safe_load(text) or {})\n+ except yaml.YAMLError as ye:\n+ raise ConanException(\"settings:invalid YAML format:%s\" % str(ye))\n \n def validate(self):\n for field in self.fields:\n"
}
|
[
{
"diff_hunk": "@@ -233,7 +233,10 @@ def copy_values(self):\n \n @staticmethod\n def loads(text):\n- return Settings(yaml.safe_load(text) or {})\n+ try:\n+ return Settings(yaml.safe_load(text) or {})\n+ except yaml.YAMLError as ye:\n+ raise ConanException(\"settings:invalid YAML format:%s\" % str(ye))",
"line": null,
"original_line": 239,
"original_start_line": null,
"path": "conans/model/settings.py",
"start_line": null,
"text": "@user1:\n\r\n```suggestion\r\n raise ConanException(\"Invalid settings.yml format:%s\" % str(ye))\r\n```\n\n@user2:\nJust an extra space to the previous suggestion\r\n\r\n```suggestion\r\n raise ConanException(\"Invalid settings.yml format: %s\" % str(ye))\r\n```"
}
] |
1e9f50e18d91fd322380874a569c798fac7eab94
|
diff --git a/conans/model/settings.py b/conans/model/settings.py
index 2815d96635d..6b907572785 100644
--- a/conans/model/settings.py
+++ b/conans/model/settings.py
@@ -233,7 +233,10 @@ def copy_values(self):
@staticmethod
def loads(text):
- return Settings(yaml.safe_load(text) or {})
+ try:
+ return Settings(yaml.safe_load(text) or {})
+ except (yaml.YAMLError, AttributeError) as ye:
+ raise ConanException("Invalid settings.yml format: {}".format(ye))
def validate(self):
for field in self.fields:
diff --git a/conans/test/functional/configuration/invalid_settings_test.py b/conans/test/functional/configuration/invalid_settings_test.py
new file mode 100644
index 00000000000..8dd30e2f622
--- /dev/null
+++ b/conans/test/functional/configuration/invalid_settings_test.py
@@ -0,0 +1,28 @@
+import os
+import textwrap
+import unittest
+
+from conans.test.utils.tools import TestClient
+
+
+class SettingsLoadTestCase(unittest.TestCase):
+ def test_invalid_settings(self):
+ client = TestClient()
+ client.save({os.path.join(client.cache_folder, 'settings.yml'): """your buggy file"""})
+ client.run("new -b hello/1.0")
+ client.run("install .", assert_error=True)
+ self.assertIn("ERROR: Invalid settings.yml format", client.out)
+
+ def test_invalid_yaml(self):
+ client = TestClient()
+ client.save({os.path.join(client.cache_folder, 'settings.yml'):
+ textwrap.dedent("""
+ Almost:
+ - a
+ - valid
+ yaml
+ """)})
+ client.run("new -b hello/1.0")
+ client.run("install .", assert_error=True)
+ self.assertIn("ERROR: Invalid settings.yml format: while parsing a block mapping",
+ client.out)
|
{
"difficulty": "low",
"estimated_review_effort": 2,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-5613@4b64731
|
conan-io/conan
|
Python
| 5,613
|
#5572 Retrieve Conan home directory
|
- Add `config home` command to retrieve Conan home dir
- Add functional tests to validate `config home`
Changelog: Feature: New `conan config home` command for getting Conan home directory
Docs: https://github.com/conan-io/docs/pull/1387
closes https://github.com/conan-io/conan/issues/5572
- [x] Refer to the issue that supports this Pull Request.
- [x] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.
- [x] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've followed the PEP8 style guides for Python code.
- [x] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.
<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>
|
2019-08-12T18:56:29Z
|
Add a command to retrieve the Conan home directory
A command to know where the `conan.conf` is (we call it the Conan home), which is evaluated using the `CONAN_USER_HOME` env variable or defaulted to `~/.conan`, would be useful.
Proposals:
* `conan config home`
* `conan home`
* `...`
|
Amazing, I would say this feature is relevant; many times I have seen users asking on Slack, "where is the conan home directory?".
When someone wants to know where the Conan home dir is, they need to guess: look for the env vars CONAN_USER_HOME and CONAN_USER_HOME_SHORT, or try ~/.conan or %USERPROFILE%/.conan.
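A tiny sketch of that guessing logic, assuming the documented Conan 1.x behavior where the cache lives in `$CONAN_USER_HOME/.conan` (falling back to the user home); `CONAN_USER_HOME_SHORT` only affects the Windows short-paths folder and is left out:
```python
import os

def guess_conan_home():
    base = os.environ.get("CONAN_USER_HOME") or os.path.expanduser("~")
    return os.path.join(base, ".conan")

print(guess_conan_home())  # e.g. /home/user/.conan when CONAN_USER_HOME is unset
```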
I would suggest `conan config get home`
Conan config is about global configurations for Conan, not the user. As `config` requires an action, `get` is the plausible sub-command. `home` is what we want, but there is no `home` property in conan.conf, which makes it a special keyword.
While I see that as a possible solution, I also see confusion in mixing the Conan config file with reserved keywords. Maybe a new sub-command could fix this problem, but creating a new sub-command (e.g. global) only to handle a reserved keyword is excessive, I think.
The small problem I see with ``conan config get home`` is that you cannot do ``conan config set home``, so it is not symmetric.
I didn't think about it, good catch. Even if `conan config set home` was able to set CONAN_USER_HOME, it sounds wrong.
What about a new config sub-command?
- `conan config system home`
- `conan config global home`
- `conan config info home`
Or maybe follow Javier's suggestion: `conan home`
I think `conan config` is better for updates, in case we need to add more new properties in the future, but I can't think of any other right now; only home is global, anything else can be retrieved from conan.conf or an env var.
I think we should go with ``conan config home`` until we have further evidence that a new subcommand like ``system`` or ``global`` makes sense. I cannot think of anything that is not in the conan.conf, as the only thing necessary to define the location of conan.conf is the HOME.
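For quick reference, a minimal usage sketch of the API side of this feature; it assumes the `ConanAPIV1.factory()` entry point used in the functional tests and the `config_home()` method added by this PR, so treat it as illustrative rather than definitive:

```python
# Illustrative only: query the Conan home directory from Python.
# config_home() is the API counterpart of the `conan config home` CLI command
# discussed here; factory() is how the functional tests obtain an API instance.
from conans.client import conan_api

conan, _, _ = conan_api.ConanAPIV1.factory()
conan_home = conan.config_home()  # e.g. ~/.conan, or the CONAN_USER_HOME override
print(conan_home)
```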
|
[
{
"body": "A command to know where the `conan.conf` is (we call it Conan home) which is evaluated using the `CONAN_USER_HOME` env variable or defaulted to `~/.conan` would be useful.\r\n\r\nProposals:\r\n * `conan config home`\r\n * `conan home`\r\n * `...`",
"number": 5572,
"title": "Add a command to retrieve the Conan home directory"
}
] |
2dfadb17a66218226a5ace24d15267b0a3f3bee9
|
{
"head_commit": "4b64731e34a0ff4e2ac20275bc555a93f1217a65",
"head_commit_message": "#5572 Retrieve Conan home directory\n\n- Add `config home` command to retrieve Conan home dir\n- Add functional tests to validate `config home`\n\nSigned-off-by: Uilian Ries <[email protected]>",
"patch_to_review": "diff --git a/conans/client/command.py b/conans/client/command.py\nindex 0892380a71d..5b7b9177222 100644\n--- a/conans/client/command.py\n+++ b/conans/client/command.py\n@@ -508,11 +508,13 @@ def config(self, *args):\n subparsers = parser.add_subparsers(dest='subcommand', help='sub-command help')\n subparsers.required = True\n \n- rm_subparser = subparsers.add_parser('rm', help='Remove an existing config element')\n- set_subparser = subparsers.add_parser('set', help='Set a value for a configuration item')\n get_subparser = subparsers.add_parser('get', help='Get the value of configuration item')\n- install_subparser = subparsers.add_parser('install', help='install a full configuration '\n+ subparsers.add_parser('home', help='Retrieve the Conan home directory')\n+ install_subparser = subparsers.add_parser('install', help='Install a full configuration '\n 'from a local or remote zip file')\n+ rm_subparser = subparsers.add_parser('rm', help='Remove an existing config element')\n+ set_subparser = subparsers.add_parser('set', help='Set a value for a configuration item')\n+\n rm_subparser.add_argument(\"item\", help=\"Item to remove\")\n get_subparser.add_argument(\"item\", nargs=\"?\", help=\"Item to print\")\n set_subparser.add_argument(\"item\", help=\"'item=value' to set\")\n@@ -547,6 +549,8 @@ def config(self, *args):\n return self._conan.config_get(args.item)\n elif args.subcommand == \"rm\":\n return self._conan.config_rm(args.item)\n+ elif args.subcommand == \"home\":\n+ return self._conan.config_home()\n elif args.subcommand == \"install\":\n verify_ssl = get_bool_from_text(args.verify_ssl)\n return self._conan.config_install(args.item, verify_ssl, args.type, args.args,\ndiff --git a/conans/client/conan_api.py b/conans/client/conan_api.py\nindex 87d123d43db..10da0a9a479 100644\n--- a/conans/client/conan_api.py\n+++ b/conans/client/conan_api.py\n@@ -601,6 +601,11 @@ def config_install(self, path_or_url, verify_ssl, config_type=None, args=None,\n args=args,\n source_folder=source_folder, target_folder=target_folder)\n \n+ @api_method\n+ def config_home(self):\n+ self.app.out.info(self.cache_folder)\n+ return self.cache_folder\n+\n def _info_args(self, reference_or_path, install_folder, profile_names, settings, options, env,\n lockfile=None):\n cwd = get_cwd()\ndiff --git a/conans/test/functional/command/config_test.py b/conans/test/functional/command/config_test.py\nindex 34712041329..17b85cb1997 100644\n--- a/conans/test/functional/command/config_test.py\n+++ b/conans/test/functional/command/config_test.py\n@@ -3,6 +3,8 @@\n \n from conans.test.utils.tools import TestClient\n from conans.util.files import load\n+from conans.test.utils.test_files import temp_folder\n+from conans.client.tools import environment_append\n \n \n class ConfigTest(unittest.TestCase):\n@@ -90,3 +92,21 @@ def remove_envvar_test(self):\n def missing_subarguments_test(self):\n self.client.run(\"config\", assert_error=True)\n self.assertIn(\"ERROR: Exiting with code: 2\", self.client.out)\n+\n+ def test_config_home_default(self):\n+ self.client.run(\"config home\")\n+ self.assertIn(self.client.cache.cache_folder, self.client.out)\n+\n+ def test_config_home_custom_home_dir(self):\n+ cache_folder = os.path.join(temp_folder(), \"custom\")\n+ with environment_append({\"CONAN_USER_HOME\": cache_folder}):\n+ client = TestClient(cache_folder=cache_folder)\n+ client.run(\"config home\")\n+ self.assertIn(cache_folder, client.out)\n+\n+ def test_config_home_short_home_dir(self):\n+ cache_folder = 
os.path.join(temp_folder(), \"custom\")\n+ with environment_append({\"CONAN_USER_HOME_SHORT\": cache_folder}):\n+ client = TestClient(cache_folder=cache_folder)\n+ client.run(\"config home\")\n+ self.assertIn(cache_folder, client.out)\n"
}
|
[
{
"diff_hunk": "@@ -601,6 +601,11 @@ def config_install(self, path_or_url, verify_ssl, config_type=None, args=None,\n args=args,\n source_folder=source_folder, target_folder=target_folder)\n \n+ @api_method\n+ def config_home(self):\n+ self.app.out.info(self.cache_folder)",
"line": null,
"original_line": 606,
"original_start_line": null,
"path": "conans/client/conan_api.py",
"start_line": null,
"text": "@user1:\nActually it is more correct that the api returns the value, and the Command class is the one printing the output.\n\n@author:\nso we need to revisit `config get`, because it follows the same steps.\n\n@author:\nindeed I agree with you, but should I change `config get` ?\n\n@user1:\nNo, you can leave it, there will be also many other things that would need to be fixed exactly the same. Just wanted to say it to outline future direction of the conan_api.\n\n@author:\nokay, I gonna update config home only then\n\n@author:\ndone"
}
] |
d5a177fa8cd47c2e51eeb52eb0edcef787e30926
|
diff --git a/conans/client/command.py b/conans/client/command.py
index 0892380a71d..988c740ce74 100644
--- a/conans/client/command.py
+++ b/conans/client/command.py
@@ -508,11 +508,13 @@ def config(self, *args):
subparsers = parser.add_subparsers(dest='subcommand', help='sub-command help')
subparsers.required = True
- rm_subparser = subparsers.add_parser('rm', help='Remove an existing config element')
- set_subparser = subparsers.add_parser('set', help='Set a value for a configuration item')
get_subparser = subparsers.add_parser('get', help='Get the value of configuration item')
- install_subparser = subparsers.add_parser('install', help='install a full configuration '
+ subparsers.add_parser('home', help='Retrieve the Conan home directory')
+ install_subparser = subparsers.add_parser('install', help='Install a full configuration '
'from a local or remote zip file')
+ rm_subparser = subparsers.add_parser('rm', help='Remove an existing config element')
+ set_subparser = subparsers.add_parser('set', help='Set a value for a configuration item')
+
rm_subparser.add_argument("item", help="Item to remove")
get_subparser.add_argument("item", nargs="?", help="Item to print")
set_subparser.add_argument("item", help="'item=value' to set")
@@ -547,6 +549,10 @@ def config(self, *args):
return self._conan.config_get(args.item)
elif args.subcommand == "rm":
return self._conan.config_rm(args.item)
+ elif args.subcommand == "home":
+ conan_home = self._conan.config_home()
+ self._out.info(conan_home)
+ return conan_home
elif args.subcommand == "install":
verify_ssl = get_bool_from_text(args.verify_ssl)
return self._conan.config_install(args.item, verify_ssl, args.type, args.args,
diff --git a/conans/client/conan_api.py b/conans/client/conan_api.py
index 87d123d43db..4efa52637eb 100644
--- a/conans/client/conan_api.py
+++ b/conans/client/conan_api.py
@@ -601,6 +601,10 @@ def config_install(self, path_or_url, verify_ssl, config_type=None, args=None,
args=args,
source_folder=source_folder, target_folder=target_folder)
+ @api_method
+ def config_home(self):
+ return self.cache_folder
+
def _info_args(self, reference_or_path, install_folder, profile_names, settings, options, env,
lockfile=None):
cwd = get_cwd()
diff --git a/conans/test/functional/command/config_test.py b/conans/test/functional/command/config_test.py
index 34712041329..17b85cb1997 100644
--- a/conans/test/functional/command/config_test.py
+++ b/conans/test/functional/command/config_test.py
@@ -3,6 +3,8 @@
from conans.test.utils.tools import TestClient
from conans.util.files import load
+from conans.test.utils.test_files import temp_folder
+from conans.client.tools import environment_append
class ConfigTest(unittest.TestCase):
@@ -90,3 +92,21 @@ def remove_envvar_test(self):
def missing_subarguments_test(self):
self.client.run("config", assert_error=True)
self.assertIn("ERROR: Exiting with code: 2", self.client.out)
+
+ def test_config_home_default(self):
+ self.client.run("config home")
+ self.assertIn(self.client.cache.cache_folder, self.client.out)
+
+ def test_config_home_custom_home_dir(self):
+ cache_folder = os.path.join(temp_folder(), "custom")
+ with environment_append({"CONAN_USER_HOME": cache_folder}):
+ client = TestClient(cache_folder=cache_folder)
+ client.run("config home")
+ self.assertIn(cache_folder, client.out)
+
+ def test_config_home_short_home_dir(self):
+ cache_folder = os.path.join(temp_folder(), "custom")
+ with environment_append({"CONAN_USER_HOME_SHORT": cache_folder}):
+ client = TestClient(cache_folder=cache_folder)
+ client.run("config home")
+ self.assertIn(cache_folder, client.out)
diff --git a/conans/test/functional/conan_api/config.py b/conans/test/functional/conan_api/config.py
index 417d1f01743..12618c532ad 100644
--- a/conans/test/functional/conan_api/config.py
+++ b/conans/test/functional/conan_api/config.py
@@ -5,9 +5,15 @@
class ConfigTest(unittest.TestCase):
+ def setUp(self):
+ self.conan, _, _ = conan_api.ConanAPIV1.factory()
+
def config_rm_test(self):
- conan, _, _ = conan_api.ConanAPIV1.factory()
- conan.config_set("proxies.https", "http://10.10.1.10:1080")
- self.assertIn("proxies", conan._cache.config.sections())
- conan.config_rm('proxies')
- self.assertNotIn("proxies", conan._cache.config.sections())
+ self.conan.config_set("proxies.https", "http://10.10.1.10:1080")
+ self.assertIn("proxies", self.conan._cache.config.sections())
+ self.conan.config_rm('proxies')
+ self.assertNotIn("proxies", self.conan._cache.config.sections())
+
+ def test_config_home(self):
+ conan_home = self.conan.config_home()
+ self.assertEqual(self.conan.cache_folder, conan_home)
|
{
"difficulty": "low",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
}
|
conan-io__conan-5841@7f548b6
|
conan-io/conan
|
Python
| 5,841
|
Issue/5814 fix python_requires with short_paths enabled
|
Changelog: Bugfix: Use imported python requires' `short_path` value instead of the defined in the `conanfile` that imports it.
Docs: omit
Closes #5814
- [X] Refer to the issue that supports this Pull Request.
- [X] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.
- [X] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [X] I've followed the PEP8 style guides for Python code.
- [ ] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.
<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>
|
2019-09-30T14:58:39Z
|
Use of python_requires with enabled short_paths fails
In Windows conan fails when a recipe uses `python_requires` and `short_paths = True` at the same time.
To reproduce the issue, it's enough to just add `short_paths = True` into the consumer conanfile of the `python_requires` example.
Conan will end with
```
ERROR: Error while trying to get recipe sources for pyreq/version@user/channel. No remote defined
```
|
Hi @kdsx,
I was able to reproduce the issue when using `export_sources` in the `python_requires`
While we have a look at it, did you try to set the `short_paths = True` in the python requires as well?
Thanks a lot for the feedback
Hi @czoido,
I've already tried it. Unfortunately with no effect. Conan fails with the same message.
Hi!
I haven't been able to reproduce it, please @czoido share a unit test that can reproduce it.
That error message typically appears when a package is downloaded from a remote, then the remote is removed (and the necessary sources aren't there), so when the package needs the sources it fails. It is not directly related to short_paths, but to the remote removal. So more detailed steps to reproduce would help. Thanks!
In my case it happened for a package which exists only in the local cache and has never been uploaded to a remote server. Then, for simplicity, I reproduced it on the mentioned conan example. In contrast to the example, my package does not use `exports_sources`, only `exports`, which appear in the local cache in the same directory as `conanfile.py`, so it should not require any additional sources to download.
I use the latest conan version at the moment (1.18.5) on Windows 10. I doubt it's important, but I run conan from a cygwin environment; however, conan itself is installed by `pip` into a `venv` generated by a native (non-cygwin) version of Python 3.
I've looked into the code a bit and found that when conan installs the dependencies (including pyreq) it calls `_build_package` and then `complete_recipe_sources` for every dependency and for the package itself. At the very beginning it tries to find the sources folder and then retrieve it if it is not available. The problem is probably here:
```
sources_folder = cache.package_layout(ref, conanfile.short_paths).export_sources()
```
It requests the source folder for the package `ref` (which is `pyreq`), but provides the `short_paths` flag of the package `conanfile`, which is `consumer`. Since `pyreq` does not have the `short_paths` flag, its sources were exported to the normal location, but `sources_folder` here points to a short path which does not even exist.
What is interesting, when I specify `short_paths` for `pyreq` too, it still does not work! When I create the `pyreq` package, its `complete_recipe_sources` gets `conanfile.short_paths == False` and the sources are still exported to their normal (non-short) location. Furthermore, its "short" folder is created, but contains only `real_path.txt`, which points to the "long" folder. Thus, when `complete_recipe_sources` is called from the `consumer` context for `pyreq`, it still can't find the sources directory.
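To make the mix-up above concrete, here is a small self-contained sketch; the helper name and folder layout are hypothetical stand-ins for the `cache.package_layout(ref, conanfile.short_paths).export_sources()` call quoted earlier:

```python
# Hypothetical stand-in for cache.package_layout(ref, short_paths).export_sources():
# it only illustrates that the short_paths flag decides which folder is looked up.
def export_sources_folder(ref, short_paths):
    return ("short/" if short_paths else "data/") + ref


consumer_short_paths = True    # the consumer recipe declares short_paths = True
pyreq_short_paths = False      # pyreq was exported without short_paths

# Buggy lookup: the consumer's flag leaks in, pointing at a folder that was never created
print(export_sources_folder("pyreq/version@user/channel", consumer_short_paths))  # short/...
# Correct lookup: use the python_requires' own flag, which is what the patch does by
# passing python_require.conanfile instead of the consumer conanfile
print(export_sources_folder("pyreq/version@user/channel", pyreq_short_paths))     # data/...
```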
Thanks a lot for reporting the bug @kdsx
I have just made a PR that should fix the problem :)
|
[
{
"body": "In Windows conan fails when a recipe uses `python_requires` and `short_paths = True` at the same time.\r\n\r\nTo reproduce the issue, it's enough to just add `short_paths = True` into the consumer conanfile of the `python_requires` example.\r\n\r\nConan will end with\r\n```\r\nERROR: Error while trying to get recipe sources for pyreq/version@user/channel. No remote defined\r\n```",
"number": 5814,
"title": "Use of python_requires with enabled short_paths fails"
}
] |
b2cbb3f2bf65fdf3ccdf6cad1ea93eee80415bc7
|
{
"head_commit": "7f548b617c4b25736a33529a6b49e607fc06dca7",
"head_commit_message": "remove line",
"patch_to_review": "diff --git a/conans/client/installer.py b/conans/client/installer.py\nindex 1bd24f6aab7..dac9c92a717 100644\n--- a/conans/client/installer.py\n+++ b/conans/client/installer.py\n@@ -427,7 +427,7 @@ def _build_package(self, node, output, keep_build, remotes):\n assert python_require.ref.revision is not None, \\\n \"Installer should receive python_require.ref always\"\n complete_recipe_sources(self._remote_manager, self._cache,\n- conanfile, python_require.ref, remotes)\n+ python_require.conanfile, python_require.ref, remotes)\n \n builder = _PackageBuilder(self._cache, output, self._hook_manager, self._remote_manager)\n pref = builder.build_package(node, keep_build, self._recorder, remotes)\ndiff --git a/conans/test/functional/python_requires/python_requires_test.py b/conans/test/functional/python_requires/python_requires_test.py\nindex aa238d65553..b96e2e8dfbf 100644\n--- a/conans/test/functional/python_requires/python_requires_test.py\n+++ b/conans/test/functional/python_requires/python_requires_test.py\n@@ -576,6 +576,36 @@ class Lib(ConanFile):\n self.assertIn(\"Same python_requires with different versions not allowed for a conanfile\",\n t.out)\n \n+ def short_paths_test(self):\n+ # https://github.com/conan-io/conan/issues/5814\n+ client = TestClient(servers={\"default\": TestServer()},\n+ users={\"default\": [(\"lasote\", \"mypass\")]})\n+ conanfile = \"\"\"from conans import ConanFile\n+class MyConanfileBase(ConanFile):\n+ license = \"MyLicense\"\n+ author = \"[email protected]\"\n+ exports = \"*.txt\"\n+ exports_sources = \"*.h\"\n+ generators = \"cmake\"\n+ \"\"\"\n+ client.save({\"conanfile.py\": conanfile,\n+ \"file.h\": \"header\",\n+ \"other.txt\": \"text\"})\n+ client.run(\"create . Base/1.2@lasote/testing\")\n+\n+ reuse = \"\"\"from conans import python_requires\n+base = python_requires(\"Base/1.2@lasote/testing\")\n+class PkgTest(base.MyConanfileBase):\n+ short_paths = True\n+ name = \"consumer\"\n+ version = \"1.0.0\"\n+ def build(self):\n+ self.output.info(\"Package built successfully!\")\n+ \"\"\"\n+ client.save({\"conanfile.py\": reuse}, clean_first=True)\n+ client.run(\"create . lasote/testing\")\n+ self.assertIn(\"Package built successfully!\", client.out)\n+\n \n class PythonRequiresNestedTest(unittest.TestCase):\n \n"
}
|
[
{
"diff_hunk": "@@ -576,6 +576,36 @@ class Lib(ConanFile):\n self.assertIn(\"Same python_requires with different versions not allowed for a conanfile\",\n t.out)\n \n+ def short_paths_test(self):\n+ # https://github.com/conan-io/conan/issues/5814\n+ client = TestClient(servers={\"default\": TestServer()},\n+ users={\"default\": [(\"lasote\", \"mypass\")]})\n+ conanfile = \"\"\"from conans import ConanFile\n+class MyConanfileBase(ConanFile):\n+ license = \"MyLicense\"\n+ author = \"[email protected]\"\n+ exports = \"*.txt\"\n+ exports_sources = \"*.h\"\n+ generators = \"cmake\"\n+ \"\"\"\n+ client.save({\"conanfile.py\": conanfile,\n+ \"file.h\": \"header\",\n+ \"other.txt\": \"text\"})\n+ client.run(\"create . Base/1.2@lasote/testing\")\n+\n+ reuse = \"\"\"from conans import python_requires\n+base = python_requires(\"Base/1.2@lasote/testing\")\n+class PkgTest(base.MyConanfileBase):\n+ short_paths = True\n+ name = \"consumer\"\n+ version = \"1.0.0\"\n+ def build(self):",
"line": null,
"original_line": 602,
"original_start_line": null,
"path": "conans/test/functional/python_requires/python_requires_test.py",
"start_line": null,
"text": "@user1:\nDont need build method, just use the normal output."
},
{
"diff_hunk": "@@ -576,6 +576,36 @@ class Lib(ConanFile):\n self.assertIn(\"Same python_requires with different versions not allowed for a conanfile\",\n t.out)\n \n+ def short_paths_test(self):\n+ # https://github.com/conan-io/conan/issues/5814\n+ client = TestClient(servers={\"default\": TestServer()},",
"line": null,
"original_line": 581,
"original_start_line": null,
"path": "conans/test/functional/python_requires/python_requires_test.py",
"start_line": null,
"text": "@user1:\nuse TestClient(default_server_user=True)"
},
{
"diff_hunk": "@@ -576,6 +576,36 @@ class Lib(ConanFile):\n self.assertIn(\"Same python_requires with different versions not allowed for a conanfile\",\n t.out)\n \n+ def short_paths_test(self):\n+ # https://github.com/conan-io/conan/issues/5814\n+ client = TestClient(servers={\"default\": TestServer()},\n+ users={\"default\": [(\"lasote\", \"mypass\")]})\n+ conanfile = \"\"\"from conans import ConanFile",
"line": null,
"original_line": 583,
"original_start_line": null,
"path": "conans/test/functional/python_requires/python_requires_test.py",
"start_line": null,
"text": "@user1:\nKeep it simple, remove unused things:\r\n- license, author, generator\r\n\r\nUse textwrap.dedent() for cleaner layout"
}
] |
4a6069e608b0e9879db7ea51aac1a9b8525aca58
|
diff --git a/conans/client/installer.py b/conans/client/installer.py
index 1bd24f6aab7..dac9c92a717 100644
--- a/conans/client/installer.py
+++ b/conans/client/installer.py
@@ -427,7 +427,7 @@ def _build_package(self, node, output, keep_build, remotes):
assert python_require.ref.revision is not None, \
"Installer should receive python_require.ref always"
complete_recipe_sources(self._remote_manager, self._cache,
- conanfile, python_require.ref, remotes)
+ python_require.conanfile, python_require.ref, remotes)
builder = _PackageBuilder(self._cache, output, self._hook_manager, self._remote_manager)
pref = builder.build_package(node, keep_build, self._recorder, remotes)
diff --git a/conans/test/functional/python_requires/python_requires_test.py b/conans/test/functional/python_requires/python_requires_test.py
index aa238d65553..65a85fce683 100644
--- a/conans/test/functional/python_requires/python_requires_test.py
+++ b/conans/test/functional/python_requires/python_requires_test.py
@@ -576,6 +576,32 @@ class Lib(ConanFile):
self.assertIn("Same python_requires with different versions not allowed for a conanfile",
t.out)
+ def short_paths_test(self):
+ # https://github.com/conan-io/conan/issues/5814
+ client = TestClient(default_server_user=True)
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+ class MyConanfileBase(ConanFile):
+ exports = "*.txt"
+ exports_sources = "*.h"
+ """)
+ client.save({"conanfile.py": conanfile,
+ "file.h": "header",
+ "other.txt": "text"})
+ client.run("create . Base/1.2@lasote/testing")
+
+ reuse = textwrap.dedent("""
+ from conans import python_requires
+ base = python_requires("Base/1.2@lasote/testing")
+ class PkgTest(base.MyConanfileBase):
+ short_paths = True
+ name = "consumer"
+ version = "1.0.0"
+ """)
+ client.save({"conanfile.py": reuse}, clean_first=True)
+ client.run("create . lasote/testing")
+ self.assertIn("consumer/1.0.0@lasote/testing: Created package revision", client.out)
+
class PythonRequiresNestedTest(unittest.TestCase):
|
{
"difficulty": "medium",
"estimated_review_effort": 2,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-6052@ca23055
|
conan-io/conan
|
Python
| 6,052
|
Intel compiler POC using compatible ids feature
|
Changelog: Feature: Support for Intel compiler.
Docs: https://github.com/conan-io/docs/pull/1479
**This branch is the original @danimtb one, but updated and with conflicts resolved**
- Refactor of compatible packages (can be extracted to a different PR if needed); I've completely broken the previous experimental approach.
- Added helpers to the info so we can declare that a Visual package can be consumed with Intel, and the other way around.
- The settings model looks good, as dani suggested. The approach in the code provided by @ohanar used this model, looks reasonable, and could be added later to complete the Intel functionality in Conan.
#revisions: 1
Closes #5590
Related #5699
Supersedes #5626 and #5770 based on the compatible IDs feature.
|
2019-11-11T08:16:32Z
|
Add intel compiler to default settings.yml
Suggested:
```
intel:
version: ["17.0.0", "17.0.1", "17.0.2", "17.0.3", "17.0.4", "17.0.5", "17.0.6", "17.0.7", "17.0.8",
"18.0.0", "18.0.1", "18.0.2", "18.0.3", "18.0.4",
"19.0.0", "19.0.1", "19.0.2", "19.0.3"]
libcxx: [libstdc++, libstdc++11] # Linux only
runtime: [MD, MT, MTd, MDd] # Windows only
cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]
```
To investigate:
- Binary compatibility of update/patches: reported errors on 17.0.4->17.0.6
- Intel compiler also includes fortran, check if it makes sense regarding cppstd/libcxx
- libcxx/runtime in different OS, how to manage?
|
It seems the Intel C++ compiler is compatible with both GCC and Visual Studio, with some considerations:
- GCC support is declared to be compatible with "most versions" https://software.intel.com/en-us/cpp-compiler-developer-guide-and-reference-gcc-compatibility-and-interoperability without any other major consideration apart from optimization flags.
> C language object files created with the Intel® C++ Compiler are binary compatible with gcc* and C/C++ language library. You can use the Intel ® C++ Compiler or the gcc* compiler to pass object files to the linker.
- Visual Studio support is declared to be compatible with VS 2013, 2015 https://software.intel.com/en-us/cpp-compiler-developer-guide-and-reference-microsoft-compatibility and probably 2017 too as declared in the portability page https://software.intel.com/en-us/cpp-compiler-developer-guide-and-reference-overview-porting-from-the-microsoft-compiler-to-the-intel-c-compiler. However, many features like preprocessor directives or keywords are not supported and this will reduce the compatibility of this compiler with libraries developed for VS.
- As said here, Intel compiler only supports C++11 standard at most: https://software.intel.com/en-us/cpp-compiler-developer-guide-and-reference-conformance-to-the-c-c-standards but this seems to be outdated information, as it really supports features of C++17 (haven't found anything about C++20): https://software.intel.com/en-us/articles/c17-features-supported-by-intel-c-compiler
- Some of the supported features in the standard change in patch versions of the compiler (see for example the C++17 link above): The [Template argument deduction for class templates](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0522r0.html) was not supported in Intel 19.0.0 and it is in 19.0.1
--------
The Intel Fortran Compiler allows interoperability with C code (no libcxx) https://software.intel.com/en-us/fortran-compiler-developer-guide-and-reference-standard-fortran-and-c-interoperability and is compatible with Intel C++ Compiler and Visual Studio or GCC
On the Linux side one thing to consider is the version of libstdc++ that the Intel compiler will use.
By default the Intel compiler uses the standard library that it finds from the local GCC install.
If the libstdc++ version isn't tracked then it will be hard to ensure compatibility between builds labeled with the same settings.
It is also worth noting that the Intel build will normally be suitable for linking together with libraries built with GCC, as long as the same standard library version is used.
So thinking out loud a bit gives the below two proposals:
**Add gcc_version:**
intel:
version: ["17.0.0", "17.0.1", "17.0.2", "17.0.3", "17.0.4", "17.0.5", "17.0.6", "17.0.7", "17.0.8",
"18.0.0", "18.0.1", "18.0.2", "18.0.3", "18.0.4",
"19.0.0", "19.0.1", "19.0.2", "19.0.3"]
gcc_version: ["4.1", "4.4", "4.5", "4.6", "4.7", "4.8", "4.9",
"5", "5.1", "5.2", "5.3", "5.4", "5.5",
"6", "6.1", "6.2", "6.3", "6.4",
"7", "7.1", "7.2", "7.3",
"8", "8.1", "8.2", "8.3",
"9", "9.1"] # Linux only
libcxx: [libstdc++, libstdc++11] # Linux only
runtime: [MD, MT, MTd, MDd] # Windows only
cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]
**Treat existing compiler entry as source of libstdc++ and add other_compiler entry:**
gcc:
version: ["4.1", "4.4", "4.5", "4.6", "4.7", "4.8", "4.9",
"5", "5.1", "5.2", "5.3", "5.4", "5.5",
"6", "6.1", "6.2", "6.3", "6.4",
"7", "7.1", "7.2", "7.3",
"8", "8.1", "8.2", "8.3",
"9", "9.1"]
libcxx: [libstdc++, libstdc++11]
threads: [None, posix, win32] # Windows MinGW
exception: [None, dwarf2, sjlj, seh] # Windows MinGW
cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]
other_compiler:
None:
intel:
version: ["17.0.0", "17.0.1", "17.0.2", "17.0.3", "17.0.4", "17.0.5", "17.0.6", "17.0.7", "17.0.8",
"18.0.0", "18.0.1", "18.0.2", "18.0.3", "18.0.4",
"19.0.0", "19.0.1", "19.0.2", "19.0.3"]
FYI, we extensively use the intel compiler with conan, and its binary compatibility with gcc & msvc is very important to us (we have just under 100 packages, and about a third are built with the intel compiler). We ended up using yaml anchors in our settings.yml:
```yaml
compiler:
gcc: &gcc
version: ...
...
Visual Studio: &msvc
...
intel:
version: ["16.0", "17.0", "18.0", "19.0"]
base:
gcc:
<<: *gcc
threads: [None]
exception: [None]
Visual Studio:
<<: *msvc
toolset: [None]
```
Having both the intel compiler version as well as the "base" compiler version is important, as the intel compiler tries (with varying levels of success) to emulate the base compiler. Using anchors made it easy for us to keep our settings.yml in sync with upstream conan, and generally caused few headaches as there was a single source of truth for the libstdcxx and runtime settings.
Describing it in this fashion made it relatively easy to have packages built with the intel compiler share a package_id with that of the corresponding system compiler:
```python
def package_id(self):
if self.info.full_settings.compiler == "intel":
# Unfortunately assigning values is shallow
base = self.info.settings.compiler.base
self.info.settings.compiler = (
base
) # So now self.info.settings.compiler is basically just a string
# Deep copy the rest
for field, value in base.as_list():
tokens = field.split(".")
attr = self.info.settings.compiler
for token in tokens[:-1]:
attr = getattr(attr, token)
setattr(attr, tokens[-1], value)
```
(We don't currently have MacOS as a target platform, and I'm not sure exactly what compiler the intel compiler tries to emulate there, probably apple-clang.)
As a follow-up regarding fortran: we have a number of fortran packages that we manage with conan, all of which are built with ifort. I believe that gfortran provides binary compatibility with gcc as well; however, gfortran and ifort have their own runtimes, and so they might not be completely compatible with each other.
I would say that for now you should just focus on handling the C/C++ components of the intel compiler, and think about fortran support in a separate issue.
As an aside, we are using conan to manage packages for other languages as well. We found it easier to keep a single repository of packages, rather than a repository for each language that we use. Thanks to conan's unbiased build system approach, it has been straightforward to (for instance) integrate pip packages into conan (including ugly ones that depend upon C and fortran, like numpy).
Thanks a lot for the feedback! We are trying to gather the important bits to include the intel C++ compiler in the settings without making a mess of the _settings.yml_ file.
@peterSW thanks for pointing out the importance of tracking the version of the gcc compiler. Definitely, this seems like something we have to model. The settings structure proposed by @ohanar makes sense to me, and it tackles an important issue by separating the visual runtime from the gcc libcxx.
I also like the idea of letting the users implement the compatibility with the base compiler in the package ID. The good thing is that the information that the package has been created with the intel compiler will be preserved as metadata, but the ID will be compatible with gcc. Thanks for sharing your solution!
Regarding the Fortran compiler, I agree we should treat this as a different issue and maybe discuss your approach there 😄
Thanks all for the feedback! Really useful.
I think the proposal of @ohanar makes sense, especially if we consider that we could add a ``None`` base for those who want to keep the intel binaries as totally distinct binaries with their own package_id. Even the pieces of the package_id() could be built-in for the intel compiler.
My major question at this point is the intel version <-> other compiler version compatibility. With the presented approach you get that a package compiled with the Intel compiler will be usable from exactly one version of either gcc or msvc. Is this the general case? Is there a table or statement of this somewhere in the Intel docs? Wouldn't it be more general to have the ``base: gcc: version: None`` or some other mechanism to specify that it would be valid for any other compiler version?
> My major question at this point is the intel version <-> other compiler version compatibility. With the presented approach you get that a package compiled with the Intel compiler will be usable from exactly one version of either gcc or msvc. Is this the general case? Is there a table or statement of this somewhere in the Intel docs? Wouldn't it be more general to have the `base: gcc: version: None` or some other mechanism to specify that it would be valid for any other compiler version?
For most packages I would expect that the `base: gcc: version` will have a bigger significance for compatibility than `intel: version`. That is because the Intel compiler aims for compatibility with the "base" compiler and uses its headers and libraries. The best documentation on this I know about is here: https://software.intel.com/en-us/cpp-compiler-developer-guide-and-reference-gcc-compatibility-and-interoperability
I think GCC's manual entry on "ABI Policy and Guidelines" is also quite relevant:
https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html
> I think the proposal of @ohanar makes sense, especially if we consider that we could add a `None` base for those who want to keep the intel binaries as totally distinct binaries with their own package_id.
I don't really think this makes any sense: the intel compiler requires another compiler to already be installed on your system, and leverages that compiler while compiling. E.g. on Windows you get the following error if `cl.exe` is not in PATH:
```
Intel(R) C++ Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 19.0.4.245 Build 20190417
Copyright (C) 1985-2019 Intel Corporation. All rights reserved.
icl: error #10114: Microsoft Visual C++ not found in path
```
On Linux you get a similar error if gcc/g++ is not in PATH. If you don't mess with the package_id at all, you will already have binaries that are only compatible with the intel compiler, so I don't really see the need for having a `None` base (plus, as I mentioned, it is nonsensical).
I think the best way to handle ABI compatibility would be to add a couple of methods -- `ConanInfo.intel_compatible`, `ConanInfo.intel_incompatible` -- and decide on a default. IMO, the default should be ABI compatibility with the base compiler; there have only been a couple of exceptions to that rule in our usage.
Ok, understood, I didn't know that.
What I would like to model is the possibility for a package to have distinct binary packages for intel and gcc, not just one with a single package-ID that is compatible with both. We cannot force the compatibility without letting users opt out and declare that they want a real gcc binary and an intel one, and be able to consume and use both in some way; even if they are binary compatible, Conan should be able to manage them as different binaries. We need to think of some way to define this.
@memsharded I think that is perfectly viable by adding the following:
```python
class ConanInfo(object):
def __init__(...):
...
# default behaviour is for binaries built with the intel compiler to be
# compatible with the base compiler:
self.intel_compatible()
def intel_compatible(self):
# Basically what I put above in the package_id method
if self.full_settings.compiler != "intel":
return
# Unfortunately assigning values is shallow
self.settings.compiler = (
self.full_settings.compiler.base
) # So now self.settings.compiler is basically just a string
# Deep copy everything
for field, value in self.full_settings.compiler.base.as_list():
tokens = field.split(".")
attr = self.settings.compiler
for token in tokens[:-1]:
attr = getattr(attr, token)
setattr(attr, tokens[-1], value)
def intel_incompatible(self):
# Method to opt out of binary compatibility
if self.full_settings.compiler != "intel":
return
# Unfortunately assigning values is shallow
self.settings.compiler = (
self.full_settings.compiler
) # So now self.settings.compiler is basically just a string
# Deep copy everything
for field, value in self.full_settings.compiler.as_list():
tokens = field.split(".")
attr = self.settings.compiler
for token in tokens[:-1]:
attr = getattr(attr, token)
setattr(attr, tokens[-1], value)
```
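For orientation, a hedged sketch of the recipe-side usage this discussion eventually converges on; the method names `base_compatible()` and `parent_compatible()` are taken from the review comments and the reviewed patch, so treat the exact signatures as indicative rather than definitive:

```python
from conans import ConanFile


class Pkg(ConanFile):
    settings = "os", "compiler", "build_type", "arch"

    def package_id(self):
        # Sketch only: declare binary fallbacks between intel and its base compiler
        if self.settings.compiler == "intel":
            # An intel build may fall back to the binary of its base compiler
            compatible = self.info.clone()
            compatible.base_compatible()
            self.compatible_packages.append(compatible)
        elif self.settings.compiler == "Visual Studio":
            # A Visual Studio binary may also be offered to intel consumers
            compatible = self.info.clone()
            compatible.parent_compatible(compiler="intel", version="18")
            self.compatible_packages.append(compatible)
```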
|
[
{
"body": "Suggested:\r\n\r\n```\r\nintel:\r\n version: [\"17.0.0\", \"17.0.1\", \"17.0.2\", \"17.0.3\", \"17.0.4\", \"17.0.5\", \"17.0.6\", \"17.0.7\", \"17.0.8\",\r\n \"18.0.0\", \"18.0.1\", \"18.0.2\", \"18.0.3\", \"18.0.4\",\r\n \"19.0.0\", \"19.0.1\", \"19.0.2\", \"19.0.3\"]\r\n libcxx: [libstdc++, libstdc++11] # Linux only\r\n runtime: [MD, MT, MTd, MDd] # Windows only\r\n cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]\r\n```\r\n\r\nTo investigate:\r\n- Binary compatibility of update/patches: reported erorrs on 17.0.4->17.0.6\r\n- Intel compiler also includes fortran, check if it makes sense regarding cppstd/libcxx\r\n- libcxx/runtime in different OS, how to manage?\r\n",
"number": 5590,
"title": "Add intel compiler to default settings.yml"
}
] |
9e348424d85fa22b41c93cec4b0a113f25fef24d
|
{
"head_commit": "ca230557717708529c096f84f8ce894d491cd7c0",
"head_commit_message": "Fix py2",
"patch_to_review": "diff --git a/conans/__init__.py b/conans/__init__.py\nindex 8bd5cbf3a7b..7f0450385be 100644\n--- a/conans/__init__.py\n+++ b/conans/__init__.py\n@@ -9,7 +9,6 @@\n from conans.model.conan_file import ConanFile\n from conans.model.options import Options\n from conans.model.settings import Settings\n-from conans.model.compatible_package import CompatiblePackage\n from conans.util.files import load\n \n # complex_search: With ORs and not filtering by not restricted settings\ndiff --git a/conans/client/conf/__init__.py b/conans/client/conf/__init__.py\nindex 4280a4f80b5..09105e1c0b8 100644\n--- a/conans/client/conf/__init__.py\n+++ b/conans/client/conf/__init__.py\n@@ -57,7 +57,7 @@\n version: [\"5.10\", \"5.11\", \"5.12\", \"5.13\", \"5.14\"]\n threads: [None, posix]\n libcxx: [libCstd, libstdcxx, libstlport, libstdc++]\n- gcc:\n+ gcc: &gcc\n version: [\"4.1\", \"4.4\", \"4.5\", \"4.6\", \"4.7\", \"4.8\", \"4.9\",\n \"5\", \"5.1\", \"5.2\", \"5.3\", \"5.4\", \"5.5\",\n \"6\", \"6.1\", \"6.2\", \"6.3\", \"6.4\",\n@@ -68,7 +68,7 @@\n threads: [None, posix, win32] # Windows MinGW\n exception: [None, dwarf2, sjlj, seh] # Windows MinGW\n cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]\n- Visual Studio:\n+ Visual Studio: &visual_studio\n runtime: [MD, MT, MTd, MDd]\n version: [\"8\", \"9\", \"10\", \"11\", \"12\", \"14\", \"15\", \"16\"]\n toolset: [None, v90, v100, v110, v110_xp, v120, v120_xp,\n@@ -86,6 +86,15 @@\n version: [\"5.0\", \"5.1\", \"6.0\", \"6.1\", \"7.0\", \"7.3\", \"8.0\", \"8.1\", \"9.0\", \"9.1\", \"10.0\", \"11.0\"]\n libcxx: [libstdc++, libc++]\n cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]\n+ intel:\n+ version: [\"11\", \"12\", \"13\", \"14\", \"15\", \"16\", \"17\", \"18\", \"19\"]\n+ base:\n+ gcc:\n+ <<: *gcc\n+ threads: [None]\n+ exception: [None]\n+ Visual Studio:\n+ <<: *visual_studio\n qcc:\n version: [\"4.4\", \"5.4\"]\n libcxx: [cxx, gpp, cpp, cpp-ne, accp, acpp-ne, ecpp, ecpp-ne]\ndiff --git a/conans/client/graph/graph_binaries.py b/conans/client/graph/graph_binaries.py\nindex cae69b27f28..b1850ce5373 100644\n--- a/conans/client/graph/graph_binaries.py\n+++ b/conans/client/graph/graph_binaries.py\n@@ -184,8 +184,8 @@ def _evaluate_node(self, node, build_mode, update, remotes):\n % (node.package_id, package_id))\n node._package_id = package_id\n # So they are available in package_info() method\n- node.conanfile.settings = compatible_package.settings\n- node.conanfile.options = compatible_package.options\n+ node.conanfile.settings.values = compatible_package.settings\n+ node.conanfile.options.values = compatible_package.options\n break\n else:\n node.binary = BINARY_MISSING\ndiff --git a/conans/model/compatible_package.py b/conans/model/compatible_package.py\ndeleted file mode 100644\nindex ae38fdaac53..00000000000\n--- a/conans/model/compatible_package.py\n+++ /dev/null\n@@ -1,36 +0,0 @@\n-class CompatiblePackage(object):\n-\n- def __init__(self, conanfile):\n- self._conanfile = conanfile\n- self._settings = None\n- self._options = None\n- self._requires = None\n-\n- @property\n- def settings(self):\n- if not self._settings:\n- self._settings = self._conanfile.settings.copy()\n- return self._settings\n-\n- @property\n- def options(self):\n- if not self._options:\n- self._options = self._conanfile.options.copy()\n- return self._options\n-\n- @property\n- def requires(self):\n- if not self._requires:\n- self._requires = self._conanfile.info.requires.copy()\n- return self._requires\n-\n- def 
package_id(self):\n- info = self._conanfile.info.copy()\n- if self._settings:\n- info.settings = self._settings.values\n- if self._options:\n- info.options = self._options.values\n- info.options.clear_indirect()\n- if self._requires:\n- info.requires = self._requires\n- return info.package_id()\ndiff --git a/conans/model/info.py b/conans/model/info.py\nindex d6411a0c9ce..dad1c59442b 100644\n--- a/conans/model/info.py\n+++ b/conans/model/info.py\n@@ -315,7 +315,6 @@ def create(settings, options, prefs_direct, prefs_indirect, default_package_id_m\n result.vs_toolset_compatible()\n result.discard_build_settings()\n result.default_std_matching()\n-\n return result\n \n @staticmethod\n@@ -365,6 +364,13 @@ def indent(text):\n \n return '\\n'.join(result) + \"\\n\"\n \n+ def clone(self):\n+ q = self.copy()\n+ q.full_settings = self.full_settings.copy()\n+ q.full_options = self.full_options.copy()\n+ q.full_requires = _PackageReferenceList.loads(self.full_requires.dumps())\n+ return q\n+\n def __eq__(self, other):\n \"\"\" currently just for testing purposes\n \"\"\"\n@@ -402,7 +408,6 @@ def package_id(self):\n if requires_sha is None:\n return PACKAGE_ID_UNKNOWN\n result.append(requires_sha)\n-\n package_id = sha1('\\n'.join(result).encode())\n return package_id\n \n@@ -481,4 +486,30 @@ def shared_library_package_id(self):\n for dep_name in self.requires.pkg_names:\n dep_options = self.full_options[dep_name]\n if \"shared\" not in dep_options or not self.full_options[dep_name].shared:\n- self.requires[dep_name].package_revision_mode()\n\\ No newline at end of file\n+ self.requires[dep_name].package_revision_mode()\n+\n+ def base_compiler_compatible(self, intel_compiler_version):\n+ \"\"\"If a built package for Intel has to be compatible for a Visual/GCC compiler\n+ (consumer). Transform the visual/gcc full_settings into an intel one\"\"\"\n+ if self.full_settings.compiler.base:\n+ return\n+\n+ self.settings.compiler = \"intel\"\n+ # You have to use here a specific version or create more than one version of\n+ # compatible packages\n+ self.settings.compiler.version = intel_compiler_version\n+ self.settings.compiler.base = self.full_settings.compiler\n+ for field in self.full_settings.compiler.fields:\n+ value = getattr(self.full_settings.compiler, field)\n+ setattr(self.settings.compiler.base, field, value)\n+\n+ def intel_compatible(self):\n+ \"\"\"If a built package for Visual/GCC has to be compatible for an Intel compiler\n+ (consumer). 
Transform the Intel profile into an visual/gcc one\"\"\"\n+ if not self.full_settings.compiler.base:\n+ return\n+\n+ self.settings.compiler = self.full_settings.compiler.base\n+ for field in self.full_settings.compiler.base.fields:\n+ value = getattr(self.full_settings.compiler.base, field)\n+ setattr(self.settings.compiler, field, value)\ndiff --git a/conans/test/functional/package_id/compatible_test.py b/conans/test/functional/package_id/compatible_test.py\nindex 47ef9cce2dd..e05e7d6b824 100644\n--- a/conans/test/functional/package_id/compatible_test.py\n+++ b/conans/test/functional/package_id/compatible_test.py\n@@ -1,7 +1,10 @@\n import textwrap\n+import time\n import unittest\n \n+from conans.model.ref import ConanFileReference\n from conans.test.utils.tools import TestClient, GenConanfile\n+from conans.util.env_reader import get_env\n \n \n class CompatibleIDsTest(unittest.TestCase):\n@@ -9,14 +12,14 @@ class CompatibleIDsTest(unittest.TestCase):\n def compatible_setting_test(self):\n client = TestClient()\n conanfile = textwrap.dedent(\"\"\"\n- from conans import ConanFile, CompatiblePackage\n+ from conans import ConanFile\n \n class Pkg(ConanFile):\n settings = \"os\", \"compiler\"\n def package_id(self):\n if self.settings.compiler == \"gcc\" and self.settings.compiler.version == \"4.9\":\n for version in (\"4.8\", \"4.7\", \"4.6\"):\n- compatible_pkg = CompatiblePackage(self)\n+ compatible_pkg = self.info.clone()\n compatible_pkg.settings.compiler.version = version\n self.compatible_packages.append(compatible_pkg)\n def package_info(self):\n@@ -47,14 +50,14 @@ def package_info(self):\n def compatible_setting_no_binary_test(self):\n client = TestClient()\n conanfile = textwrap.dedent(\"\"\"\n- from conans import ConanFile, CompatiblePackage\n+ from conans import ConanFile\n \n class Pkg(ConanFile):\n settings = \"os\", \"compiler\"\n def package_id(self):\n if self.settings.compiler == \"gcc\" and self.settings.compiler.version == \"4.9\":\n for version in (\"4.8\", \"4.7\", \"4.6\"):\n- compatible_pkg = CompatiblePackage(self)\n+ compatible_pkg = self.info.clone()\n compatible_pkg.settings.compiler.version = version\n self.compatible_packages.append(compatible_pkg)\n def package_info(self):\n@@ -72,7 +75,7 @@ def package_info(self):\n \"myprofile\": profile})\n # Create package with gcc 4.8\n client.run(\"export . 
pkg/0.1@user/stable\")\n- self.assertIn(\"pkg/0.1@user/stable: Exported revision: c89d6976443e7a9cd975c5b8210ae212\",\n+ self.assertIn(\"pkg/0.1@user/stable: Exported revision: b27c975bb0d9e40c328bd02bc529b6f8\",\n client.out)\n \n # package can be used with a profile gcc 4.9 falling back to 4.8 binary\n@@ -86,14 +89,14 @@ def package_info(self):\n def compatible_setting_no_user_channel_test(self):\n client = TestClient()\n conanfile = textwrap.dedent(\"\"\"\n- from conans import ConanFile, CompatiblePackage\n+ from conans import ConanFile\n \n class Pkg(ConanFile):\n settings = \"os\", \"compiler\"\n def package_id(self):\n if self.settings.compiler == \"gcc\" and self.settings.compiler.version == \"4.9\":\n for version in (\"4.8\", \"4.7\", \"4.6\"):\n- compatible_pkg = CompatiblePackage(self)\n+ compatible_pkg = self.info.clone()\n compatible_pkg.settings.compiler.version = version\n self.compatible_packages.append(compatible_pkg)\n \"\"\")\n@@ -120,14 +123,14 @@ def package_id(self):\n def compatible_option_test(self):\n client = TestClient()\n conanfile = textwrap.dedent(\"\"\"\n- from conans import ConanFile, CompatiblePackage\n+ from conans import ConanFile\n \n class Pkg(ConanFile):\n options = {\"optimized\": [1, 2, 3]}\n default_options = {\"optimized\": 1}\n def package_id(self):\n for optimized in range(int(self.options.optimized), 0, -1):\n- compatible_pkg = CompatiblePackage(self)\n+ compatible_pkg = self.info.clone()\n compatible_pkg.options.optimized = optimized\n self.compatible_packages.append(compatible_pkg)\n def package_info(self):\n@@ -158,44 +161,128 @@ def package_info(self):\n client.out)\n self.assertIn(\"pkg/0.1@user/stable: Already installed!\", client.out)\n \n- def error_setting_test(self):\n+ def visual_package_compatible_with_intel_test(self):\n client = TestClient()\n+ ref = ConanFileReference.loads(\"Bye/0.1@us/ch\")\n conanfile = textwrap.dedent(\"\"\"\n- from conans import ConanFile, CompatiblePackage\n+ from conans import ConanFile\n \n- class Pkg(ConanFile):\n- settings = \"os\", \"compiler\"\n- def package_id(self):\n- compatible_pkg = CompatiblePackage(self)\n- compatible_pkg.settings.compiler.version = \"bad\"\n- self.compatible_packages.append(self)\n- \"\"\")\n- client.save({\"conanfile.py\": conanfile})\n- client.run(\"create . pkg/0.1@user/stable\", assert_error=True)\n+ class Conan(ConanFile):\n+ settings = \"compiler\"\n \n- self.assertIn('ERROR: pkg/0.1@user/stable: Error in package_id() method, line 8',\n+ def package_id(self):\n+ p = self.info.clone()\n+ p.intel_compatible()\n+ self.compatible_packages.append(p)\n+ \"\"\")\n+ visual_profile = textwrap.dedent(\"\"\"\n+ [settings]\n+ compiler = Visual Studio\n+ compiler.version = 8\n+ compiler.runtime = MD\n+ \"\"\")\n+ intel_profile = textwrap.dedent(\"\"\"\n+ [settings]\n+ compiler = intel\n+ compiler.version = 16\n+ compiler.base = Visual Studio\n+ compiler.base.version = 8\n+ compiler.base.runtime = MD\n+ \"\"\")\n+ client.save({\"conanfile.py\": conanfile,\n+ \"intel_profile\": intel_profile,\n+ \"visual_profile\": visual_profile})\n+ client.run(\"create . %s --profile visual_profile\" % ref.full_str())\n+ client.run(\"install %s -p intel_profile\" % ref.full_str())\n+ self.assertIn(\"Bye/0.1@us/ch: Main binary package '2ef6f6c768dd0f332dc252\"\n+ \"b72c30dee116632302' missing. 
Using compatible package \"\n+ \"'1151fe341e6b310f7645a76b4d3d524342835acc'\",\n client.out)\n- self.assertIn('compatible_pkg.settings.compiler.version = \"bad\"', client.out)\n- self.assertIn(\"ConanException: Invalid setting 'bad' is not a valid \"\n- \"'settings.compiler.version' value\", client.out)\n+ self.assertIn(\"Bye/0.1@us/ch:1151fe341e6b310f7645a76b4d3d524342835acc - Cache\", client.out)\n \n- def error_option_test(self):\n+ def intel_package_compatible_with_base_test(self):\n client = TestClient()\n+ ref = ConanFileReference.loads(\"Bye/0.1@us/ch\")\n conanfile = textwrap.dedent(\"\"\"\n- from conans import ConanFile, CompatiblePackage\n+ from conans import ConanFile\n \n- class Pkg(ConanFile):\n- options = {\"shared\": [True, False]}\n- default_options = {\"shared\": True}\n- def package_id(self):\n- compatible_pkg = CompatiblePackage(self)\n- compatible_pkg.options.shared = \"bad\"\n- self.compatible_packages.append(self)\n+ class Conan(ConanFile):\n+ settings = \"compiler\"\n+\n+ def package_id(self):\n+ compatible_pkg = self.info.clone()\n+ compatible_pkg.base_compiler_compatible(intel_compiler_version=16)\n+ self.compatible_packages.append(compatible_pkg)\n+ \n \"\"\")\n- client.save({\"conanfile.py\": conanfile})\n- client.run(\"create . pkg/0.1@user/stable\", assert_error=True)\n+ visual_profile = textwrap.dedent(\"\"\"\n+ [settings]\n+ compiler = Visual Studio\n+ compiler.version = 8\n+ compiler.runtime = MD\n+ \"\"\")\n+ intel_profile = textwrap.dedent(\"\"\"\n+ [settings]\n+ compiler = intel\n+ compiler.version = 16\n+ compiler.base = Visual Studio\n+ compiler.base.version = 8\n+ compiler.base.runtime = MD\n+ \"\"\")\n+ client.save({\"conanfile.py\": conanfile,\n+ \"intel_profile\": intel_profile,\n+ \"visual_profile\": visual_profile})\n+ client.run(\"create . %s --profile intel_profile\" % ref.full_str())\n+ client.run(\"install %s -p visual_profile\" % ref.full_str())\n+ self.assertIn(\"Bye/0.1@us/ch: Main binary package \"\n+ \"'1151fe341e6b310f7645a76b4d3d524342835acc' missing. 
Using compatible \"\n+ \"package '2ef6f6c768dd0f332dc252b72c30dee116632302'\",\n+ client.out)\n+ self.assertIn(\"Bye/0.1@us/ch:2ef6f6c768dd0f332dc252b72c30dee116632302 - Cache\", client.out)\n+\n+ def additional_id_mode_test(self):\n+ c1 = GenConanfile().with_name(\"AA\").with_version(\"1.0\")\n+ c2 = GenConanfile().with_name(\"BB\").with_version(\"1.0\").with_require_plain(\"AA/1.0\")\n+ client = TestClient()\n+ # Recipe revision mode\n+ client.run(\"config set general.default_package_id_mode=recipe_revision_mode\")\n+\n+ # Create binaries with recipe revision mode for both\n+ client.save({\"conanfile.py\": c1})\n+ client.run(\"create .\")\n+\n+ client.save({\"conanfile.py\": c2})\n+ client.run(\"create .\")\n+\n+ # Back to semver default\n+ client.run(\"config set general.default_package_id_mode=semver_direct_mode\")\n+ client.run(\"install BB/1.0@\", assert_error=True)\n+ self.assertIn(\"Missing prebuilt package for 'BB/1.0'\", client.out)\n+\n+ # What if client modifies the packages declaring a compatible_package with the recipe mode\n+ # Recipe revision mode\n+ client.run(\"config set general.default_package_id_mode=recipe_revision_mode\")\n+ tmp = \"\"\"\n+ \n+ def package_id(self):\n+ p = self.info.clone()\n+ p.requires.recipe_revision_mode()\n+ self.output.warn(\"Alternative package ID: {}\".format(p.package_id()))\n+ self.compatible_packages.append(p)\n+\"\"\"\n+ c1 = str(c1) + tmp\n+ c2 = str(c2) + tmp\n+ # Create the packages, now with the recipe mode declared as compatible package\n+ time.sleep(1) # new timestamp\n+ client.save({\"conanfile.py\": c1})\n+ client.run(\"create .\")\n+\n+ client.save({\"conanfile.py\": c2})\n+ client.run(\"create .\")\n+ self.assertIn(\"Package '9fc42b36e70615fe97acca0afa27e1731868861c' created\", client.out)\n \n- self.assertIn('ERROR: pkg/0.1@user/stable: Error in package_id() method, line 9',\n+ # Back to semver mode\n+ client.run(\"config set general.default_package_id_mode=semver_direct_mode\")\n+ client.run(\"install BB/1.0@ --update\")\n+ self.assertIn(\"Using compatible package '9fc42b36e70615fe97acca0afa27e1731868861c'\",\n client.out)\n- self.assertIn('compatible_pkg.options.shared = \"bad\"', client.out)\n- self.assertIn(\"ConanException: 'bad' is not a valid 'options.shared' value.\", client.out)\n"
}
|
[
{
"diff_hunk": "@@ -481,4 +486,30 @@ def shared_library_package_id(self):\n for dep_name in self.requires.pkg_names:\n dep_options = self.full_options[dep_name]\n if \"shared\" not in dep_options or not self.full_options[dep_name].shared:\n- self.requires[dep_name].package_revision_mode()\n\\ No newline at end of file\n+ self.requires[dep_name].package_revision_mode()\n+\n+ def base_compiler_compatible(self, intel_compiler_version):",
"line": null,
"original_line": 491,
"original_start_line": null,
"path": "conans/model/info.py",
"start_line": null,
"text": "@user1:\nI think we should aim for the follow ``package_id()``:\r\n\r\n```python\r\ndef package_id(self):\r\n if self.settings.compiler == \"intel\":\r\n compatible = self.info.clone()\r\n compatible.base_compatible()\r\n ...\r\n elif self.settings.compiler == \"Visual Studio\":\r\n compatible = self.info.clone()\r\n compatible.parent_compatible(compiler=\"intel\", version=\"18\")\r\n ....\r\n```\r\n\r\nThis means:\r\n- ``if self.full_settings.compiler.base`` shouldn't be checked, or if checked is to return an error if \"base\" is not there, not silently go away.\r\n- The compatibility methods are fully generic for any other compilers.\r\n\n\n@author:\nYes, very nice ideas."
}
] |
3a1a7aed9282e09fd93fb47724d0ece693287d28
|
diff --git a/conans/__init__.py b/conans/__init__.py
index 8bd5cbf3a7b..7f0450385be 100644
--- a/conans/__init__.py
+++ b/conans/__init__.py
@@ -9,7 +9,6 @@
from conans.model.conan_file import ConanFile
from conans.model.options import Options
from conans.model.settings import Settings
-from conans.model.compatible_package import CompatiblePackage
from conans.util.files import load
# complex_search: With ORs and not filtering by not restricted settings
diff --git a/conans/client/conf/__init__.py b/conans/client/conf/__init__.py
index 772318ed9b1..a403ec86163 100644
--- a/conans/client/conf/__init__.py
+++ b/conans/client/conf/__init__.py
@@ -57,7 +57,7 @@
version: ["5.10", "5.11", "5.12", "5.13", "5.14"]
threads: [None, posix]
libcxx: [libCstd, libstdcxx, libstlport, libstdc++]
- gcc:
+ gcc: &gcc
version: ["4.1", "4.4", "4.5", "4.6", "4.7", "4.8", "4.9",
"5", "5.1", "5.2", "5.3", "5.4", "5.5",
"6", "6.1", "6.2", "6.3", "6.4",
@@ -68,7 +68,7 @@
threads: [None, posix, win32] # Windows MinGW
exception: [None, dwarf2, sjlj, seh] # Windows MinGW
cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]
- Visual Studio:
+ Visual Studio: &visual_studio
runtime: [MD, MT, MTd, MDd]
version: ["8", "9", "10", "11", "12", "14", "15", "16"]
toolset: [None, v90, v100, v110, v110_xp, v120, v120_xp,
@@ -86,6 +86,15 @@
version: ["5.0", "5.1", "6.0", "6.1", "7.0", "7.3", "8.0", "8.1", "9.0", "9.1", "10.0", "11.0"]
libcxx: [libstdc++, libc++]
cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]
+ intel:
+ version: ["11", "12", "13", "14", "15", "16", "17", "18", "19"]
+ base:
+ gcc:
+ <<: *gcc
+ threads: [None]
+ exception: [None]
+ Visual Studio:
+ <<: *visual_studio
qcc:
version: ["4.4", "5.4"]
libcxx: [cxx, gpp, cpp, cpp-ne, accp, acpp-ne, ecpp, ecpp-ne]
diff --git a/conans/client/graph/graph_binaries.py b/conans/client/graph/graph_binaries.py
index cae69b27f28..b1850ce5373 100644
--- a/conans/client/graph/graph_binaries.py
+++ b/conans/client/graph/graph_binaries.py
@@ -184,8 +184,8 @@ def _evaluate_node(self, node, build_mode, update, remotes):
% (node.package_id, package_id))
node._package_id = package_id
# So they are available in package_info() method
- node.conanfile.settings = compatible_package.settings
- node.conanfile.options = compatible_package.options
+ node.conanfile.settings.values = compatible_package.settings
+ node.conanfile.options.values = compatible_package.options
break
else:
node.binary = BINARY_MISSING
diff --git a/conans/model/compatible_package.py b/conans/model/compatible_package.py
deleted file mode 100644
index ae38fdaac53..00000000000
--- a/conans/model/compatible_package.py
+++ /dev/null
@@ -1,36 +0,0 @@
-class CompatiblePackage(object):
-
- def __init__(self, conanfile):
- self._conanfile = conanfile
- self._settings = None
- self._options = None
- self._requires = None
-
- @property
- def settings(self):
- if not self._settings:
- self._settings = self._conanfile.settings.copy()
- return self._settings
-
- @property
- def options(self):
- if not self._options:
- self._options = self._conanfile.options.copy()
- return self._options
-
- @property
- def requires(self):
- if not self._requires:
- self._requires = self._conanfile.info.requires.copy()
- return self._requires
-
- def package_id(self):
- info = self._conanfile.info.copy()
- if self._settings:
- info.settings = self._settings.values
- if self._options:
- info.options = self._options.values
- info.options.clear_indirect()
- if self._requires:
- info.requires = self._requires
- return info.package_id()
diff --git a/conans/model/info.py b/conans/model/info.py
index d6411a0c9ce..60dd20d5630 100644
--- a/conans/model/info.py
+++ b/conans/model/info.py
@@ -315,7 +315,6 @@ def create(settings, options, prefs_direct, prefs_indirect, default_package_id_m
result.vs_toolset_compatible()
result.discard_build_settings()
result.default_std_matching()
-
return result
@staticmethod
@@ -365,6 +364,13 @@ def indent(text):
return '\n'.join(result) + "\n"
+ def clone(self):
+ q = self.copy()
+ q.full_settings = self.full_settings.copy()
+ q.full_options = self.full_options.copy()
+ q.full_requires = _PackageReferenceList.loads(self.full_requires.dumps())
+ return q
+
def __eq__(self, other):
""" currently just for testing purposes
"""
@@ -402,7 +408,6 @@ def package_id(self):
if requires_sha is None:
return PACKAGE_ID_UNKNOWN
result.append(requires_sha)
-
package_id = sha1('\n'.join(result).encode())
return package_id
@@ -481,4 +486,36 @@ def shared_library_package_id(self):
for dep_name in self.requires.pkg_names:
dep_options = self.full_options[dep_name]
if "shared" not in dep_options or not self.full_options[dep_name].shared:
- self.requires[dep_name].package_revision_mode()
\ No newline at end of file
+ self.requires[dep_name].package_revision_mode()
+
+ def parent_compatible(self, *_, **kwargs):
+ """If a built package for Intel has to be compatible for a Visual/GCC compiler
+ (consumer). Transform the visual/gcc full_settings into an intel one"""
+
+ if "compiler" not in kwargs:
+ raise ConanException("Specify 'compiler' as a keywork argument. e.g: "
+ "'parent_compiler(compiler=\"intel\")' ")
+
+ self.settings.compiler = kwargs["compiler"]
+ # You have to use here a specific version or create more than one version of
+ # compatible packages
+ kwargs.pop("compiler")
+ for setting_name in kwargs:
+ # Won't fail even if the setting is not valid, there is no validation at info
+ setattr(self.settings.compiler, setting_name, kwargs[setting_name])
+ self.settings.compiler.base = self.full_settings.compiler
+ for field in self.full_settings.compiler.fields:
+ value = getattr(self.full_settings.compiler, field)
+ setattr(self.settings.compiler.base, field, value)
+
+ def base_compatible(self):
+ """If a built package for Visual/GCC has to be compatible for an Intel compiler
+ (consumer). Transform the Intel profile into an visual/gcc one"""
+ if not self.full_settings.compiler.base:
+ raise ConanException("The compiler '{}' has "
+ "no 'base' sub-setting".format(self.full_settings.compiler))
+
+ self.settings.compiler = self.full_settings.compiler.base
+ for field in self.full_settings.compiler.base.fields:
+ value = getattr(self.full_settings.compiler.base, field)
+ setattr(self.settings.compiler, field, value)
diff --git a/conans/test/functional/package_id/compatible_test.py b/conans/test/functional/package_id/compatible_test.py
index 47ef9cce2dd..be42a12ea39 100644
--- a/conans/test/functional/package_id/compatible_test.py
+++ b/conans/test/functional/package_id/compatible_test.py
@@ -1,7 +1,10 @@
import textwrap
+import time
import unittest
+from conans.model.ref import ConanFileReference
from conans.test.utils.tools import TestClient, GenConanfile
+from conans.util.env_reader import get_env
class CompatibleIDsTest(unittest.TestCase):
@@ -9,14 +12,14 @@ class CompatibleIDsTest(unittest.TestCase):
def compatible_setting_test(self):
client = TestClient()
conanfile = textwrap.dedent("""
- from conans import ConanFile, CompatiblePackage
+ from conans import ConanFile
class Pkg(ConanFile):
settings = "os", "compiler"
def package_id(self):
if self.settings.compiler == "gcc" and self.settings.compiler.version == "4.9":
for version in ("4.8", "4.7", "4.6"):
- compatible_pkg = CompatiblePackage(self)
+ compatible_pkg = self.info.clone()
compatible_pkg.settings.compiler.version = version
self.compatible_packages.append(compatible_pkg)
def package_info(self):
@@ -47,14 +50,14 @@ def package_info(self):
def compatible_setting_no_binary_test(self):
client = TestClient()
conanfile = textwrap.dedent("""
- from conans import ConanFile, CompatiblePackage
+ from conans import ConanFile
class Pkg(ConanFile):
settings = "os", "compiler"
def package_id(self):
if self.settings.compiler == "gcc" and self.settings.compiler.version == "4.9":
for version in ("4.8", "4.7", "4.6"):
- compatible_pkg = CompatiblePackage(self)
+ compatible_pkg = self.info.clone()
compatible_pkg.settings.compiler.version = version
self.compatible_packages.append(compatible_pkg)
def package_info(self):
@@ -72,7 +75,7 @@ def package_info(self):
"myprofile": profile})
# Create package with gcc 4.8
client.run("export . pkg/0.1@user/stable")
- self.assertIn("pkg/0.1@user/stable: Exported revision: c89d6976443e7a9cd975c5b8210ae212",
+ self.assertIn("pkg/0.1@user/stable: Exported revision: b27c975bb0d9e40c328bd02bc529b6f8",
client.out)
# package can be used with a profile gcc 4.9 falling back to 4.8 binary
@@ -86,14 +89,14 @@ def package_info(self):
def compatible_setting_no_user_channel_test(self):
client = TestClient()
conanfile = textwrap.dedent("""
- from conans import ConanFile, CompatiblePackage
+ from conans import ConanFile
class Pkg(ConanFile):
settings = "os", "compiler"
def package_id(self):
if self.settings.compiler == "gcc" and self.settings.compiler.version == "4.9":
for version in ("4.8", "4.7", "4.6"):
- compatible_pkg = CompatiblePackage(self)
+ compatible_pkg = self.info.clone()
compatible_pkg.settings.compiler.version = version
self.compatible_packages.append(compatible_pkg)
""")
@@ -120,14 +123,14 @@ def package_id(self):
def compatible_option_test(self):
client = TestClient()
conanfile = textwrap.dedent("""
- from conans import ConanFile, CompatiblePackage
+ from conans import ConanFile
class Pkg(ConanFile):
options = {"optimized": [1, 2, 3]}
default_options = {"optimized": 1}
def package_id(self):
for optimized in range(int(self.options.optimized), 0, -1):
- compatible_pkg = CompatiblePackage(self)
+ compatible_pkg = self.info.clone()
compatible_pkg.options.optimized = optimized
self.compatible_packages.append(compatible_pkg)
def package_info(self):
@@ -158,44 +161,283 @@ def package_info(self):
client.out)
self.assertIn("pkg/0.1@user/stable: Already installed!", client.out)
- def error_setting_test(self):
+ def visual_package_compatible_with_intel_test(self):
client = TestClient()
+ ref = ConanFileReference.loads("Bye/0.1@us/ch")
conanfile = textwrap.dedent("""
- from conans import ConanFile, CompatiblePackage
+ from conans import ConanFile
- class Pkg(ConanFile):
- settings = "os", "compiler"
- def package_id(self):
- compatible_pkg = CompatiblePackage(self)
- compatible_pkg.settings.compiler.version = "bad"
- self.compatible_packages.append(self)
+ class Conan(ConanFile):
+ settings = "compiler"
+
+ def package_id(self):
+ if self.settings.compiler == "intel":
+ p = self.info.clone()
+ p.base_compatible()
+ self.compatible_packages.append(p)
""")
- client.save({"conanfile.py": conanfile})
- client.run("create . pkg/0.1@user/stable", assert_error=True)
+ visual_profile = textwrap.dedent("""
+ [settings]
+ compiler = Visual Studio
+ compiler.version = 8
+ compiler.runtime = MD
+ """)
+ intel_profile = textwrap.dedent("""
+ [settings]
+ compiler = intel
+ compiler.version = 16
+ compiler.base = Visual Studio
+ compiler.base.version = 8
+ compiler.base.runtime = MD
+ """)
+ client.save({"conanfile.py": conanfile,
+ "intel_profile": intel_profile,
+ "visual_profile": visual_profile})
+ client.run("create . %s --profile visual_profile" % ref.full_str())
+ client.run("install %s -p intel_profile" % ref.full_str())
+ self.assertIn("Bye/0.1@us/ch: Main binary package '2ef6f6c768dd0f332dc252"
+ "b72c30dee116632302' missing. Using compatible package "
+ "'1151fe341e6b310f7645a76b4d3d524342835acc'",
+ client.out)
+ self.assertIn("Bye/0.1@us/ch:1151fe341e6b310f7645a76b4d3d524342835acc - Cache", client.out)
+
+ def wrong_base_compatible_test(self):
+ client = TestClient()
+ ref = ConanFileReference.loads("Bye/0.1@us/ch")
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+
+ class Conan(ConanFile):
+ settings = "compiler"
+
+ def package_id(self):
+ p = self.info.clone()
+ p.base_compatible()
+ self.compatible_packages.append(p)
+ """)
+ visual_profile = textwrap.dedent("""
+ [settings]
+ compiler = Visual Studio
+ compiler.version = 8
+ compiler.runtime = MD
+ """)
+ client.save({"conanfile.py": conanfile,
+ "visual_profile": visual_profile})
+ client.run("create . %s --profile visual_profile" % ref.full_str(), assert_error=True)
+ self.assertIn("The compiler 'Visual Studio' has no 'base' sub-setting", client.out)
+
+ def intel_package_compatible_with_base_test(self):
+ client = TestClient()
+ ref = ConanFileReference.loads("Bye/0.1@us/ch")
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
- self.assertIn('ERROR: pkg/0.1@user/stable: Error in package_id() method, line 8',
+ class Conan(ConanFile):
+ settings = "compiler"
+
+ def package_id(self):
+ if self.settings.compiler == "Visual Studio":
+ compatible_pkg = self.info.clone()
+ compatible_pkg.parent_compatible(compiler="intel", version=16)
+ self.compatible_packages.append(compatible_pkg)
+
+ """)
+ visual_profile = textwrap.dedent("""
+ [settings]
+ compiler = Visual Studio
+ compiler.version = 8
+ compiler.runtime = MD
+ """)
+ intel_profile = textwrap.dedent("""
+ [settings]
+ compiler = intel
+ compiler.version = 16
+ compiler.base = Visual Studio
+ compiler.base.version = 8
+ compiler.base.runtime = MD
+ """)
+ client.save({"conanfile.py": conanfile,
+ "intel_profile": intel_profile,
+ "visual_profile": visual_profile})
+ client.run("create . %s --profile intel_profile" % ref.full_str())
+ client.run("install %s -p visual_profile" % ref.full_str())
+ self.assertIn("Bye/0.1@us/ch: Main binary package "
+ "'1151fe341e6b310f7645a76b4d3d524342835acc' missing. Using compatible "
+ "package '2ef6f6c768dd0f332dc252b72c30dee116632302'",
client.out)
- self.assertIn('compatible_pkg.settings.compiler.version = "bad"', client.out)
- self.assertIn("ConanException: Invalid setting 'bad' is not a valid "
- "'settings.compiler.version' value", client.out)
+ self.assertIn("Bye/0.1@us/ch:2ef6f6c768dd0f332dc252b72c30dee116632302 - Cache", client.out)
- def error_option_test(self):
+ def no_valid_compiler_keyword_base_test(self):
client = TestClient()
+ ref = ConanFileReference.loads("Bye/0.1@us/ch")
conanfile = textwrap.dedent("""
- from conans import ConanFile, CompatiblePackage
+ from conans import ConanFile
+
+ class Conan(ConanFile):
+ settings = "compiler"
+
+ def package_id(self):
+ if self.settings.compiler == "Visual Studio":
+ compatible_pkg = self.info.clone()
+ compatible_pkg.parent_compatible("intel")
+ self.compatible_packages.append(compatible_pkg)
+
+ """)
+ visual_profile = textwrap.dedent("""
+ [settings]
+ compiler = Visual Studio
+ compiler.version = 8
+ compiler.runtime = MD
+ """)
+ client.save({"conanfile.py": conanfile,
+ "visual_profile": visual_profile})
+ client.run("create . %s --profile visual_profile" % ref.full_str(), assert_error=True)
+ self.assertIn("Specify 'compiler' as a keywork "
+ "argument. e.g: 'parent_compiler(compiler=\"intel\")'", client.out)
+
+ def intel_package_invalid_subsetting_test(self):
+ """If I specify an invalid subsetting of my base compiler, it won't fail, but it won't
+ file the available package_id"""
+ client = TestClient()
+ ref = ConanFileReference.loads("Bye/0.1@us/ch")
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+
+ class Conan(ConanFile):
+ settings = "compiler"
+
+ def package_id(self):
+ if self.settings.compiler == "Visual Studio":
+ compatible_pkg = self.info.clone()
+ compatible_pkg.parent_compatible(compiler="intel", version=16, FOO="BAR")
+ self.compatible_packages.append(compatible_pkg)
+
+ """)
+ visual_profile = textwrap.dedent("""
+ [settings]
+ compiler = Visual Studio
+ compiler.version = 8
+ compiler.runtime = MD
+ """)
+ intel_profile = textwrap.dedent("""
+ [settings]
+ compiler = intel
+ compiler.version = 16
+ compiler.base = Visual Studio
+ compiler.base.version = 8
+ compiler.base.runtime = MD
+ """)
+ client.save({"conanfile.py": conanfile,
+ "intel_profile": intel_profile,
+ "visual_profile": visual_profile})
+ client.run("create . %s --profile intel_profile" % ref.full_str())
+ client.run("install %s -p visual_profile" % ref.full_str(), assert_error=True)
+ self.assertIn("Missing prebuilt package for 'Bye/0.1@us/ch'", client.out)
+
+ def additional_id_mode_test(self):
+ c1 = GenConanfile().with_name("AA").with_version("1.0")
+ c2 = GenConanfile().with_name("BB").with_version("1.0").with_require_plain("AA/1.0")
+ client = TestClient()
+ # Recipe revision mode
+ client.run("config set general.default_package_id_mode=recipe_revision_mode")
+
+ # Create binaries with recipe revision mode for both
+ client.save({"conanfile.py": c1})
+ client.run("create .")
+
+ client.save({"conanfile.py": c2})
+ client.run("create .")
+
+ # Back to semver default
+ client.run("config set general.default_package_id_mode=semver_direct_mode")
+ client.run("install BB/1.0@", assert_error=True)
+ self.assertIn("Missing prebuilt package for 'BB/1.0'", client.out)
+
+ # What if client modifies the packages declaring a compatible_package with the recipe mode
+ # Recipe revision mode
+ client.run("config set general.default_package_id_mode=recipe_revision_mode")
+ tmp = """
+
+ def package_id(self):
+ p = self.info.clone()
+ p.requires.recipe_revision_mode()
+ self.output.warn("Alternative package ID: {}".format(p.package_id()))
+ self.compatible_packages.append(p)
+"""
+ c1 = str(c1) + tmp
+ c2 = str(c2) + tmp
+ # Create the packages, now with the recipe mode declared as compatible package
+ time.sleep(1) # new timestamp
+ client.save({"conanfile.py": c1})
+ client.run("create .")
+
+ client.save({"conanfile.py": c2})
+ client.run("create .")
+ self.assertIn("Package '9fc42b36e70615fe97acca0afa27e1731868861c' created", client.out)
+
+ # Back to semver mode
+ client.run("config set general.default_package_id_mode=semver_direct_mode")
+ client.run("install BB/1.0@ --update")
+ self.assertIn("Using compatible package '9fc42b36e70615fe97acca0afa27e1731868861c'",
+ client.out)
+ def package_id_consumers_test(self):
+ # If we fallback to a different binary upstream and we are using a "package_revision_mode"
+ # the current package should have a different binary package ID too.
+ client = TestClient()
+ client.run("config set general.default_package_id_mode=package_revision_mode")
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
class Pkg(ConanFile):
- options = {"shared": [True, False]}
- default_options = {"shared": True}
+ settings = "os", "compiler"
def package_id(self):
- compatible_pkg = CompatiblePackage(self)
- compatible_pkg.options.shared = "bad"
- self.compatible_packages.append(self)
+ compatible = self.info.clone()
+ compatible.settings.compiler.version = "4.8"
+ self.compatible_packages.append(compatible)
+ def package_info(self):
+ self.output.info("PackageInfo!: Gcc version: %s!"
+ % self.settings.compiler.version)
+ """)
+ profile = textwrap.dedent("""
+ [settings]
+ os = Linux
+ compiler=gcc
+ compiler.version=4.9
+ compiler.libcxx=libstdc++
""")
+ client.save({"conanfile.py": conanfile,
+ "myprofile": profile})
+ # Create package with gcc 4.8
+ client.run("create . pkg/0.1@user/stable -pr=myprofile -s compiler.version=4.8")
+ self.assertIn("pkg/0.1@user/stable: Package '22c594d7fed4994c59a1eacb24ff6ff48bc5c51c'"
+ " created", client.out)
+
+ # package can be used with a profile gcc 4.9 falling back to 4.8 binary
+ client.save({"conanfile.py": GenConanfile().with_require_plain("pkg/0.1@user/stable")})
+ client.run("create . consumer/0.1@user/stable -pr=myprofile")
+ self.assertIn("pkg/0.1@user/stable: PackageInfo!: Gcc version: 4.8!", client.out)
+ self.assertIn("pkg/0.1@user/stable:22c594d7fed4994c59a1eacb24ff6ff48bc5c51c - Cache",
+ client.out)
+ self.assertIn("pkg/0.1@user/stable: Already installed!", client.out)
+ self.assertIn("consumer/0.1@user/stable:15c77f209e7dca571ffe63b19a04a634654e4211 - Build",
+ client.out)
+ self.assertIn("consumer/0.1@user/stable: Package '15c77f209e7dca571ffe63b19a04a634654e4211'"
+ " created", client.out)
+
+ # Create package with gcc 4.9
client.save({"conanfile.py": conanfile})
- client.run("create . pkg/0.1@user/stable", assert_error=True)
+ client.run("create . pkg/0.1@user/stable -pr=myprofile")
+ self.assertIn("pkg/0.1@user/stable: Package '53f56fbd582a1898b3b9d16efd6d3c0ec71e7cfb'"
+ " created", client.out)
- self.assertIn('ERROR: pkg/0.1@user/stable: Error in package_id() method, line 9',
+ # Consume it
+ client.save({"conanfile.py": GenConanfile().with_require_plain("pkg/0.1@user/stable")})
+ client.run("create . consumer/0.1@user/stable -pr=myprofile")
+ self.assertIn("pkg/0.1@user/stable: PackageInfo!: Gcc version: 4.9!", client.out)
+ self.assertIn("pkg/0.1@user/stable:53f56fbd582a1898b3b9d16efd6d3c0ec71e7cfb - Cache",
client.out)
- self.assertIn('compatible_pkg.options.shared = "bad"', client.out)
- self.assertIn("ConanException: 'bad' is not a valid 'options.shared' value.", client.out)
+ self.assertIn("pkg/0.1@user/stable: Already installed!", client.out)
+ self.assertIn("consumer/0.1@user/stable:fca9e94084ed6fe0ca149dc9c2d54c0f336f0d7e - Build",
+ client.out)
+ self.assertIn("consumer/0.1@user/stable: Package 'fca9e94084ed6fe0ca149dc9c2d54c0f336f0d7e'"
+ " created", client.out)
|
{
"difficulty": "high",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
}
|
conan-io__conan-5623@48e2b17
|
conan-io/conan
|
Python
| 5,623
|
Disabling/enabling remotes: Issue/5544
|
Changelog: Feature: Add subcommand for enabling and disabling remotes
Docs: https://github.com/conan-io/docs/pull/1392
This PR provides the option to enable or disable remotes with a new subcommand. By default, all remotes are created with `disabled=False`, and the `remotes.json` file only stores the field when `disabled=True`, so the absence of the parameter does not break loading files from previous versions.
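For illustration, a hedged sketch (written in Python just to keep it runnable) of what `remotes.json` could look like after disabling one remote; the second remote name and URL below are made up, only the `disabled` key comes from this PR:

```python
# Hypothetical remotes.json content after disabling one remote; the "disabled"
# key is only written when True, so files from older Conan versions still load.
import json

registry = {
    "remotes": [
        {"name": "conan-center", "url": "https://conan.bintray.com", "verify_ssl": True},
        {"name": "artifactory", "url": "https://example.com/api/conan/conan-local",
         "verify_ssl": True, "disabled": True},
    ]
}
print(json.dumps(registry, indent=True))
```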
The use is:
`conan remote enable/disable remote_name_or_pattern`
This way you can have remotes in your `remotes.json` and skip them for operations such as
`search` or `install` without having to remove them. The command works with patterns as well, so if you have lots of remotes and only want a couple of them enabled you can do:
```
conan remote disable *
conan remote enable remote1
conan remote enable remote2
```
The default behaviour is that disabled remotes are omitted from every operation, except for printing the list of remotes with `conan remote list`. Also, if you try to perform an operation explicitly against a remote that is in the disabled state, an error will be raised.
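A minimal sketch of that behaviour at the registry level, reusing the classes touched by this PR (method names come from the diff below; constructing the registry like this outside the client is an assumption):

```python
# Hedged sketch: a disabled remote is skipped when iterating enabled remotes,
# but using it explicitly raises an error, mirroring the new __getitem__ check.
from conans.client.cache.remote_registry import Remotes
from conans.errors import ConanException

remotes = Remotes.defaults()                      # starts with conan-center enabled
remotes.set_disabled_state("conan-center", True)
print([r.name for r in remotes.values()])         # [] -> omitted from operations
try:
    remotes["conan-center"]                       # explicit use of a disabled remote
except ConanException as err:
    print(err)                                    # Remote 'conan-center' is disabled
```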
Closes #5544
- [x] Refer to the issue that supports this Pull Request.
- [x] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.
- [x] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've followed the PEP8 style guides for Python code.
- [x] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.
<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>
|
2019-08-14T10:27:55Z
|
Disabling/enabling remotes
A feature request from a user could be summarized as:
- They have multiple remotes, 20+, that are defined and installed via ``conan config install``
- Developers need to log in to only a few of them, 1 or 2...
- The remotes that the user is not logged in to should be skipped.
After discussing it, the most common use case is that users would like to use those remotes, even if not logged-in, because anonymous read usage is common practice. So it doesn't seem a pattern that would be useful for other users.
Furthermore, it would couple the auth process with the remote-enabling process, which does not seem like the best approach.
I think the feature that could make sense, and fully address this use case too, is enabling/disabling remotes:
- Remotes can be disabled or enabled with ``conan remote disable <remote-name>`` / ``conan remote enable <remote-name>``
- Disabled remotes are NOT removed from the definition; that means packages that were installed from that remote do not lose their tracking to it (removing a remote cleans the original remote from the metadata)
- Disabled remotes are skipped when a package install is iterating remotes
- An exception is raised if a package tries to reach that remote (like --update, or completing missing sources)
This is a feature I have missed myself a few times, like disabling conan-center or bincrafters for doing experiments, and later needing to look up the remote to be able to set it up again, and also playing with --insert to get the same order as before. Plus it seems not very complex to implement.
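A minimal sketch of the "not removed, just skipped" point above, using the registry classes from the final implementation (the extra remote name and URL are made up for illustration):

```python
# Hedged sketch: a disabled remote stays defined (so `conan remote list` can
# still show it) while being filtered out of the collection used for operations.
from conans.client.cache.remote_registry import Remotes

remotes = Remotes.defaults()
remotes.add("artifactory", "https://example.com/api/conan/conan-local")
remotes.set_disabled_state("artifactory", True)
print([r.name for r in remotes.all_values()])  # ['conan-center', 'artifactory']
print([r.name for r in remotes.values()])      # ['conan-center']
```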
|
So you config install 20+ repositories and you would need to disable 18? I don't like it... In that case I think it is better to remove them or not install them.
Maybe allowing patterns for enable/disable is a bit handier. You can disable `*` and enable two of them.
I think it is useful when I have artifactory configured, but I don't want to run it all the time. So to avoid any error when installing a package, I need to exclude it from my remote list, otherwise Conan will raise an error because it's impossible to reach Artifactory. Disabling it would be better.
Yes, for disabling multiple repos:
- Allowing patterns
- allowing the disable/enable state to be defined in remotes.txt files too
I think the first should be easier to implement
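The pattern idea mentioned above ended up implemented with `fnmatch`; a small hedged illustration of the matching semantics (the remote names below are made up):

```python
# fnmatch is what set_disabled_state() uses in the merged patch to decide which
# remotes a pattern like "*" or "artifactory-*" matches.
import fnmatch

remote_names = ["conan-center", "artifactory-dev", "artifactory-prod", "bincrafters"]
print([n for n in remote_names if fnmatch.fnmatch(n, "artifactory-*")])
# -> ['artifactory-dev', 'artifactory-prod']
print([n for n in remote_names if fnmatch.fnmatch(n, "*")])
# -> every remote
```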
|
[
{
"body": "A feature request from user could be summarized as:\r\n\r\n- They have multiple remotes, 20+, that are defined and installed via ``conan config install``\r\n- Developers need to login only to a few of them 1, 2...\r\n- The remotes that the user is not logged in should be skipped.\r\n\r\n\r\nAfter discussing it, the most common use case is that users would like to use those remotes, even if not logged-in, because anonymous read usage is common practice. So it doesn't seem a pattern that would be useful for other users.\r\n\r\nFurthermore, it would be coupling the auth process with the remote enabling process, seems not the best way.\r\n\r\nI think that the feature that could make sense and totally address this use case too is the one of enabling/disabling remotes:\r\n\r\n- Remotes can be ``conan remote disable <remote-name>`` or ``conan remote enable <remote-name>``\r\n- Disabled remotes are NOT removed from the definition, that means that packages that were installed from that remote do not lose their tracking to that remote (removing a remote cleans from the metadata the original remote)\r\n- Disabled remotes are skipped when a package install is iterating remotes\r\n- An exception is raised if a package tries to reach that remote (like --update, or completing missing sources)\r\n\r\nThis is a feature I have missed myself a few times, like disabling conan-center or bincrafters for doing experiments, and later needing to look up the remote to be able to set-up it again, and also playing with --insert to get the same order it was before. Plus it seems not very complex to implement. \r\n\r\n",
"number": 5544,
"title": "Disabling/enabling remotes"
}
] |
c15779cb336fd208758832263f4ef176ce60b856
|
{
"head_commit": "48e2b17271f9ba467892504afd477b95beb03945",
"head_commit_message": "test install with --update",
"patch_to_review": "diff --git a/conans/client/cache/remote_registry.py b/conans/client/cache/remote_registry.py\nindex 4b5dc971883..4fee4f96f59 100644\n--- a/conans/client/cache/remote_registry.py\n+++ b/conans/client/cache/remote_registry.py\n@@ -1,3 +1,4 @@\n+import fnmatch\n import json\n import os\n from collections import OrderedDict, namedtuple\n@@ -9,7 +10,7 @@\n from conans.model.ref import PackageReference, ConanFileReference\n \n \n-Remote = namedtuple(\"Remote\", \"name url verify_ssl\")\n+Remote = namedtuple(\"Remote\", \"name url verify_ssl disabled\")\n \n \n def load_registry_txt(contents):\n@@ -106,7 +107,7 @@ def __init__(self):\n @classmethod\n def defaults(cls):\n result = Remotes()\n- result._remotes[\"conan-center\"] = Remote(\"conan-center\", \"https://conan.bintray.com\", True)\n+ result._remotes[\"conan-center\"] = Remote(\"conan-center\", \"https://conan.bintray.com\", True, False)\n return result\n \n def select(self, remote_name):\n@@ -122,29 +123,44 @@ def clear(self):\n self._remotes.clear()\n \n def items(self):\n- return self._remotes.items()\n+ return OrderedDict(\n+ (key, value) for (key, value) in self._remotes.items() if not value.disabled)\n \n def values(self):\n+ return [value for value in self._remotes.values() if not value.disabled]\n+\n+ def all_values(self):\n return self._remotes.values()\n \n+ def all_items(self):\n+ return self._remotes.items()\n+\n @staticmethod\n def loads(text):\n result = Remotes()\n data = json.loads(text)\n for r in data.get(\"remotes\", []):\n- result._remotes[r[\"name\"]] = Remote(r[\"name\"], r[\"url\"], r[\"verify_ssl\"])\n+ disabled = r.get(\"disabled\", False)\n+ result._remotes[r[\"name\"]] = Remote(r[\"name\"], r[\"url\"],\n+ r[\"verify_ssl\"], disabled)\n \n return result\n \n def dumps(self):\n result = []\n for remote in self._remotes.values():\n- result.append(\"%s: %s [Verify SSL: %s]\" % (remote.name, remote.url, remote.verify_ssl))\n+ disabled_str = \", Disabled: True\" if remote.disabled else \"\"\n+ result.append(\"%s: %s [Verify SSL: %s%s]\" %\n+ (remote.name, remote.url, remote.verify_ssl, disabled_str))\n return \"\\n\".join(result)\n \n def save(self, filename):\n- ret = {\"remotes\": [{\"name\": r, \"url\": u, \"verify_ssl\": v}\n- for r, (_, u, v) in self._remotes.items()]}\n+ ret = {\"remotes\": []}\n+ for r, (_, u, v, d) in self._remotes.items():\n+ remote = {\"name\": r, \"url\": u, \"verify_ssl\": v}\n+ if d:\n+ remote[\"disabled\"] = True\n+ ret[\"remotes\"].append(remote)\n save(filename, json.dumps(ret, indent=True))\n \n def _get_by_url(self, url):\n@@ -154,12 +170,29 @@ def _get_by_url(self, url):\n \n def rename(self, remote_name, new_remote_name):\n if new_remote_name in self._remotes:\n- raise ConanException(\"Remote '%s' already exists\" % new_remote_name)\n+ raise ConanException(\"Remote '%s' already exists\" %\n+ new_remote_name)\n+ elif self._remotes[remote_name].disabled:\n+ raise ConanException(\"Remote '%s' is disabled\" % remote_name)\n \n remote = self._remotes[remote_name]\n- new_remote = Remote(new_remote_name, remote.url, remote.verify_ssl)\n- self._remotes = OrderedDict([(new_remote_name, new_remote) if k == remote_name\n- else (k, v) for k, v in self._remotes.items()])\n+ new_remote = Remote(new_remote_name, remote.url, remote.verify_ssl,\n+ remote.disabled)\n+ self._remotes = OrderedDict([\n+ (new_remote_name, new_remote) if k == remote_name else (k, v)\n+ for k, v in self._remotes.items()\n+ ])\n+\n+ def set_disabled_state(self, remote_name, state):\n+ filtered_remotes = 
[]\n+ for remote in self._remotes.values():\n+ if fnmatch.fnmatch(remote.name, remote_name):\n+ if remote.disabled != state:\n+ filtered_remotes.append(remote.name)\n+ for r in filtered_remotes:\n+ remote = self._remotes[r]\n+ self._remotes[r] = Remote(remote.name, remote.url,\n+ remote.verify_ssl, state)\n \n def get_remote(self, remote_name):\n # Returns the remote defined by the name, or the default if is None\n@@ -181,11 +214,17 @@ def get(self, remote_name):\n \n def __getitem__(self, remote_name):\n try:\n- return self._remotes[remote_name]\n+ remote = self._remotes[remote_name]\n+ if remote.disabled:\n+ raise ConanException(\"Remote '%s' is disabled\" % (remote_name))\n+ else:\n+ return remote\n except KeyError:\n raise NoRemoteAvailable(\"No remote '%s' defined in remotes\" % (remote_name))\n \n def __delitem__(self, remote_name):\n+ if remote_name in self._remotes and self._remotes[remote_name].disabled:\n+ raise ConanException(\"Remote '%s' is disabled\" % (remote_name))\n try:\n del self._remotes[remote_name]\n except KeyError:\n@@ -193,7 +232,7 @@ def __delitem__(self, remote_name):\n \n def _upsert(self, remote_name, url, verify_ssl, insert):\n # Remove duplicates\n- updated_remote = Remote(remote_name, url, verify_ssl)\n+ updated_remote = Remote(remote_name, url, verify_ssl, False)\n self._remotes.pop(remote_name, None)\n remotes_list = []\n renamed = None\n@@ -227,13 +266,16 @@ def add(self, remote_name, url, verify_ssl=True, insert=None, force=None):\n def update(self, remote_name, url, verify_ssl=True, insert=None):\n if remote_name not in self._remotes:\n raise ConanException(\"Remote '%s' not found in remotes\" % remote_name)\n+ elif self._remotes[remote_name].disabled:\n+ raise ConanException(\"Remote '%s' is disabled\" % remote_name)\n self._add_update(remote_name, url, verify_ssl, insert)\n \n def _add_update(self, remote_name, url, verify_ssl, insert=None):\n prev_remote = self._get_by_url(url)\n if prev_remote and verify_ssl == prev_remote.verify_ssl and insert is None:\n raise ConanException(\"Remote '%s' already exists with same URL\" % prev_remote.name)\n- updated_remote = Remote(remote_name, url, verify_ssl)\n+ disabled = True if prev_remote and prev_remote.disabled else False\n+ updated_remote = Remote(remote_name, url, verify_ssl, disabled)\n if insert is not None:\n try:\n insert_index = int(insert)\n@@ -351,6 +393,11 @@ def rename(self, remote_name, new_remote_name):\n \n remotes.save(self._filename)\n \n+ def set_disabled_state(self, remote_name, state):\n+ remotes = self.load_remotes()\n+ remotes.set_disabled_state(remote_name, state)\n+ remotes.save(self._filename)\n+\n @property\n def refs_list(self):\n result = {}\ndiff --git a/conans/client/cmd/search.py b/conans/client/cmd/search.py\nindex ac8d0e476cf..5fe2ca061e5 100644\n--- a/conans/client/cmd/search.py\n+++ b/conans/client/cmd/search.py\n@@ -24,9 +24,10 @@ def search_recipes(self, pattern, remote_name=None, case_sensitive=False):\n # Deprecate: 2.0 can remove this check\n if 'all' not in self._remotes:\n for remote in self._remotes.values():\n- refs = self._remote_manager.search_recipes(remote, pattern, ignorecase)\n- if refs:\n- references[remote.name] = refs\n+ if not remote.disabled:\n+ refs = self._remote_manager.search_recipes(remote, pattern, ignorecase)\n+ if refs:\n+ references[remote.name] = refs\n return references\n # single remote\n remote = self._remotes[remote_name]\ndiff --git a/conans/client/command.py b/conans/client/command.py\nindex 0892380a71d..f8be9247d73 100644\n--- 
a/conans/client/command.py\n+++ b/conans/client/command.py\n@@ -1449,6 +1449,11 @@ def remote(self, *args):\n subparsers.add_parser('clean', help=\"Clean the list of remotes and all \"\n \"recipe-remote associations\")\n \n+ parser_enable = subparsers.add_parser('enable', help='Enable a remote')\n+ parser_enable.add_argument('remote', help='Name of the remote')\n+ parser_disable = subparsers.add_parser('disable', help='Disable a remote')\n+ parser_disable.add_argument('remote', help='Name of the remote')\n+\n args = parser.parse_args(*args)\n \n reference = args.reference if hasattr(args, 'reference') else None\n@@ -1491,6 +1496,10 @@ def remote(self, *args):\n return self._conan.remote_update_pref(package_reference, remote_name)\n elif args.subcommand == \"clean\":\n return self._conan.remote_clean()\n+ elif args.subcommand == \"enable\":\n+ return self._conan.remote_set_disabled_state(remote_name, False)\n+ elif args.subcommand == \"disable\":\n+ return self._conan.remote_set_disabled_state(remote_name, True)\n \n def profile(self, *args):\n \"\"\"\ndiff --git a/conans/client/conan_api.py b/conans/client/conan_api.py\nindex 87d123d43db..9f7ab42f4b0 100644\n--- a/conans/client/conan_api.py\n+++ b/conans/client/conan_api.py\n@@ -915,7 +915,7 @@ def upload(self, pattern, package=None, remote_name=None, all_packages=False, co\n \n @api_method\n def remote_list(self):\n- return list(self.app.cache.registry.load_remotes().values())\n+ return list(self.app.cache.registry.load_remotes().all_values())\n \n @api_method\n def remote_add(self, remote_name, url, verify_ssl=True, insert=None, force=None):\n@@ -925,6 +925,10 @@ def remote_add(self, remote_name, url, verify_ssl=True, insert=None, force=None)\n def remote_remove(self, remote_name):\n return self.app.cache.registry.remove(remote_name)\n \n+ @api_method\n+ def remote_set_disabled_state(self, remote_name, state):\n+ return self.app.cache.registry.set_disabled_state(remote_name, state)\n+\n @api_method\n def remote_update(self, remote_name, url, verify_ssl=True, insert=None):\n return self.app.cache.registry.update(remote_name, url, verify_ssl, insert)\ndiff --git a/conans/client/conan_command_output.py b/conans/client/conan_command_output.py\nindex 31458492255..f91d4b90a49 100644\n--- a/conans/client/conan_command_output.py\n+++ b/conans/client/conan_command_output.py\n@@ -30,9 +30,15 @@ def profile_list(self, profiles):\n def remote_list(self, remotes, raw):\n for r in remotes:\n if raw:\n- self._output.info(\"%s %s %s\" % (r.name, r.url, r.verify_ssl))\n+ disabled_str = \" True\" if r.disabled else \"\"\n+ self._output.info(\n+ \"%s %s %s %s\" %\n+ (r.name, r.url, r.verify_ssl, disabled_str))\n else:\n- self._output.info(\"%s: %s [Verify SSL: %s]\" % (r.name, r.url, r.verify_ssl))\n+ disabled_str = \", Disabled: True\" if r.disabled else \"\"\n+ self._output.info(\n+ \"%s: %s [Verify SSL: %s%s]\" %\n+ (r.name, r.url, r.verify_ssl, disabled_str))\n \n def remote_ref_list(self, refs):\n for reference, remote_name in refs.items():\ndiff --git a/conans/test/functional/command/config_install_test.py b/conans/test/functional/command/config_install_test.py\nindex 3897d74d402..4957440c8bf 100644\n--- a/conans/test/functional/command/config_install_test.py\n+++ b/conans/test/functional/command/config_install_test.py\n@@ -145,9 +145,10 @@ def _check(self, params):\n settings_path = self.client.cache.settings_path\n self.assertEqual(load(settings_path).splitlines(), settings_yml.splitlines())\n remotes = self.client.cache.registry.load_remotes()\n- 
self.assertEqual(list(remotes.values()), [Remote(\"myrepo1\", \"https://myrepourl.net\", False),\n- Remote(\"my-repo-2\", \"https://myrepo2.com\", True),\n- ])\n+ self.assertEqual(list(remotes.values()), [\n+ Remote(\"myrepo1\", \"https://myrepourl.net\", False, False),\n+ Remote(\"my-repo-2\", \"https://myrepo2.com\", True, False),\n+ ])\n self.assertEqual(sorted(os.listdir(self.client.cache.profiles_path)),\n sorted([\"default\", \"linux\", \"windows\"]))\n self.assertEqual(load(os.path.join(self.client.cache.profiles_path, \"linux\")).splitlines(),\ndiff --git a/conans/test/functional/command/install_test.py b/conans/test/functional/command/install_test.py\nindex 575d3408578..c442bf208a6 100644\n--- a/conans/test/functional/command/install_test.py\n+++ b/conans/test/functional/command/install_test.py\n@@ -2,6 +2,7 @@\n import platform\n import textwrap\n import unittest\n+from collections import OrderedDict\n \n from conans.client.tools.oss import detected_os\n from conans.model.info import ConanInfo\n@@ -631,3 +632,33 @@ class MyPkg(ConanFile):\n # Try this syntax to upload too\n client.run('install lib/1.0@')\n client.run('upload lib/1.0@ -c --all')\n+\n+ def install_disabled_remote_test(self):\n+ client = TestClient(servers={\"default\": TestServer()},\n+ users={\"default\": [(\"lasote\", \"mypass\")]})\n+ client.save({\"conanfile.py\": str(TestConanFile(\"Pkg\", \"0.1\"))})\n+ client.run(\"create . lasote/testing\")\n+ client.run(\"upload * --confirm --all -r default\")\n+ client.run(\"remote disable default\")\n+ client.run(\"install Pkg/0.1@lasote/testing -r default\", assert_error=True)\n+ self.assertIn(\"ERROR: Remote 'default' is disabled\", client.out)\n+ client.run(\"remote enable default\")\n+ client.run(\"install Pkg/0.1@lasote/testing -r default\")\n+ client.run(\"remote disable default\")\n+ client.run(\"install Pkg/0.1@lasote/testing --update\", assert_error=True)\n+ self.assertIn(\"ERROR: Remote 'default' is disabled\", client.out)\n+\n+ def install_skip_disabled_remote_test(self):\n+ client = TestClient(servers=OrderedDict({\"default\": TestServer(),\n+ \"server2\": TestServer(),\n+ \"server3\": TestServer()}),\n+ users={\"default\": [(\"lasote\", \"mypass\")],\n+ \"server3\": [(\"lasote\", \"mypass\")]})\n+ client.save({\"conanfile.py\": str(TestConanFile(\"Pkg\", \"0.1\"))})\n+ client.run(\"create . 
lasote/testing\")\n+ client.run(\"upload * --confirm --all -r default\")\n+ client.run(\"upload * --confirm --all -r server3\")\n+ client.run(\"remove * -f\")\n+ client.run(\"remote disable default\")\n+ client.run(\"install Pkg/0.1@lasote/testing\", assert_error=False)\n+ self.assertNotIn(\"Trying with 'default'...\", client.out)\ndiff --git a/conans/test/functional/command/remote_test.py b/conans/test/functional/command/remote_test.py\nindex cc8feacc4af..7b0dd066192 100644\n--- a/conans/test/functional/command/remote_test.py\n+++ b/conans/test/functional/command/remote_test.py\n@@ -293,6 +293,57 @@ def verify_ssl_test(self):\n self.assertEqual(data[\"remotes\"][3][\"url\"], \"http://someurl4\")\n self.assertEqual(data[\"remotes\"][3][\"verify_ssl\"], False)\n \n+ def remote_disable_test(self):\n+ client = TestClient()\n+ client.run(\"remote add my-remote0 http://someurl0\")\n+ client.run(\"remote add my-remote1 http://someurl1\")\n+ client.run(\"remote add my-remote2 http://someurl2\")\n+ client.run(\"remote add my-remote3 http://someurl3\")\n+ client.run(\"remote disable my-remote0\")\n+ client.run(\"remote disable my-remote3\")\n+ registry = load(client.cache.registry_path)\n+ data = json.loads(registry)\n+ self.assertEqual(data[\"remotes\"][0][\"name\"], \"my-remote0\")\n+ self.assertEqual(data[\"remotes\"][0][\"url\"], \"http://someurl0\")\n+ self.assertEqual(data[\"remotes\"][0][\"disabled\"], True)\n+ self.assertEqual(data[\"remotes\"][3][\"name\"], \"my-remote3\")\n+ self.assertEqual(data[\"remotes\"][3][\"url\"], \"http://someurl3\")\n+ self.assertEqual(data[\"remotes\"][3][\"disabled\"], True)\n+\n+ client.run(\"remote disable *\")\n+ registry = load(client.cache.registry_path)\n+ data = json.loads(registry)\n+ for remote in data[\"remotes\"]:\n+ self.assertEqual(remote[\"disabled\"], True)\n+\n+ client.run(\"remote enable *\")\n+ registry = load(client.cache.registry_path)\n+ data = json.loads(registry)\n+ for remote in data[\"remotes\"]:\n+ with self.assertRaises(KeyError):\n+ disabled = remote[\"disabled\"]\n+\n+ def remove_disabled_remote_test(self):\n+ client = TestClient()\n+ client.run(\"remote add my-remote3 http://someurl0\")\n+ client.run(\"remote disable my-remote3\")\n+ client.run(\"remote remove my-remote3\", assert_error=True)\n+ self.assertIn(\"ERROR: Remote 'my-remote3' is disabled\", client.out)\n+\n+ def rename_disabled_remote_test(self):\n+ client = TestClient()\n+ client.run(\"remote add my-remote3 http://someurl0\")\n+ client.run(\"remote disable my-remote3\")\n+ client.run(\"remote rename my-remote3 new_name\", assert_error=True)\n+ self.assertIn(\"ERROR: Remote 'my-remote3' is disabled\", client.out)\n+\n+ def update_disabled_remote_test(self):\n+ client = TestClient()\n+ client.run(\"remote add my-remote3 http://someurl0\")\n+ client.run(\"remote disable my-remote3\")\n+ client.run(\"remote update my-remote3 http://someurl1 True\", assert_error=True)\n+ self.assertIn(\"ERROR: Remote 'my-remote3' is disabled\", client.out)\n+\n def verify_ssl_error_test(self):\n client = TestClient()\n client.run(\"remote add my-remote http://someurl some_invalid_option=foo\", assert_error=True)\ndiff --git a/conans/test/functional/command/search_test.py b/conans/test/functional/command/search_test.py\nindex 6e90816db07..65aeeec9fc3 100644\n--- a/conans/test/functional/command/search_test.py\n+++ b/conans/test/functional/command/search_test.py\n@@ -247,6 +247,20 @@ class Test(ConanFile):\n lib/1.0@foo/bar\n lib/1.0@user/channel\"\"\"), client.out)\n \n+ def 
search_disabled_remote_test(self):\n+ self.client.run(\"remote disable search_able\")\n+ self.client.run(\"search * -r search_able\", assert_error=True)\n+ self.assertIn(\"ERROR: Remote 'search_able' is disabled\", self.client.out)\n+\n+ def search_skip_disabled_remote_test(self):\n+ os.rmdir(self.servers[\"local\"].server_store.store)\n+ self._copy_to_server(self.client.cache, self.servers[\"local\"].server_store)\n+ os.rmdir(self.servers[\"search_able\"].server_store.store)\n+ self._copy_to_server(self.client.cache, self.servers[\"search_able\"].server_store)\n+ self.client.run(\"remote disable local\")\n+ self.client.run(\"search Hello* -r all\")\n+ self.assertNotIn(\"Remote 'local':\", self.client.out)\n+\n def recipe_search_all_test(self):\n os.rmdir(self.servers[\"local\"].server_store.store)\n self._copy_to_server(self.client.cache, self.servers[\"local\"].server_store)\ndiff --git a/conans/test/functional/configuration/registry_test.py b/conans/test/functional/configuration/registry_test.py\nindex 392a395c6cf..93b21b74a84 100644\n--- a/conans/test/functional/configuration/registry_test.py\n+++ b/conans/test/functional/configuration/registry_test.py\n@@ -27,7 +27,7 @@ def retro_compatibility_test(self):\n migrate_registry_file(cache, output)\n registry = RemoteRegistry(cache, output)\n self.assertEqual(list(registry.load_remotes().values()),\n- [(\"conan.io\", \"https://server.conan.io\", True)])\n+ [(\"conan.io\", \"https://server.conan.io\", True, False)])\n \n def to_json_migration_test(self):\n cache_folder = temp_folder()\n@@ -44,7 +44,7 @@ def to_json_migration_test(self):\n self.assertIn(\"conan.io: https://server.conan.io\", client.out)\n registry = client.cache.registry\n self.assertEqual(list(registry.load_remotes().values()),\n- [(\"conan.io\", \"https://server.conan.io\", True)])\n+ [(\"conan.io\", \"https://server.conan.io\", True, False)])\n ref1 = ConanFileReference.loads('lib/1.0@conan/stable')\n ref2 = ConanFileReference.loads('other/1.0@lasote/testing')\n expected = {ref1: 'conan.io', ref2: 'conan.io'}\n@@ -65,36 +65,36 @@ def add_remove_update_test(self):\n # Add\n registry.add(\"local\", \"http://localhost:9300\")\n self.assertEqual(list(registry.load_remotes().values()),\n- [(\"conan-center\", \"https://conan.bintray.com\", True),\n- (\"local\", \"http://localhost:9300\", True)])\n+ [(\"conan-center\", \"https://conan.bintray.com\", True, False),\n+ (\"local\", \"http://localhost:9300\", True, False)])\n # Add\n registry.add(\"new\", \"new_url\", False)\n self.assertEqual(list(registry.load_remotes().values()),\n- [(\"conan-center\", \"https://conan.bintray.com\", True),\n- (\"local\", \"http://localhost:9300\", True),\n- (\"new\", \"new_url\", False)])\n+ [(\"conan-center\", \"https://conan.bintray.com\", True, False),\n+ (\"local\", \"http://localhost:9300\", True, False),\n+ (\"new\", \"new_url\", False, False)])\n with self.assertRaises(ConanException):\n registry.add(\"new\", \"new_url\")\n # Update\n registry.update(\"new\", \"other_url\")\n self.assertEqual(list(registry.load_remotes().values()),\n- [(\"conan-center\", \"https://conan.bintray.com\", True),\n- (\"local\", \"http://localhost:9300\", True),\n- (\"new\", \"other_url\", True)])\n+ [(\"conan-center\", \"https://conan.bintray.com\", True, False),\n+ (\"local\", \"http://localhost:9300\", True, False),\n+ (\"new\", \"other_url\", True, False)])\n with self.assertRaises(ConanException):\n registry.update(\"new2\", \"new_url\")\n \n registry.update(\"new\", \"other_url\", False)\n 
self.assertEqual(list(registry.load_remotes().values()),\n- [(\"conan-center\", \"https://conan.bintray.com\", True),\n- (\"local\", \"http://localhost:9300\", True),\n- (\"new\", \"other_url\", False)])\n+ [(\"conan-center\", \"https://conan.bintray.com\", True, False),\n+ (\"local\", \"http://localhost:9300\", True, False),\n+ (\"new\", \"other_url\", False, False)])\n \n # Remove\n registry.remove(\"local\")\n self.assertEqual(list(registry.load_remotes().values()),\n- [(\"conan-center\", \"https://conan.bintray.com\", True),\n- (\"new\", \"other_url\", False)])\n+ [(\"conan-center\", \"https://conan.bintray.com\", True, False),\n+ (\"new\", \"other_url\", False, False)])\n with self.assertRaises(ConanException):\n registry.remove(\"new2\")\n \n@@ -116,17 +116,17 @@ def insert_test(self):\n cache = ClientCache(tmp_folder, output)\n registry = RemoteRegistry(cache, output)\n registry.add(\"repo1\", \"url1\", True, insert=0)\n- self.assertEqual(list(registry.load_remotes().values()), [Remote(\"repo1\", \"url1\", True),\n- Remote(\"conan.io\", \"https://server.conan.io\", True)])\n+ self.assertEqual(list(registry.load_remotes().values()), [Remote(\"repo1\", \"url1\", True, False),\n+ Remote(\"conan.io\", \"https://server.conan.io\", True, False)])\n registry.add(\"repo2\", \"url2\", True, insert=1)\n- self.assertEqual(list(registry.load_remotes().values()), [Remote(\"repo1\", \"url1\", True),\n- Remote(\"repo2\", \"url2\", True),\n- Remote(\"conan.io\", \"https://server.conan.io\", True)])\n+ self.assertEqual(list(registry.load_remotes().values()), [Remote(\"repo1\", \"url1\", True, False),\n+ Remote(\"repo2\", \"url2\", True, False),\n+ Remote(\"conan.io\", \"https://server.conan.io\", True, False)])\n registry.add(\"repo3\", \"url3\", True, insert=5)\n- self.assertEqual(list(registry.load_remotes().values()), [Remote(\"repo1\", \"url1\", True),\n- Remote(\"repo2\", \"url2\", True),\n- Remote(\"conan.io\", \"https://server.conan.io\", True),\n- Remote(\"repo3\", \"url3\", True)])\n+ self.assertEqual(list(registry.load_remotes().values()), [Remote(\"repo1\", \"url1\", True, False),\n+ Remote(\"repo2\", \"url2\", True, False),\n+ Remote(\"conan.io\", \"https://server.conan.io\", True, False),\n+ Remote(\"repo3\", \"url3\", True, False)])\n \n def test_remote_none(self):\n \"\"\" RemoteRegistry should be able to deal when the URL is None\n@@ -138,12 +138,42 @@ def test_remote_none(self):\n \n registry.add(\"foobar\", None)\n self.assertEqual(list(registry.load_remotes().values()),\n- [(\"conan-center\", \"https://conan.bintray.com\", True),\n- (\"foobar\", None, True)])\n+ [(\"conan-center\", \"https://conan.bintray.com\", True, False),\n+ (\"foobar\", None, True, False)])\n self.assertIn(\"WARN: The URL is empty. It must contain scheme and hostname.\", cache._output)\n registry.remove(\"foobar\")\n \n registry.update(\"conan-center\", None)\n self.assertEqual(list(registry.load_remotes().values()),\n- [(\"conan-center\", None, True)])\n+ [(\"conan-center\", None, True, False)])\n self.assertIn(\"WARN: The URL is empty. 
It must contain scheme and hostname.\", cache._output)\n+\n+ def enable_disable_remotes_test(self):\n+ f = os.path.join(temp_folder(), \"aux_file\")\n+ Remotes().save(f)\n+ cache = ClientCache(os.path.dirname(f), TestBufferConanOutput())\n+ registry = cache.registry\n+\n+ registry.add(\"local\", \"http://localhost:9300\")\n+ registry.set_disabled_state(\"local\", True)\n+ self.assertEqual(list(registry.load_remotes().all_values()),\n+ [(\"conan-center\", \"https://conan.bintray.com\", True, False),\n+ (\"local\", \"http://localhost:9300\", True, True)])\n+\n+ self.assertEqual(list(registry.load_remotes().values()),\n+ [(\"conan-center\", \"https://conan.bintray.com\", True, False)])\n+\n+ registry.set_disabled_state(\"conan-center\", True)\n+ self.assertEqual(list(registry.load_remotes().all_values()),\n+ [(\"conan-center\", \"https://conan.bintray.com\", True, True),\n+ (\"local\", \"http://localhost:9300\", True, True)])\n+\n+ self.assertEqual(list(registry.load_remotes().values()), [])\n+\n+ registry.set_disabled_state(\"*\", False)\n+ self.assertEqual(list(registry.load_remotes().values()),\n+ [(\"conan-center\", \"https://conan.bintray.com\", True, False),\n+ (\"local\", \"http://localhost:9300\", True, False)])\n+\n+ registry.set_disabled_state(\"*\", True)\n+ self.assertEqual(list(registry.load_remotes().values()), [])\n"
}
|
[
{
"diff_hunk": "@@ -24,9 +24,10 @@ def search_recipes(self, pattern, remote_name=None, case_sensitive=False):\n # Deprecate: 2.0 can remove this check\n if 'all' not in self._remotes:\n for remote in self._remotes.values():\n- refs = self._remote_manager.search_recipes(remote, pattern, ignorecase)\n- if refs:\n- references[remote.name] = refs\n+ if not remote.disabled:",
"line": null,
"original_line": 27,
"original_start_line": null,
"path": "conans/client/cmd/search.py",
"start_line": null,
"text": "@user1:\nneeded the if?"
}
] |
5f40813efacaca02c07f9a144ca8ab10539b977f
|
diff --git a/conans/client/cache/remote_registry.py b/conans/client/cache/remote_registry.py
index 4b5dc971883..6b72b519400 100644
--- a/conans/client/cache/remote_registry.py
+++ b/conans/client/cache/remote_registry.py
@@ -1,3 +1,4 @@
+import fnmatch
import json
import os
from collections import OrderedDict, namedtuple
@@ -9,7 +10,7 @@
from conans.model.ref import PackageReference, ConanFileReference
-Remote = namedtuple("Remote", "name url verify_ssl")
+Remote = namedtuple("Remote", "name url verify_ssl disabled")
def load_registry_txt(contents):
@@ -106,7 +107,7 @@ def __init__(self):
@classmethod
def defaults(cls):
result = Remotes()
- result._remotes["conan-center"] = Remote("conan-center", "https://conan.bintray.com", True)
+ result._remotes["conan-center"] = Remote("conan-center", "https://conan.bintray.com", True, False)
return result
def select(self, remote_name):
@@ -122,29 +123,44 @@ def clear(self):
self._remotes.clear()
def items(self):
- return self._remotes.items()
+ return OrderedDict(
+ (key, value) for (key, value) in self._remotes.items() if not value.disabled)
def values(self):
+ return [value for value in self._remotes.values() if not value.disabled]
+
+ def all_values(self):
return self._remotes.values()
+ def all_items(self):
+ return self._remotes.items()
+
@staticmethod
def loads(text):
result = Remotes()
data = json.loads(text)
for r in data.get("remotes", []):
- result._remotes[r["name"]] = Remote(r["name"], r["url"], r["verify_ssl"])
+ disabled = r.get("disabled", False)
+ result._remotes[r["name"]] = Remote(r["name"], r["url"],
+ r["verify_ssl"], disabled)
return result
def dumps(self):
result = []
for remote in self._remotes.values():
- result.append("%s: %s [Verify SSL: %s]" % (remote.name, remote.url, remote.verify_ssl))
+ disabled_str = ", Disabled: True" if remote.disabled else ""
+ result.append("%s: %s [Verify SSL: %s%s]" %
+ (remote.name, remote.url, remote.verify_ssl, disabled_str))
return "\n".join(result)
def save(self, filename):
- ret = {"remotes": [{"name": r, "url": u, "verify_ssl": v}
- for r, (_, u, v) in self._remotes.items()]}
+ ret = {"remotes": []}
+ for r, (_, u, v, d) in self._remotes.items():
+ remote = {"name": r, "url": u, "verify_ssl": v}
+ if d:
+ remote["disabled"] = True
+ ret["remotes"].append(remote)
save(filename, json.dumps(ret, indent=True))
def _get_by_url(self, url):
@@ -154,12 +170,27 @@ def _get_by_url(self, url):
def rename(self, remote_name, new_remote_name):
if new_remote_name in self._remotes:
- raise ConanException("Remote '%s' already exists" % new_remote_name)
+ raise ConanException("Remote '%s' already exists" %
+ new_remote_name)
remote = self._remotes[remote_name]
- new_remote = Remote(new_remote_name, remote.url, remote.verify_ssl)
- self._remotes = OrderedDict([(new_remote_name, new_remote) if k == remote_name
- else (k, v) for k, v in self._remotes.items()])
+ new_remote = Remote(new_remote_name, remote.url, remote.verify_ssl,
+ remote.disabled)
+ self._remotes = OrderedDict([
+ (new_remote_name, new_remote) if k == remote_name else (k, v)
+ for k, v in self._remotes.items()
+ ])
+
+ def set_disabled_state(self, remote_name, state):
+ filtered_remotes = []
+ for remote in self._remotes.values():
+ if fnmatch.fnmatch(remote.name, remote_name):
+ if remote.disabled != state:
+ filtered_remotes.append(remote.name)
+ for r in filtered_remotes:
+ remote = self._remotes[r]
+ self._remotes[r] = Remote(remote.name, remote.url,
+ remote.verify_ssl, state)
def get_remote(self, remote_name):
# Returns the remote defined by the name, or the default if is None
@@ -181,7 +212,11 @@ def get(self, remote_name):
def __getitem__(self, remote_name):
try:
- return self._remotes[remote_name]
+ remote = self._remotes[remote_name]
+ if remote.disabled:
+ raise ConanException("Remote '%s' is disabled" % (remote_name))
+ else:
+ return remote
except KeyError:
raise NoRemoteAvailable("No remote '%s' defined in remotes" % (remote_name))
@@ -193,7 +228,7 @@ def __delitem__(self, remote_name):
def _upsert(self, remote_name, url, verify_ssl, insert):
# Remove duplicates
- updated_remote = Remote(remote_name, url, verify_ssl)
+ updated_remote = Remote(remote_name, url, verify_ssl, False)
self._remotes.pop(remote_name, None)
remotes_list = []
renamed = None
@@ -233,7 +268,8 @@ def _add_update(self, remote_name, url, verify_ssl, insert=None):
prev_remote = self._get_by_url(url)
if prev_remote and verify_ssl == prev_remote.verify_ssl and insert is None:
raise ConanException("Remote '%s' already exists with same URL" % prev_remote.name)
- updated_remote = Remote(remote_name, url, verify_ssl)
+ disabled = True if prev_remote and prev_remote.disabled else False
+ updated_remote = Remote(remote_name, url, verify_ssl, disabled)
if insert is not None:
try:
insert_index = int(insert)
@@ -351,6 +387,11 @@ def rename(self, remote_name, new_remote_name):
remotes.save(self._filename)
+ def set_disabled_state(self, remote_name, state):
+ remotes = self.load_remotes()
+ remotes.set_disabled_state(remote_name, state)
+ remotes.save(self._filename)
+
@property
def refs_list(self):
result = {}
diff --git a/conans/client/command.py b/conans/client/command.py
index 988c740ce74..00ab918872e 100644
--- a/conans/client/command.py
+++ b/conans/client/command.py
@@ -1455,6 +1455,11 @@ def remote(self, *args):
subparsers.add_parser('clean', help="Clean the list of remotes and all "
"recipe-remote associations")
+ parser_enable = subparsers.add_parser('enable', help='Enable a remote')
+ parser_enable.add_argument('remote', help='Name of the remote')
+ parser_disable = subparsers.add_parser('disable', help='Disable a remote')
+ parser_disable.add_argument('remote', help='Name of the remote')
+
args = parser.parse_args(*args)
reference = args.reference if hasattr(args, 'reference') else None
@@ -1497,6 +1502,10 @@ def remote(self, *args):
return self._conan.remote_update_pref(package_reference, remote_name)
elif args.subcommand == "clean":
return self._conan.remote_clean()
+ elif args.subcommand == "enable":
+ return self._conan.remote_set_disabled_state(remote_name, False)
+ elif args.subcommand == "disable":
+ return self._conan.remote_set_disabled_state(remote_name, True)
def profile(self, *args):
"""
diff --git a/conans/client/conan_api.py b/conans/client/conan_api.py
index 4efa52637eb..1714006b917 100644
--- a/conans/client/conan_api.py
+++ b/conans/client/conan_api.py
@@ -919,7 +919,7 @@ def upload(self, pattern, package=None, remote_name=None, all_packages=False, co
@api_method
def remote_list(self):
- return list(self.app.cache.registry.load_remotes().values())
+ return list(self.app.cache.registry.load_remotes().all_values())
@api_method
def remote_add(self, remote_name, url, verify_ssl=True, insert=None, force=None):
@@ -929,6 +929,10 @@ def remote_add(self, remote_name, url, verify_ssl=True, insert=None, force=None)
def remote_remove(self, remote_name):
return self.app.cache.registry.remove(remote_name)
+ @api_method
+ def remote_set_disabled_state(self, remote_name, state):
+ return self.app.cache.registry.set_disabled_state(remote_name, state)
+
@api_method
def remote_update(self, remote_name, url, verify_ssl=True, insert=None):
return self.app.cache.registry.update(remote_name, url, verify_ssl, insert)
diff --git a/conans/client/conan_command_output.py b/conans/client/conan_command_output.py
index 31458492255..f91d4b90a49 100644
--- a/conans/client/conan_command_output.py
+++ b/conans/client/conan_command_output.py
@@ -30,9 +30,15 @@ def profile_list(self, profiles):
def remote_list(self, remotes, raw):
for r in remotes:
if raw:
- self._output.info("%s %s %s" % (r.name, r.url, r.verify_ssl))
+ disabled_str = " True" if r.disabled else ""
+ self._output.info(
+ "%s %s %s %s" %
+ (r.name, r.url, r.verify_ssl, disabled_str))
else:
- self._output.info("%s: %s [Verify SSL: %s]" % (r.name, r.url, r.verify_ssl))
+ disabled_str = ", Disabled: True" if r.disabled else ""
+ self._output.info(
+ "%s: %s [Verify SSL: %s%s]" %
+ (r.name, r.url, r.verify_ssl, disabled_str))
def remote_ref_list(self, refs):
for reference, remote_name in refs.items():
diff --git a/conans/test/functional/command/config_install_test.py b/conans/test/functional/command/config_install_test.py
index 3897d74d402..4957440c8bf 100644
--- a/conans/test/functional/command/config_install_test.py
+++ b/conans/test/functional/command/config_install_test.py
@@ -145,9 +145,10 @@ def _check(self, params):
settings_path = self.client.cache.settings_path
self.assertEqual(load(settings_path).splitlines(), settings_yml.splitlines())
remotes = self.client.cache.registry.load_remotes()
- self.assertEqual(list(remotes.values()), [Remote("myrepo1", "https://myrepourl.net", False),
- Remote("my-repo-2", "https://myrepo2.com", True),
- ])
+ self.assertEqual(list(remotes.values()), [
+ Remote("myrepo1", "https://myrepourl.net", False, False),
+ Remote("my-repo-2", "https://myrepo2.com", True, False),
+ ])
self.assertEqual(sorted(os.listdir(self.client.cache.profiles_path)),
sorted(["default", "linux", "windows"]))
self.assertEqual(load(os.path.join(self.client.cache.profiles_path, "linux")).splitlines(),
diff --git a/conans/test/functional/command/install_test.py b/conans/test/functional/command/install_test.py
index cca8ced9f43..8edac10701d 100644
--- a/conans/test/functional/command/install_test.py
+++ b/conans/test/functional/command/install_test.py
@@ -2,6 +2,7 @@
import platform
import textwrap
import unittest
+from collections import OrderedDict
from conans.client.tools.oss import detected_os
from conans.model.info import ConanInfo
@@ -630,3 +631,33 @@ class MyPkg(ConanFile):
# Try this syntax to upload too
client.run('install lib/1.0@')
client.run('upload lib/1.0@ -c --all')
+
+ def install_disabled_remote_test(self):
+ client = TestClient(servers={"default": TestServer()},
+ users={"default": [("lasote", "mypass")]})
+ client.save({"conanfile.py": GenConanfile()})
+ client.run("create . Pkg/0.1@lasote/testing")
+ client.run("upload * --confirm --all -r default")
+ client.run("remote disable default")
+ client.run("install Pkg/0.1@lasote/testing -r default", assert_error=True)
+ self.assertIn("ERROR: Remote 'default' is disabled", client.out)
+ client.run("remote enable default")
+ client.run("install Pkg/0.1@lasote/testing -r default")
+ client.run("remote disable default")
+ client.run("install Pkg/0.1@lasote/testing --update", assert_error=True)
+ self.assertIn("ERROR: Remote 'default' is disabled", client.out)
+
+ def install_skip_disabled_remote_test(self):
+ client = TestClient(servers=OrderedDict({"default": TestServer(),
+ "server2": TestServer(),
+ "server3": TestServer()}),
+ users={"default": [("lasote", "mypass")],
+ "server3": [("lasote", "mypass")]})
+ client.save({"conanfile.py": GenConanfile()})
+ client.run("create . Pkg/0.1@lasote/testing")
+ client.run("upload * --confirm --all -r default")
+ client.run("upload * --confirm --all -r server3")
+ client.run("remove * -f")
+ client.run("remote disable default")
+ client.run("install Pkg/0.1@lasote/testing", assert_error=False)
+ self.assertNotIn("Trying with 'default'...", client.out)
diff --git a/conans/test/functional/command/remote_test.py b/conans/test/functional/command/remote_test.py
index cc8feacc4af..e53f70fd6a2 100644
--- a/conans/test/functional/command/remote_test.py
+++ b/conans/test/functional/command/remote_test.py
@@ -293,6 +293,35 @@ def verify_ssl_test(self):
self.assertEqual(data["remotes"][3]["url"], "http://someurl4")
self.assertEqual(data["remotes"][3]["verify_ssl"], False)
+ def remote_disable_test(self):
+ client = TestClient()
+ client.run("remote add my-remote0 http://someurl0")
+ client.run("remote add my-remote1 http://someurl1")
+ client.run("remote add my-remote2 http://someurl2")
+ client.run("remote add my-remote3 http://someurl3")
+ client.run("remote disable my-remote0")
+ client.run("remote disable my-remote3")
+ registry = load(client.cache.registry_path)
+ data = json.loads(registry)
+ self.assertEqual(data["remotes"][0]["name"], "my-remote0")
+ self.assertEqual(data["remotes"][0]["url"], "http://someurl0")
+ self.assertEqual(data["remotes"][0]["disabled"], True)
+ self.assertEqual(data["remotes"][3]["name"], "my-remote3")
+ self.assertEqual(data["remotes"][3]["url"], "http://someurl3")
+ self.assertEqual(data["remotes"][3]["disabled"], True)
+
+ client.run("remote disable *")
+ registry = load(client.cache.registry_path)
+ data = json.loads(registry)
+ for remote in data["remotes"]:
+ self.assertEqual(remote["disabled"], True)
+
+ client.run("remote enable *")
+ registry = load(client.cache.registry_path)
+ data = json.loads(registry)
+ for remote in data["remotes"]:
+ self.assertNotIn("disabled", remote)
+
def verify_ssl_error_test(self):
client = TestClient()
client.run("remote add my-remote http://someurl some_invalid_option=foo", assert_error=True)
diff --git a/conans/test/functional/command/search_test.py b/conans/test/functional/command/search_test.py
index 6e90816db07..4fe575cc9bf 100644
--- a/conans/test/functional/command/search_test.py
+++ b/conans/test/functional/command/search_test.py
@@ -247,6 +247,20 @@ class Test(ConanFile):
lib/1.0@foo/bar
lib/1.0@user/channel"""), client.out)
+ def search_disabled_remote_test(self):
+ self.client.run("remote disable search_able")
+ self.client.run("search * -r search_able", assert_error=True)
+ self.assertIn("ERROR: Remote 'search_able' is disabled", self.client.out)
+
+ def search_skip_disabled_remote_test(self):
+ os.rmdir(self.servers["local"].server_store.store)
+ self._copy_to_server(self.client.cache, self.servers["local"].server_store)
+ os.rmdir(self.servers["search_able"].server_store.store)
+ self._copy_to_server(self.client.cache, self.servers["search_able"].server_store)
+ self.client.run("remote disable local")
+ self.client.run("search Hello* -r all")
+ self.assertNotIn("local", self.client.out)
+
def recipe_search_all_test(self):
os.rmdir(self.servers["local"].server_store.store)
self._copy_to_server(self.client.cache, self.servers["local"].server_store)
diff --git a/conans/test/functional/configuration/registry_test.py b/conans/test/functional/configuration/registry_test.py
index 392a395c6cf..93b21b74a84 100644
--- a/conans/test/functional/configuration/registry_test.py
+++ b/conans/test/functional/configuration/registry_test.py
@@ -27,7 +27,7 @@ def retro_compatibility_test(self):
migrate_registry_file(cache, output)
registry = RemoteRegistry(cache, output)
self.assertEqual(list(registry.load_remotes().values()),
- [("conan.io", "https://server.conan.io", True)])
+ [("conan.io", "https://server.conan.io", True, False)])
def to_json_migration_test(self):
cache_folder = temp_folder()
@@ -44,7 +44,7 @@ def to_json_migration_test(self):
self.assertIn("conan.io: https://server.conan.io", client.out)
registry = client.cache.registry
self.assertEqual(list(registry.load_remotes().values()),
- [("conan.io", "https://server.conan.io", True)])
+ [("conan.io", "https://server.conan.io", True, False)])
ref1 = ConanFileReference.loads('lib/1.0@conan/stable')
ref2 = ConanFileReference.loads('other/1.0@lasote/testing')
expected = {ref1: 'conan.io', ref2: 'conan.io'}
@@ -65,36 +65,36 @@ def add_remove_update_test(self):
# Add
registry.add("local", "http://localhost:9300")
self.assertEqual(list(registry.load_remotes().values()),
- [("conan-center", "https://conan.bintray.com", True),
- ("local", "http://localhost:9300", True)])
+ [("conan-center", "https://conan.bintray.com", True, False),
+ ("local", "http://localhost:9300", True, False)])
# Add
registry.add("new", "new_url", False)
self.assertEqual(list(registry.load_remotes().values()),
- [("conan-center", "https://conan.bintray.com", True),
- ("local", "http://localhost:9300", True),
- ("new", "new_url", False)])
+ [("conan-center", "https://conan.bintray.com", True, False),
+ ("local", "http://localhost:9300", True, False),
+ ("new", "new_url", False, False)])
with self.assertRaises(ConanException):
registry.add("new", "new_url")
# Update
registry.update("new", "other_url")
self.assertEqual(list(registry.load_remotes().values()),
- [("conan-center", "https://conan.bintray.com", True),
- ("local", "http://localhost:9300", True),
- ("new", "other_url", True)])
+ [("conan-center", "https://conan.bintray.com", True, False),
+ ("local", "http://localhost:9300", True, False),
+ ("new", "other_url", True, False)])
with self.assertRaises(ConanException):
registry.update("new2", "new_url")
registry.update("new", "other_url", False)
self.assertEqual(list(registry.load_remotes().values()),
- [("conan-center", "https://conan.bintray.com", True),
- ("local", "http://localhost:9300", True),
- ("new", "other_url", False)])
+ [("conan-center", "https://conan.bintray.com", True, False),
+ ("local", "http://localhost:9300", True, False),
+ ("new", "other_url", False, False)])
# Remove
registry.remove("local")
self.assertEqual(list(registry.load_remotes().values()),
- [("conan-center", "https://conan.bintray.com", True),
- ("new", "other_url", False)])
+ [("conan-center", "https://conan.bintray.com", True, False),
+ ("new", "other_url", False, False)])
with self.assertRaises(ConanException):
registry.remove("new2")
@@ -116,17 +116,17 @@ def insert_test(self):
cache = ClientCache(tmp_folder, output)
registry = RemoteRegistry(cache, output)
registry.add("repo1", "url1", True, insert=0)
- self.assertEqual(list(registry.load_remotes().values()), [Remote("repo1", "url1", True),
- Remote("conan.io", "https://server.conan.io", True)])
+ self.assertEqual(list(registry.load_remotes().values()), [Remote("repo1", "url1", True, False),
+ Remote("conan.io", "https://server.conan.io", True, False)])
registry.add("repo2", "url2", True, insert=1)
- self.assertEqual(list(registry.load_remotes().values()), [Remote("repo1", "url1", True),
- Remote("repo2", "url2", True),
- Remote("conan.io", "https://server.conan.io", True)])
+ self.assertEqual(list(registry.load_remotes().values()), [Remote("repo1", "url1", True, False),
+ Remote("repo2", "url2", True, False),
+ Remote("conan.io", "https://server.conan.io", True, False)])
registry.add("repo3", "url3", True, insert=5)
- self.assertEqual(list(registry.load_remotes().values()), [Remote("repo1", "url1", True),
- Remote("repo2", "url2", True),
- Remote("conan.io", "https://server.conan.io", True),
- Remote("repo3", "url3", True)])
+ self.assertEqual(list(registry.load_remotes().values()), [Remote("repo1", "url1", True, False),
+ Remote("repo2", "url2", True, False),
+ Remote("conan.io", "https://server.conan.io", True, False),
+ Remote("repo3", "url3", True, False)])
def test_remote_none(self):
""" RemoteRegistry should be able to deal when the URL is None
@@ -138,12 +138,42 @@ def test_remote_none(self):
registry.add("foobar", None)
self.assertEqual(list(registry.load_remotes().values()),
- [("conan-center", "https://conan.bintray.com", True),
- ("foobar", None, True)])
+ [("conan-center", "https://conan.bintray.com", True, False),
+ ("foobar", None, True, False)])
self.assertIn("WARN: The URL is empty. It must contain scheme and hostname.", cache._output)
registry.remove("foobar")
registry.update("conan-center", None)
self.assertEqual(list(registry.load_remotes().values()),
- [("conan-center", None, True)])
+ [("conan-center", None, True, False)])
self.assertIn("WARN: The URL is empty. It must contain scheme and hostname.", cache._output)
+
+ def enable_disable_remotes_test(self):
+ f = os.path.join(temp_folder(), "aux_file")
+ Remotes().save(f)
+ cache = ClientCache(os.path.dirname(f), TestBufferConanOutput())
+ registry = cache.registry
+
+ registry.add("local", "http://localhost:9300")
+ registry.set_disabled_state("local", True)
+ self.assertEqual(list(registry.load_remotes().all_values()),
+ [("conan-center", "https://conan.bintray.com", True, False),
+ ("local", "http://localhost:9300", True, True)])
+
+ self.assertEqual(list(registry.load_remotes().values()),
+ [("conan-center", "https://conan.bintray.com", True, False)])
+
+ registry.set_disabled_state("conan-center", True)
+ self.assertEqual(list(registry.load_remotes().all_values()),
+ [("conan-center", "https://conan.bintray.com", True, True),
+ ("local", "http://localhost:9300", True, True)])
+
+ self.assertEqual(list(registry.load_remotes().values()), [])
+
+ registry.set_disabled_state("*", False)
+ self.assertEqual(list(registry.load_remotes().values()),
+ [("conan-center", "https://conan.bintray.com", True, False),
+ ("local", "http://localhost:9300", True, False)])
+
+ registry.set_disabled_state("*", True)
+ self.assertEqual(list(registry.load_remotes().values()), [])
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
}
|
conan-io__conan-5511@b1a3f76
|
conan-io/conan
|
Python
| 5,511
|
Feature: Generator for python environment
|
Changelog: Feature: Virtual environment generator for gathering only the PYTHONPATH.
Docs: https://github.com/conan-io/docs/pull/1369
This pull-request closes #5157
- [X] Refer to the issue that supports this Pull Request.
- [X] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.
- [X] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [X] I've followed the PEP8 style guides for Python code.
- [X] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.
<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>
|
2019-07-18T12:21:22Z
|
[suggestion] Add PYTHONPATH to virtualrunenv
Adding PYTHONPATH would make it sufficient to activate a virtualrunenv to use python modules in conan packages that also have dynamic library dependencies.
Without PYTHONPATH (as it is now) both a virtualrunenv and a virtualenv need to be activated for such python modules to work.
I would like to contribute to this feature.
|
Not sure, but might be related to this issue: https://github.com/conan-io/conan/issues/5117
The generators `virtualenv` and `virtualrunenv` are separated because they manage different things. With #5158 you are mixing them. I agree it would be only one step, but that is a design decision and, in my opinion, an arbitrary one: why PYTHONPATH and not others?
We might consider unifying both generators for Conan 2.0
I would like to understand better the use case, why aren't you using [python_requires](https://docs.conan.io/en/latest/extending/python_requires.html#python-requires)?
> The generators `virtualenv` and `virtualrunenv` are separated because they manage different things. With #5158 you are mixing them. I agree it would be only one step, but that is a design decision and, in my opinion, an arbitrary one: why PYTHONPATH and not others?
> We might consider unifying both generators for Conan 2.0
>
> I would like to understand better the use case, why aren't you using [python_requires](https://docs.conan.io/en/latest/extending/python_requires.html#python-requires)?
I'm creating python bindings with [pybind11](https://github.com/pybind/pybind11) which are used in different python applications.
The python bindings are distributed as Conan packages.
Can `python_requires` be used in generic python code (not `conanfile.py` files)?
`LD_LIBRARY_PATH` is also needed since the python bindings use some shared libraries.
Interesting, python extensions management is a topic we would love to document and write a blog post about.
@atilag you might be interested.
I would love to unify the `virtualenv` generators, but in the meantime, should we consider adding another one for this use case? `virtualenv_python` or similar? @memsharded feedback needed here.
I have been thinking about unifying the virtualenv generators, and the thing is I still see a use case where users want the "runtime-only" information, without all the extra configuration that might be necessary for building.
While we take a look at that for Conan 2.0, yes, I wouldn't oppose creating a ``virtualenv_python``, if it is useful for this use case.
I see. It would be useful to have a `virtualenv_python` in the meantime.
What is needed for creating a `virtualenv_python` generator?
Something like this?
- Create `conans/client/generators/virtualenv_python.py` and add it in `conans/client/generators/__init__.py`
- Create `docs/reference/generators/virtualenv_python.rst` and list it in `docs/reference/generators.rst`
Yes, that is the way to go 👍
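For orientation, a minimal sketch of what such a generator could look like, assuming it simply subclasses the existing `VirtualRunEnvGenerator` and only adds the consumer's `PYTHONPATH` (an illustration of the idea, not the final implementation; it would still need to be registered in `conans/client/generators/__init__.py` as listed above):

```python
# Illustrative sketch only: extend the runtime-env generator and keep just
# the PYTHONPATH coming from the consumer's environment. A `content`
# override could additionally rename the generated activate scripts.
from conans.client.generators.virtualrunenv import VirtualRunEnvGenerator


class VirtualEnvPythonGenerator(VirtualRunEnvGenerator):

    def __init__(self, conanfile):
        super(VirtualEnvPythonGenerator, self).__init__(conanfile)
        self.venv_name = "conanenvpython"
        ppath = conanfile.env.get("PYTHONPATH")
        if ppath:
            # Normalize to a list so it merges with the dependencies' values
            self.env.update({"PYTHONPATH": ppath if isinstance(ppath, list) else [ppath]})
```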
Just another suggestion...
Could we add a feature to specify a list of python modules which will be installed (using pip) in this python virtual environment? We have packages which are dependent on several python modules which each have their own secondary python module dependencies. We have been struggling for some time on how to solve this. We don't want to install to the system in case other projects are using different versions of the modules.
Also, if the installed modules could somehow be added to the dependency graph, that would be even better.
Does this sound feasible?
|
[
{
"body": "Adding PYTHONPATH would make it sufficient to activate a virtualrunenv to use python modules in conan packages that also have dynamic library dependencies.\r\n\r\nWithout PYTHONPATH (as it is now) both a virtualrunenv and virtualenv needs to be activated for such python modules to work.\r\n\r\nI would like to contribute to this feature.",
"number": 5157,
"title": "[suggestion] Add PYTHONPATH to virtualrunenv"
}
] |
0e8187a118ffa50dcacc385dcdae0c47b5c86484
|
{
"head_commit": "b1a3f769ba5d592f94492bbf27d56e47ca96803d",
"head_commit_message": "Merge pull request #1 from lasote/feature/python_virtualenv\n\nFeature/python virtualenv",
"patch_to_review": "diff --git a/conans/client/generators/__init__.py b/conans/client/generators/__init__.py\nindex dde846ad1e0..3fad34f07b5 100644\n--- a/conans/client/generators/__init__.py\n+++ b/conans/client/generators/__init__.py\n@@ -24,6 +24,7 @@\n from .text import TXTGenerator\n from .virtualbuildenv import VirtualBuildEnvGenerator\n from .virtualenv import VirtualEnvGenerator\n+from .virtualenv_python import VirtualEnvPythonGenerator\n from .virtualrunenv import VirtualRunEnvGenerator\n from .visualstudio import VisualStudioGenerator\n from .visualstudio_multi import VisualStudioMultiGenerator\n@@ -70,6 +71,7 @@ def __getitem__(self, key):\n registered_generators.add(\"xcode\", XCodeGenerator)\n registered_generators.add(\"ycm\", YouCompleteMeGenerator)\n registered_generators.add(\"virtualenv\", VirtualEnvGenerator)\n+registered_generators.add(\"virtualenv_python\", VirtualEnvPythonGenerator)\n registered_generators.add(\"virtualbuildenv\", VirtualBuildEnvGenerator)\n registered_generators.add(\"virtualrunenv\", VirtualRunEnvGenerator)\n registered_generators.add(\"boost-build\", BoostBuildGenerator)\ndiff --git a/conans/client/generators/virtualenv_python.py b/conans/client/generators/virtualenv_python.py\nnew file mode 100644\nindex 00000000000..7e66cea2e21\n--- /dev/null\n+++ b/conans/client/generators/virtualenv_python.py\n@@ -0,0 +1,22 @@\n+from conans.client.generators.virtualrunenv import VirtualRunEnvGenerator\n+\n+\n+class VirtualEnvPythonGenerator(VirtualRunEnvGenerator):\n+\n+ def __init__(self, conanfile):\n+ super(VirtualEnvPythonGenerator, self).__init__(conanfile)\n+ self.venv_name = \"conanenvpython\"\n+ ppath = conanfile.env.get(\"PYTHONPATH\")\n+ if ppath:\n+ self.env.update({\"PYTHONPATH\": [ppath, ] if not isinstance(ppath, list) else ppath})\n+\n+ @property\n+ def content(self):\n+ tmp = super(VirtualEnvPythonGenerator, self).content\n+ ret = {}\n+ for name, value in tmp.items():\n+ tmp = name.split(\".\")\n+ ret[\"%s_python.%s\" % (tmp[0], tmp[1])] = value\n+\n+ return ret\n+\ndiff --git a/conans/test/functional/generators/virtualenv_python_test.py b/conans/test/functional/generators/virtualenv_python_test.py\nnew file mode 100644\nindex 00000000000..7c7eacf6e95\n--- /dev/null\n+++ b/conans/test/functional/generators/virtualenv_python_test.py\n@@ -0,0 +1,102 @@\n+import platform\n+import unittest\n+import os\n+\n+from conans.util.files import load\n+\n+from conans.test.utils.tools import TestClient, GenConanfile\n+\n+\n+class VirtualEnvPythonGeneratorTest(unittest.TestCase):\n+\n+ def simple_value_test(self):\n+ client = TestClient()\n+ dep1 = \"\"\"\n+import os\n+from conans import ConanFile\n+\n+class BaseConan(ConanFile):\n+ name = \"base\"\n+ version = \"0.1\"\n+\n+ def package_info(self):\n+ self.env_info.PYTHONPATH=\"/path/to/something\"\n+ self.env_info.LD_LIBRARY_PATH=\"/path/ld_library\"\n+ self.env_info.DYLD_LIBRARY_PATH=\"/path/dyld_library\"\n+ self.env_info.PATH=\"/path/path\"\n+ self.env_info.OTHER=\"23\"\n+\"\"\"\n+\n+ base = '''\n+[requires]\n+base/0.1\n+[generators]\n+virtualenv\n+ '''\n+ client.save({\"conanfile.py\": dep1})\n+ client.run(\"create . \")\n+ client.save({\"conanfile.txt\": base}, clean_first=True)\n+ client.run(\"install . 
-g virtualenv_python\")\n+ name = \"activate_run_python.sh\" if platform.system() != \"Windows\" else \"activate_run_python.bat\"\n+ contents = load(os.path.join(client.current_folder, name))\n+ self.assertNotIn(\"OTHER\", contents)\n+ self.assertIn(\"PATH=\", contents)\n+ self.assertIn(\"LD_LIBRARY_PATH=\", contents)\n+ self.assertIn(\"DYLD_LIBRARY_PATH=\", contents)\n+\n+ if platform.system() != \"Windows\":\n+\n+ self.assertIn('PYTHONPATH=\"/path/to/something\"${PYTHONPATH+:$PYTHONPATH}', contents)\n+ else:\n+ self.assertIn('SET PYTHONPATH=/path/to/something;%PYTHONPATH%', contents)\n+\n+ def multiple_value_test(self):\n+ client = TestClient()\n+ dep1 = \"\"\"\n+from conans import ConanFile\n+\n+class BaseConan(ConanFile):\n+ name = \"base\"\n+ version = \"0.1\"\n+\n+ def package_info(self):\n+ self.env_info.PYTHONPATH=[\"/path/to/something\", \"/otherpath\"]\n+ self.env_info.OTHER=\"23\"\n+\"\"\"\n+\n+ base = '''\n+ [requires]\n+ base/0.1\n+ [generators]\n+ virtualenv\n+ '''\n+ client.save({\"conanfile.py\": dep1})\n+ client.run(\"create . \")\n+ client.save({\"conanfile.txt\": base}, clean_first=True)\n+ client.run(\"install . -g virtualenv_python\")\n+ name = \"activate_run_python.sh\" if platform.system() != \"Windows\" else \"activate_run_python.bat\"\n+ contents = load(os.path.join(client.current_folder, name))\n+ self.assertNotIn(\"OTHER\", contents)\n+ if platform.system() != \"Windows\":\n+ self.assertIn('PYTHONPATH=\"/path/to/something\":\"/otherpath\"'\n+ '${PYTHONPATH+:$PYTHONPATH}', contents)\n+ else:\n+ self.assertIn('SET PYTHONPATH=/path/to/something;/otherpath;%PYTHONPATH%', contents)\n+\n+ def no_value_declared_test(self):\n+ client = TestClient()\n+ dep1 = GenConanfile()\n+\n+ base = '''\n+[requires]\n+base/0.1\n+[generators]\n+virtualenv\n+ '''\n+ client.save({\"conanfile.py\": dep1})\n+ client.run(\"create . base/0.1@\")\n+ client.save({\"conanfile.txt\": base}, clean_first=True)\n+ client.run(\"install . -g virtualenv_python\")\n+ name = \"activate_run_python.sh\" if platform.system() != \"Windows\" else \"activate_python.bat\"\n+ contents = load(os.path.join(client.current_folder, name))\n+ self.assertNotIn(\"PYTHONPATH\", contents)\ndiff --git a/conans/test/unittests/client/generators/virtualenv_python_test.py b/conans/test/unittests/client/generators/virtualenv_python_test.py\nnew file mode 100644\nindex 00000000000..fa879754f5c\n--- /dev/null\n+++ b/conans/test/unittests/client/generators/virtualenv_python_test.py\n@@ -0,0 +1,26 @@\n+import unittest\n+\n+from conans import ConanFile, Settings\n+from conans.client.generators.virtualenv_python import VirtualEnvPythonGenerator\n+from conans.model.env_info import DepsEnvInfo\n+from conans.model.env_info import EnvValues\n+from conans.test.utils.tools import TestBufferConanOutput\n+\n+\n+class VirtualEnvPythonGeneratorTest(unittest.TestCase):\n+\n+ def pythonpath_test(self):\n+ \"\"\"\n+ Check PYTHONPATH env variable\n+ \"\"\"\n+ conanfile = ConanFile(TestBufferConanOutput(), None)\n+ conanfile.initialize(Settings({}), EnvValues.loads(\"PYTHONPATH=[1,2,three]\"))\n+ conanfile.deps_env_info = DepsEnvInfo.loads(\n+ '[ENV_A]\\nPYTHONPATH=[\"DepAPath\"]\\n[ENV_B]\\nPYTHONPATH=[\"DepBPath\"]'\n+ )\n+ gen = VirtualEnvPythonGenerator(conanfile)\n+ content = gen.content\n+\n+ self.assertIn('PYTHONPATH=\"1\":\"2\":\"three\":\"DepAPath\":\"DepBPath\"${PYTHONPATH+:$PYTHONPATH}',\n+ content[\"activate_run_python.sh\"])\n+\n"
}
|
[
{
"diff_hunk": "@@ -0,0 +1,102 @@\n+import platform\n+import unittest\n+import os\n+\n+from conans.util.files import load\n+\n+from conans.test.utils.tools import TestClient, GenConanfile\n+\n+\n+class VirtualEnvPythonGeneratorTest(unittest.TestCase):\n+\n+ def simple_value_test(self):\n+ client = TestClient()\n+ dep1 = \"\"\"\n+import os\n+from conans import ConanFile\n+\n+class BaseConan(ConanFile):\n+ name = \"base\"\n+ version = \"0.1\"\n+\n+ def package_info(self):\n+ self.env_info.PYTHONPATH=\"/path/to/something\"\n+ self.env_info.LD_LIBRARY_PATH=\"/path/ld_library\"\n+ self.env_info.DYLD_LIBRARY_PATH=\"/path/dyld_library\"\n+ self.env_info.PATH=\"/path/path\"\n+ self.env_info.OTHER=\"23\"\n+\"\"\"\n+\n+ base = '''\n+[requires]\n+base/0.1\n+[generators]\n+virtualenv\n+ '''\n+ client.save({\"conanfile.py\": dep1})\n+ client.run(\"create . \")\n+ client.save({\"conanfile.txt\": base}, clean_first=True)\n+ client.run(\"install . -g virtualenv_python\")\n+ name = \"activate_run_python.sh\" if platform.system() != \"Windows\" else \"activate_run_python.bat\"\n+ contents = load(os.path.join(client.current_folder, name))\n+ self.assertNotIn(\"OTHER\", contents)\n+ self.assertIn(\"PATH=\", contents)\n+ self.assertIn(\"LD_LIBRARY_PATH=\", contents)\n+ self.assertIn(\"DYLD_LIBRARY_PATH=\", contents)\n+\n+ if platform.system() != \"Windows\":\n+\n+ self.assertIn('PYTHONPATH=\"/path/to/something\"${PYTHONPATH+:$PYTHONPATH}', contents)\n+ else:\n+ self.assertIn('SET PYTHONPATH=/path/to/something;%PYTHONPATH%', contents)\n+\n+ def multiple_value_test(self):\n+ client = TestClient()\n+ dep1 = \"\"\"\n+from conans import ConanFile\n+\n+class BaseConan(ConanFile):\n+ name = \"base\"\n+ version = \"0.1\"\n+\n+ def package_info(self):\n+ self.env_info.PYTHONPATH=[\"/path/to/something\", \"/otherpath\"]\n+ self.env_info.OTHER=\"23\"\n+\"\"\"\n+\n+ base = '''\n+ [requires]\n+ base/0.1\n+ [generators]\n+ virtualenv\n+ '''\n+ client.save({\"conanfile.py\": dep1})\n+ client.run(\"create . \")\n+ client.save({\"conanfile.txt\": base}, clean_first=True)\n+ client.run(\"install . -g virtualenv_python\")\n+ name = \"activate_run_python.sh\" if platform.system() != \"Windows\" else \"activate_run_python.bat\"\n+ contents = load(os.path.join(client.current_folder, name))\n+ self.assertNotIn(\"OTHER\", contents)\n+ if platform.system() != \"Windows\":\n+ self.assertIn('PYTHONPATH=\"/path/to/something\":\"/otherpath\"'\n+ '${PYTHONPATH+:$PYTHONPATH}', contents)\n+ else:\n+ self.assertIn('SET PYTHONPATH=/path/to/something;/otherpath;%PYTHONPATH%', contents)\n+\n+ def no_value_declared_test(self):\n+ client = TestClient()\n+ dep1 = GenConanfile()\n+\n+ base = '''\n+[requires]\n+base/0.1\n+[generators]\n+virtualenv\n+ '''\n+ client.save({\"conanfile.py\": dep1})\n+ client.run(\"create . base/0.1@\")\n+ client.save({\"conanfile.txt\": base}, clean_first=True)\n+ client.run(\"install . -g virtualenv_python\")\n+ name = \"activate_run_python.sh\" if platform.system() != \"Windows\" else \"activate_python.bat\"",
"line": null,
"original_line": 100,
"original_start_line": null,
"path": "conans/test/functional/generators/virtualenv_python_test.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n name = \"activate_run_python.sh\" if platform.system() != \"Windows\" else \"activate_run_python.bat\"\r\n```\n\n@user1:\nPlease @author accept this commit. I forgot to change a file name."
}
] |
0c50e1339fa5178d15213516fbeee64cf42025f0
|
diff --git a/conans/client/generators/__init__.py b/conans/client/generators/__init__.py
index dde846ad1e0..3fad34f07b5 100644
--- a/conans/client/generators/__init__.py
+++ b/conans/client/generators/__init__.py
@@ -24,6 +24,7 @@
from .text import TXTGenerator
from .virtualbuildenv import VirtualBuildEnvGenerator
from .virtualenv import VirtualEnvGenerator
+from .virtualenv_python import VirtualEnvPythonGenerator
from .virtualrunenv import VirtualRunEnvGenerator
from .visualstudio import VisualStudioGenerator
from .visualstudio_multi import VisualStudioMultiGenerator
@@ -70,6 +71,7 @@ def __getitem__(self, key):
registered_generators.add("xcode", XCodeGenerator)
registered_generators.add("ycm", YouCompleteMeGenerator)
registered_generators.add("virtualenv", VirtualEnvGenerator)
+registered_generators.add("virtualenv_python", VirtualEnvPythonGenerator)
registered_generators.add("virtualbuildenv", VirtualBuildEnvGenerator)
registered_generators.add("virtualrunenv", VirtualRunEnvGenerator)
registered_generators.add("boost-build", BoostBuildGenerator)
diff --git a/conans/client/generators/virtualenv_python.py b/conans/client/generators/virtualenv_python.py
new file mode 100644
index 00000000000..7e66cea2e21
--- /dev/null
+++ b/conans/client/generators/virtualenv_python.py
@@ -0,0 +1,22 @@
+from conans.client.generators.virtualrunenv import VirtualRunEnvGenerator
+
+
+class VirtualEnvPythonGenerator(VirtualRunEnvGenerator):
+
+ def __init__(self, conanfile):
+ super(VirtualEnvPythonGenerator, self).__init__(conanfile)
+ self.venv_name = "conanenvpython"
+ ppath = conanfile.env.get("PYTHONPATH")
+ if ppath:
+ self.env.update({"PYTHONPATH": [ppath, ] if not isinstance(ppath, list) else ppath})
+
+ @property
+ def content(self):
+ tmp = super(VirtualEnvPythonGenerator, self).content
+ ret = {}
+ for name, value in tmp.items():
+ tmp = name.split(".")
+ ret["%s_python.%s" % (tmp[0], tmp[1])] = value
+
+ return ret
+
diff --git a/conans/test/functional/generators/virtualenv_python_test.py b/conans/test/functional/generators/virtualenv_python_test.py
new file mode 100644
index 00000000000..5f7dc70f5cb
--- /dev/null
+++ b/conans/test/functional/generators/virtualenv_python_test.py
@@ -0,0 +1,102 @@
+import platform
+import unittest
+import os
+
+from conans.util.files import load
+
+from conans.test.utils.tools import TestClient, GenConanfile
+
+
+class VirtualEnvPythonGeneratorTest(unittest.TestCase):
+
+ def simple_value_test(self):
+ client = TestClient()
+ dep1 = """
+import os
+from conans import ConanFile
+
+class BaseConan(ConanFile):
+ name = "base"
+ version = "0.1"
+
+ def package_info(self):
+ self.env_info.PYTHONPATH="/path/to/something"
+ self.env_info.LD_LIBRARY_PATH="/path/ld_library"
+ self.env_info.DYLD_LIBRARY_PATH="/path/dyld_library"
+ self.env_info.PATH="/path/path"
+ self.env_info.OTHER="23"
+"""
+
+ base = '''
+[requires]
+base/0.1
+[generators]
+virtualenv
+ '''
+ client.save({"conanfile.py": dep1})
+ client.run("create . ")
+ client.save({"conanfile.txt": base}, clean_first=True)
+ client.run("install . -g virtualenv_python")
+ name = "activate_run_python.sh" if platform.system() != "Windows" else "activate_run_python.bat"
+ contents = load(os.path.join(client.current_folder, name))
+ self.assertNotIn("OTHER", contents)
+ self.assertIn("PATH=", contents)
+ self.assertIn("LD_LIBRARY_PATH=", contents)
+ self.assertIn("DYLD_LIBRARY_PATH=", contents)
+
+ if platform.system() != "Windows":
+
+ self.assertIn('PYTHONPATH="/path/to/something"${PYTHONPATH+:$PYTHONPATH}', contents)
+ else:
+ self.assertIn('SET PYTHONPATH=/path/to/something;%PYTHONPATH%', contents)
+
+ def multiple_value_test(self):
+ client = TestClient()
+ dep1 = """
+from conans import ConanFile
+
+class BaseConan(ConanFile):
+ name = "base"
+ version = "0.1"
+
+ def package_info(self):
+ self.env_info.PYTHONPATH=["/path/to/something", "/otherpath"]
+ self.env_info.OTHER="23"
+"""
+
+ base = '''
+ [requires]
+ base/0.1
+ [generators]
+ virtualenv
+ '''
+ client.save({"conanfile.py": dep1})
+ client.run("create . ")
+ client.save({"conanfile.txt": base}, clean_first=True)
+ client.run("install . -g virtualenv_python")
+ name = "activate_run_python.sh" if platform.system() != "Windows" else "activate_run_python.bat"
+ contents = load(os.path.join(client.current_folder, name))
+ self.assertNotIn("OTHER", contents)
+ if platform.system() != "Windows":
+ self.assertIn('PYTHONPATH="/path/to/something":"/otherpath"'
+ '${PYTHONPATH+:$PYTHONPATH}', contents)
+ else:
+ self.assertIn('SET PYTHONPATH=/path/to/something;/otherpath;%PYTHONPATH%', contents)
+
+ def no_value_declared_test(self):
+ client = TestClient()
+ dep1 = GenConanfile()
+
+ base = '''
+[requires]
+base/0.1
+[generators]
+virtualenv
+ '''
+ client.save({"conanfile.py": dep1})
+ client.run("create . base/0.1@")
+ client.save({"conanfile.txt": base}, clean_first=True)
+ client.run("install . -g virtualenv_python")
+ name = "activate_run_python.sh" if platform.system() != "Windows" else "activate_run_python.bat"
+ contents = load(os.path.join(client.current_folder, name))
+ self.assertNotIn("PYTHONPATH", contents)
diff --git a/conans/test/unittests/client/generators/virtualenv_python_test.py b/conans/test/unittests/client/generators/virtualenv_python_test.py
new file mode 100644
index 00000000000..fa879754f5c
--- /dev/null
+++ b/conans/test/unittests/client/generators/virtualenv_python_test.py
@@ -0,0 +1,26 @@
+import unittest
+
+from conans import ConanFile, Settings
+from conans.client.generators.virtualenv_python import VirtualEnvPythonGenerator
+from conans.model.env_info import DepsEnvInfo
+from conans.model.env_info import EnvValues
+from conans.test.utils.tools import TestBufferConanOutput
+
+
+class VirtualEnvPythonGeneratorTest(unittest.TestCase):
+
+ def pythonpath_test(self):
+ """
+ Check PYTHONPATH env variable
+ """
+ conanfile = ConanFile(TestBufferConanOutput(), None)
+ conanfile.initialize(Settings({}), EnvValues.loads("PYTHONPATH=[1,2,three]"))
+ conanfile.deps_env_info = DepsEnvInfo.loads(
+ '[ENV_A]\nPYTHONPATH=["DepAPath"]\n[ENV_B]\nPYTHONPATH=["DepBPath"]'
+ )
+ gen = VirtualEnvPythonGenerator(conanfile)
+ content = gen.content
+
+ self.assertIn('PYTHONPATH="1":"2":"three":"DepAPath":"DepBPath"${PYTHONPATH+:$PYTHONPATH}',
+ content["activate_run_python.sh"])
+
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
}
|
conan-io__conan-5537@28bc2f8
|
conan-io/conan
|
Python
| 5,537
|
conan search <pattern> --revisions with cache Issue/5472
|
Changelog: Feature: Output current revision from references in local cache when using a pattern
Docs: https://github.com/conan-io/docs/pull/1381
Previously, the current revision could only be shown when a full reference was given with the `--revisions` argument, not a pattern.
Now you can use a pattern to list the current revision of the references in the local cache.
Closes #5472
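A rough illustration of the intended behaviour, written with Conan's own `TestClient` test helper (the reference name and revision hash are just examples):

```python
# Illustration only: enable revisions, create a reference and search by pattern.
from conans.test.utils.tools import TestClient, GenConanfile

client = TestClient()
client.run("config set general.revisions_enabled=1")
client.save({"conanfile.py": GenConanfile()})
client.run("create . lib/1.0@user/testing")
client.run("search li* --revisions")
# Previously this required a full reference; with this change the pattern is
# accepted and each match is printed with its current recipe revision, e.g.
#   lib/1.0@user/testing#bd761686d5c57b31f4cd85fd0329751f (No time)
```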
- [X] Refer to the issue that supports this Pull Request.
- [X] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.
- [X] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [X] I've followed the PEP8 style guides for Python code.
- [ ] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.
<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>
|
2019-07-26T06:52:34Z
|
conan search <pattern> --revisions
I miss a listing of references together with their current revision in the local cache. Why? Even with revisions enabled, the "install" traces ~~don't show which RREV is being used~~ (edit: there is a trace P1/1.0@conan/stable: Downloaded recipe revision c9da69cad5ccb88452fa1c6ed3db54b2), but it would still be practical. If you have missing packages and you want to "debug" why, you first need to see at a glance the resolved revisions for the recipes.
|
[
{
"body": "I miss a list of references but with the current revision in the local cache. Why? Even having the revisions enabled, the \"install\" traces ~~don't show which RREV is being used~~ (edit: there is a trace P1/1.0@conan/stable: Downloaded recipe revision c9da69cad5ccb88452fa1c6ed3db54b2) but still would be practical. If you have missing packages and you want to \"debug\" why first you need to know at first sight the resolved revisions for the recipes.\r\n",
"number": 5472,
"title": "conan search <pattern> --revisions"
}
] |
f4e77dc330437c83f72f0e3053ab619825155766
|
{
"head_commit": "28bc2f877d6a1143194520114dbec9fa63ccb0c9",
"head_commit_message": "add some tests",
"patch_to_review": "diff --git a/conans/client/command.py b/conans/client/command.py\nindex 28eb53379d0..ed5eac8434d 100644\n--- a/conans/client/command.py\n+++ b/conans/client/command.py\n@@ -1212,17 +1212,36 @@ def search(self, *args):\n else:\n info = self._conan.get_package_revisions(repr(pref), remote_name=args.remote)\n \n- if not info:\n- if not ref:\n- msg = \"With --revision, specify a reference (e.g {ref}) or a package \" \\\n- \"reference with \" \\\n- \"recipe revision (e.g {ref}#3453453453:d50a0d523d98c15bb147b18f\" \\\n- \"a7d203887c38be8b)\".format(ref=_REFERENCE_EXAMPLE)\n- raise ConanException(msg)\n- info = self._conan.get_recipe_revisions(repr(ref),\n- remote_name=args.remote)\n- self._outputer.print_revisions(ref, info, remote_name=args.remote)\n- return\n+ if not ref and not info:\n+ exc_msg = \"With --revision, specify a reference (e.g {ref}) a valid pattern \" \\\n+ \"or a package reference with \" \\\n+ \"recipe revision (e.g {ref}#3453453453:d50a0d523d98c15bb147b18f\" \\\n+ \"a7d203887c38be8b)\".format(ref=_REFERENCE_EXAMPLE)\n+ if args.remote:\n+ raise ConanException(exc_msg)\n+ else:\n+ info = self._conan.search_recipes(args.pattern_or_reference,\n+ remote_name=args.remote,\n+ case_sensitive=args.case_sensitive)\n+ if info[\"results\"]:\n+ for remote_info in info[\"results\"]:\n+ for conan_item in remote_info[\"items\"]:\n+ reference = conan_item[\"recipe\"][\"id\"]\n+ ref = ConanFileReference.loads(reference)\n+ rev = self._conan.get_recipe_revisions(repr(ref),\n+ remote_name=args.remote,\n+ check_rev_time=False)\n+ self._outputer.print_revisions(ref, rev, remote_name=args.remote)\n+ else:\n+ raise ConanException(exc_msg)\n+\n+ return\n+ else:\n+ if not info:\n+ info = self._conan.get_recipe_revisions(repr(ref),\n+ remote_name=args.remote)\n+ self._outputer.print_revisions(ref, info, remote_name=args.remote)\n+ return\n \n if ref:\n info = self._conan.search_packages(repr(ref), query=args.query,\ndiff --git a/conans/client/conan_api.py b/conans/client/conan_api.py\nindex 404984f17f2..2bf6f3ec0a1 100644\n--- a/conans/client/conan_api.py\n+++ b/conans/client/conan_api.py\n@@ -1093,7 +1093,7 @@ def get_remote_by_name(self, remote_name):\n return self._cache.registry.load_remotes()[remote_name]\n \n @api_method\n- def get_recipe_revisions(self, reference, remote_name=None):\n+ def get_recipe_revisions(self, reference, remote_name=None, check_rev_time=True):\n if not self._cache.config.revisions_enabled:\n raise ConanException(\"The client doesn't have the revisions feature enabled.\"\n \" Enable this feature setting to '1' the environment variable\"\n@@ -1111,20 +1111,21 @@ def get_recipe_revisions(self, reference, remote_name=None):\n e.print_rev = True\n raise e\n \n- # Check the time in the associated remote if any\n- remote_name = layout.load_metadata().recipe.remote\n- remote = self._cache.registry.load_remotes()[remote_name] if remote_name else None\n rev_time = None\n- if remote:\n- try:\n- revisions = self._remote_manager.get_recipe_revisions(ref, remote)\n- except RecipeNotFoundException:\n- pass\n- except (NoRestV2Available, NotFoundException):\n- rev_time = None\n- else:\n- tmp = {r[\"revision\"]: r[\"time\"] for r in revisions}\n- rev_time = tmp.get(rev)\n+ if check_rev_time:\n+ # Check the time in the associated remote if any\n+ remote_name = layout.load_metadata().recipe.remote\n+ remote = self._cache.registry.load_remotes()[remote_name] if remote_name else None\n+ if remote:\n+ try:\n+ revisions = self._remote_manager.get_recipe_revisions(ref, 
remote)\n+ except RecipeNotFoundException:\n+ pass\n+ except (NoRestV2Available, NotFoundException):\n+ rev_time = None\n+ else:\n+ tmp = {r[\"revision\"]: r[\"time\"] for r in revisions}\n+ rev_time = tmp.get(rev)\n \n return [{\"revision\": rev, \"time\": rev_time}]\n else:\ndiff --git a/conans/test/functional/command/search_test.py b/conans/test/functional/command/search_test.py\nindex 5dcdb9940c5..fe63d41d8a9 100644\n--- a/conans/test/functional/command/search_test.py\n+++ b/conans/test/functional/command/search_test.py\n@@ -1194,6 +1194,10 @@ class Test(ConanFile):\n client.run(\"search lib/1.0@user/testing --revisions\")\n self.assertIn(\"bd761686d5c57b31f4cd85fd0329751f (No time)\", client.out)\n \n+ # test that the pattern search with --revisions enabled works\n+ client.run(\"search li* --revisions\")\n+ self.assertIn(\"bd761686d5c57b31f4cd85fd0329751f (No time)\", client.out)\n+\n with patch.object(RevisionList, '_now', return_value=the_time):\n client.run(\"upload lib/1.0@user/testing -c\")\n \n@@ -1348,6 +1352,10 @@ class Test(ConanFile):\n \n # IN REMOTE\n \n+ # Search with pattern and remotes\n+ client.run(\"search * --revisions -r default\", assert_error=True)\n+ self.assertIn(\"ERROR: With --revision, specify a reference\", client.out)\n+\n # Search not found in remote\n client.run(\"search missing/1.0@conan/stable --revisions -r default\", assert_error=True)\n self.assertIn(\"ERROR: Recipe not found: 'missing/1.0@conan/stable'\", client.out)\n"
}
|
[
{
"diff_hunk": "@@ -1093,7 +1093,7 @@ def get_remote_by_name(self, remote_name):\n return self._cache.registry.load_remotes()[remote_name]\n \n @api_method\n- def get_recipe_revisions(self, reference, remote_name=None):\n+ def get_recipe_revisions(self, reference, remote_name=None, check_rev_time=True):",
"line": null,
"original_line": 1096,
"original_start_line": null,
"path": "conans/client/conan_api.py",
"start_line": null,
"text": "@user1:\nI think we shouldn't change this method. That `check_rev_time` is a patch."
},
{
"diff_hunk": "@@ -1212,17 +1212,36 @@ def search(self, *args):\n else:\n info = self._conan.get_package_revisions(repr(pref), remote_name=args.remote)\n \n- if not info:\n- if not ref:\n- msg = \"With --revision, specify a reference (e.g {ref}) or a package \" \\\n- \"reference with \" \\\n- \"recipe revision (e.g {ref}#3453453453:d50a0d523d98c15bb147b18f\" \\\n- \"a7d203887c38be8b)\".format(ref=_REFERENCE_EXAMPLE)\n- raise ConanException(msg)\n- info = self._conan.get_recipe_revisions(repr(ref),\n- remote_name=args.remote)\n- self._outputer.print_revisions(ref, info, remote_name=args.remote)\n- return\n+ if not ref and not info:\n+ exc_msg = \"With --revision, specify a reference (e.g {ref}) a valid pattern \" \\\n+ \"or a package reference with \" \\\n+ \"recipe revision (e.g {ref}#3453453453:d50a0d523d98c15bb147b18f\" \\\n+ \"a7d203887c38be8b)\".format(ref=_REFERENCE_EXAMPLE)\n+ if args.remote:\n+ raise ConanException(exc_msg)\n+ else:\n+ info = self._conan.search_recipes(args.pattern_or_reference,\n+ remote_name=args.remote,\n+ case_sensitive=args.case_sensitive)\n+ if info[\"results\"]:",
"line": null,
"original_line": 1226,
"original_start_line": null,
"path": "conans/client/command.py",
"start_line": null,
"text": "@user1:\nThis feels hacky too"
},
{
"diff_hunk": "@@ -1194,6 +1194,10 @@ class Test(ConanFile):\n client.run(\"search lib/1.0@user/testing --revisions\")\n self.assertIn(\"bd761686d5c57b31f4cd85fd0329751f (No time)\", client.out)\n \n+ # test that the pattern search with --revisions enabled works\n+ client.run(\"search li* --revisions\")\n+ self.assertIn(\"bd761686d5c57b31f4cd85fd0329751f (No time)\", client.out)",
"line": null,
"original_line": 1199,
"original_start_line": null,
"path": "conans/test/functional/command/search_test.py",
"start_line": null,
"text": "@user2:\nIs it `lib/1.0@user1/testing#bd761686d5c57b31f4cd85fd0329751f (No time)`? In that case assert the whole line because it is confusing what is `bd761686d5c57b31f4cd85fd0329751f`"
}
] |
25009a644e14772d3f81787f19de8d79143664c7
|
diff --git a/conans/client/command.py b/conans/client/command.py
index 50b68d24a57..0892380a71d 100644
--- a/conans/client/command.py
+++ b/conans/client/command.py
@@ -1211,23 +1211,38 @@ def search(self, *args):
try:
if args.revisions:
+ # Show revisions of a ref
+ if ref:
+ info = self._conan.get_recipe_revisions(repr(ref), remote_name=args.remote)
+ self._outputer.print_revisions(ref, info, remote_name=args.remote)
+ return
+
+ # Show revisions of pref
try:
pref = PackageReference.loads(args.pattern_or_reference)
except (TypeError, ConanException, AttributeError):
pass
else:
info = self._conan.get_package_revisions(repr(pref), remote_name=args.remote)
-
- if not info:
- if not ref:
- msg = "With --revision, specify a reference (e.g {ref}) or a package " \
- "reference with " \
- "recipe revision (e.g {ref}#3453453453:d50a0d523d98c15bb147b18f" \
+ self._outputer.print_revisions(ref, info, remote_name=args.remote)
+ return
+
+ # A pattern: Listing references by pattern but showing revisions
+ if args.remote:
+ exc_msg = "With --revision, specify a reference (e.g {ref}) " \
+ "a valid pattern " \
+ "or a package reference with " \
+ "recipe revision (e.g {ref}#3453453453:" \
+ "d50a0d523d98c15bb147b18f" \
"a7d203887c38be8b)".format(ref=_REFERENCE_EXAMPLE)
- raise ConanException(msg)
- info = self._conan.get_recipe_revisions(repr(ref),
- remote_name=args.remote)
- self._outputer.print_revisions(ref, info, remote_name=args.remote)
+ raise ConanException(exc_msg)
+
+ info = self._conan.search_recipes(args.pattern_or_reference, remote_name=None,
+ case_sensitive=args.case_sensitive,
+ fill_revisions=True)
+ self._outputer.print_search_references(info["results"],
+ args.pattern_or_reference,
+ args.raw, all_remotes_search=None)
return
if ref:
diff --git a/conans/client/conan_api.py b/conans/client/conan_api.py
index aea22976af9..c95312b0ca0 100644
--- a/conans/client/conan_api.py
+++ b/conans/client/conan_api.py
@@ -2,6 +2,8 @@
import sys
from collections import OrderedDict
+from conans.paths.package_layouts.package_cache_layout import PackageCacheLayout
+
import conans
from conans import __version__ as client_version
from conans.client import packager, tools
@@ -851,7 +853,8 @@ def users_list(self, remote_name=None):
raise
@api_method
- def search_recipes(self, pattern, remote_name=None, case_sensitive=False):
+ def search_recipes(self, pattern, remote_name=None, case_sensitive=False,
+ fill_revisions=False):
search_recorder = SearchRecorder()
remotes = self.app.cache.registry.load_remotes()
search = Search(self.app.cache, self.app.remote_manager, remotes)
@@ -865,6 +868,11 @@ def search_recipes(self, pattern, remote_name=None, case_sensitive=False):
for remote_name, refs in references.items():
for ref in refs:
+ if fill_revisions:
+ layout = self.app.cache.package_layout(ref)
+ if isinstance(layout, PackageCacheLayout):
+ ref = ref.copy_with_rev(layout.recipe_revision())
+
search_recorder.add_recipe(remote_name, ref, with_packages=False)
return search_recorder.get_info()
diff --git a/conans/client/printer.py b/conans/client/printer.py
index fac414d77c7..116a2d58084 100644
--- a/conans/client/printer.py
+++ b/conans/client/printer.py
@@ -145,7 +145,7 @@ def print_search_recipes(self, search_info, pattern, raw, all_remotes_search):
for conan_item in remote_info["items"]:
reference = conan_item["recipe"]["id"]
ref = ConanFileReference.loads(reference)
- self._print_colored_line(str(ref), indent=0)
+ self._print_colored_line(ref.full_str(), indent=0)
else:
for remote_info in search_info:
if all_remotes_search:
@@ -153,7 +153,7 @@ def print_search_recipes(self, search_info, pattern, raw, all_remotes_search):
for conan_item in remote_info["items"]:
reference = conan_item["recipe"]["id"]
ref = ConanFileReference.loads(reference)
- self._out.writeln(str(ref))
+ self._out.writeln(ref.full_str())
def print_search_packages(self, search_info, ref, packages_query,
outdated=False):
diff --git a/conans/client/recorder/search_recorder.py b/conans/client/recorder/search_recorder.py
index e1754b3b91a..d8dc117644d 100644
--- a/conans/client/recorder/search_recorder.py
+++ b/conans/client/recorder/search_recorder.py
@@ -5,7 +5,7 @@ class _SearchRecipe(namedtuple("SearchRecipe", "ref")):
with_packages = True
def to_dict(self):
- data = {"id": str(self.ref)}
+ data = {"id": repr(self.ref)}
return data
diff --git a/conans/test/functional/command/search_test.py b/conans/test/functional/command/search_test.py
index 3636cd3583a..ea355fb015b 100644
--- a/conans/test/functional/command/search_test.py
+++ b/conans/test/functional/command/search_test.py
@@ -1221,8 +1221,8 @@ class Test(ConanFile):
def test_exception_client_without_revs(self):
client = TestClient()
- client.run("search whatever --revisions", assert_error=True)
- self.assertIn("ERROR: With --revision, specify a reference", client.out)
+ client.run("search whatever --revisions")
+ self.assertIn("There are no packages matching the 'whatever' pattern", client.out)
client.run("search lib/0.1@user/testing --revisions", assert_error=True)
self.assertIn("ERROR: The client doesn't have the revisions feature enabled", client.out)
@@ -1254,6 +1254,10 @@ class Test(ConanFile):
client.run("search lib/1.0@user/testing --revisions")
self.assertIn("bd761686d5c57b31f4cd85fd0329751f (No time)", client.out)
+ # test that the pattern search with --revisions enabled works
+ client.run("search li* --revisions")
+ self.assertIn("lib/1.0@user/testing#bd761686d5c57b31f4cd85fd0329751f (No time)", client.out)
+
with patch.object(RevisionList, '_now', return_value=the_time):
client.run("upload lib/1.0@user/testing -c")
@@ -1408,6 +1412,10 @@ class Test(ConanFile):
# IN REMOTE
+ # Search with pattern and remotes
+ client.run("search * --revisions -r default", assert_error=True)
+ self.assertIn("ERROR: With --revision, specify a reference", client.out)
+
# Search not found in remote
client.run("search missing/1.0@conan/stable --revisions -r default", assert_error=True)
self.assertIn("ERROR: Recipe not found: 'missing/1.0@conan/stable'", client.out)
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
}
|
|
conan-io__conan-5461@5d06f8e
|
conan-io/conan
|
Python
| 5,461
|
Do not raise when accessing the metadata of editable packages
|
Changelog: Bugfix: Do not raise when accessing the metadata of editable packages
Docs: omit
- [x] Refer to the issue that supports this Pull Request: closes #5424
- [x] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.
- [x] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've followed the PEP8 style guides for Python code.
- [x] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.
<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>
|
2019-07-08T14:22:25Z
|
conan remote remove fails with editable packages
Trying to update the metadata, it fails because the `PackageEditableLayout` has no `update_metadata()`. The main questions are:
- Should the editable packages still have the metadata methods available?
or
- Should we force always to retrieve the `PackageCacheLayout` sometimes, e.g from the `remote_registry.py`?
```
Traceback (most recent call last):
File "/home/luism/workspace/conan_sources/conans/client/command.py", line 1832, in run
method(args[0][1:])
File "/home/luism/workspace/conan_sources/conans/client/command.py", line 1423, in remote
return self._conan.remote_remove(remote_name)
File "/home/luism/workspace/conan_sources/conans/client/conan_api.py", line 77, in wrapper
return f(*args, **kwargs)
File "/home/luism/workspace/conan_sources/conans/client/conan_api.py", line 922, in remote_remove
return self._cache.registry.remove(remote_name)
File "/home/luism/workspace/conan_sources/conans/client/cache/remote_registry.py", line 301, in remove
with self._cache.package_layout(ref).update_metadata() as metadata:
AttributeError: 'PackageEditableLayout' object has no attribute 'update_metadata'
```
|
@memsharded I would like to know what you think about this.
I think that so far it doesn't make sense, as the metadata covers revisions (we don't have them while editable) and the remote (same). If the package is in editable mode, I think it doesn't make much sense to return PackageCacheLayout for it; so far the PackageEditableLayout has placeholder methods to provide better error messages.
Agree. But then what, how to solve this issue? Should the `remote remove` just ignore the editable layouts? When the package is no longer in "editable mode" it will have metadata with a wrong remote (removed).
I see, you are right.
I think there are 2 approaches:
- Explicitly force getting the cache metadata, like: ``with self._cache.package_layout(ref, from_cache=True).update_metadata() as metadata:``
- Explicitly disable the ``editables``, e.g. in ``RemoteRegistry.remove`` set ``cache.editables = {}`` and restore it at the end of the function.
I think at the moment I prefer the latter one, until we have more cases.
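A minimal sketch of that second approach, assuming a small context manager on `RemoteRegistry` that temporarily hides the editable packages (the `try/finally` restore is an extra safety not spelled out in the discussion):

```python
from contextlib import contextmanager


class RemoteRegistry(object):
    # ... existing attributes and methods elided ...

    @contextmanager
    def _editables_metadata_from_cache(self):
        # Hide editable packages so package_layout(ref) returns the cache
        # layout (which has update_metadata()) instead of the editable one.
        editables = self._cache.editable_packages
        self._cache.editable_packages = {}
        try:
            yield
        finally:
            self._cache.editable_packages = editables
```

The metadata loops in `add()`, `clear()` and `remove()` would then simply be wrapped in `with self._editables_metadata_from_cache():`.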
|
[
{
"body": "Trying to update the metadata, it fails because the `PackageEditableLayout` has no `update_metadata()`. The main questions are:\r\n- Should the editable packages still have the metadata methods available? \r\nor \r\n- Should we force always to retrieve the `PackageCacheLayout` sometimes, e.g from the `remote_registry.py`?\r\n\r\n```\r\nTraceback (most recent call last): \r\n File \"/home/luism/workspace/conan_sources/conans/client/command.py\", line 1832, in run \r\n method(args[0][1:]) \r\n File \"/home/luism/workspace/conan_sources/conans/client/command.py\", line 1423, in remote \r\n return self._conan.remote_remove(remote_name) \r\n File \"/home/luism/workspace/conan_sources/conans/client/conan_api.py\", line 77, in wrapper \r\n return f(*args, **kwargs)\r\n File \"/home/luism/workspace/conan_sources/conans/client/conan_api.py\", line 922, in remote_remove\r\n return self._cache.registry.remove(remote_name)\r\n File \"/home/luism/workspace/conan_sources/conans/client/cache/remote_registry.py\", line 301, in remove\r\n with self._cache.package_layout(ref).update_metadata() as metadata:\r\nAttributeError: 'PackageEditableLayout' object has no attribute 'update_metadata'\r\n```",
"number": 5424,
"title": "conan remote remove fails with editable packages"
}
] |
de33bf7c0372776682fb30bceff0425282f164ea
|
{
"head_commit": "5d06f8ef7b9ab4f7554adbdc483e9668d8d73584",
"head_commit_message": "fix missing call",
"patch_to_review": "diff --git a/conans/client/cache/remote_registry.py b/conans/client/cache/remote_registry.py\nindex c202e8ec32a..3bd1b173c15 100644\n--- a/conans/client/cache/remote_registry.py\n+++ b/conans/client/cache/remote_registry.py\n@@ -1,6 +1,7 @@\n import json\n import os\n from collections import OrderedDict, namedtuple\n+from contextlib import contextmanager\n from six.moves.urllib.parse import urlparse\n \n from conans.errors import ConanException, NoRemoteAvailable\n@@ -267,6 +268,16 @@ def _validate_url(self, url):\n else:\n self._output.warn(\"The URL is empty. It must contain scheme and hostname.\")\n \n+ @contextmanager\n+ def _editables_metadata_from_cache(self):\n+ \"\"\"\n+ Hide editable packages to get the cache layout instead of the editable one\n+ \"\"\"\n+ editables = self._cache.editable_packages\n+ self._cache.editable_packages = {}\n+ yield\n+ self._cache.editable_packages = editables\n+\n def load_remotes(self):\n if not os.path.exists(self._filename):\n self._output.warn(\"Remotes registry file missing, \"\n@@ -284,13 +295,14 @@ def add(self, remote_name, url, verify_ssl=True, insert=None, force=None):\n renamed = remotes.add(remote_name, url, verify_ssl, insert, force)\n remotes.save(self._filename)\n if renamed:\n- for ref in self._cache.all_refs():\n- with self._cache.package_layout(ref).update_metadata() as metadata:\n- if metadata.recipe.remote == renamed:\n- metadata.recipe.remote = remote_name\n- for pkg_metadata in metadata.packages.values():\n- if pkg_metadata.remote == renamed:\n- pkg_metadata.remote = remote_name\n+ with self._editables_metadata_from_cache():\n+ for ref in self._cache.all_refs():\n+ with self._cache.package_layout(ref).update_metadata() as metadata:\n+ if metadata.recipe.remote == renamed:\n+ metadata.recipe.remote = remote_name\n+ for pkg_metadata in metadata.packages.values():\n+ if pkg_metadata.remote == renamed:\n+ pkg_metadata.remote = remote_name\n \n def update(self, remote_name, url, verify_ssl=True, insert=None):\n self._validate_url(url)\n@@ -301,52 +313,54 @@ def update(self, remote_name, url, verify_ssl=True, insert=None):\n def clear(self):\n remotes = self.load_remotes()\n remotes.clear()\n- for ref in self._cache.all_refs():\n- with self._cache.package_layout(ref).update_metadata() as metadata:\n- metadata.recipe.remote = None\n- for pkg_metadata in metadata.packages.values():\n- pkg_metadata.remote = None\n- remotes.save(self._filename)\n+ with self._editables_metadata_from_cache():\n+ for ref in self._cache.all_refs():\n+ with self._cache.package_layout(ref).update_metadata() as metadata:\n+ metadata.recipe.remote = None\n+ for pkg_metadata in metadata.packages.values():\n+ pkg_metadata.remote = None\n+ remotes.save(self._filename)\n \n def remove(self, remote_name):\n remotes = self.load_remotes()\n del remotes[remote_name]\n+ with self._editables_metadata_from_cache():\n+ for ref in self._cache.all_refs():\n+ with self._cache.package_layout(ref).update_metadata() as metadata:\n+ if metadata.recipe.remote == remote_name:\n+ metadata.recipe.remote = None\n+ for pkg_metadata in metadata.packages.values():\n+ if pkg_metadata.remote == remote_name:\n+ pkg_metadata.remote = None\n \n- for ref in self._cache.all_refs():\n- with self._cache.package_layout(ref).update_metadata() as metadata:\n- if metadata.recipe.remote == remote_name:\n- metadata.recipe.remote = None\n- for pkg_metadata in metadata.packages.values():\n- if pkg_metadata.remote == remote_name:\n- pkg_metadata.remote = None\n-\n- 
remotes.save(self._filename)\n+ remotes.save(self._filename)\n \n def define(self, remotes):\n # For definition from conan config install\n- for ref in self._cache.all_refs():\n- with self._cache.package_layout(ref).update_metadata() as metadata:\n- if metadata.recipe.remote not in remotes:\n- metadata.recipe.remote = None\n- for pkg_metadata in metadata.packages.values():\n- if pkg_metadata.remote not in remotes:\n- pkg_metadata.remote = None\n+ with self._editables_metadata_from_cache():\n+ for ref in self._cache.all_refs():\n+ with self._cache.package_layout(ref).update_metadata() as metadata:\n+ if metadata.recipe.remote not in remotes:\n+ metadata.recipe.remote = None\n+ for pkg_metadata in metadata.packages.values():\n+ if pkg_metadata.remote not in remotes:\n+ pkg_metadata.remote = None\n \n- remotes.save(self._filename)\n+ remotes.save(self._filename)\n \n def rename(self, remote_name, new_remote_name):\n remotes = self.load_remotes()\n remotes.rename(remote_name, new_remote_name)\n+ with self._editables_metadata_from_cache():\n+ for ref in self._cache.all_refs():\n+ with self._cache.package_layout(ref).update_metadata() as metadata:\n+ if metadata.recipe.remote == remote_name:\n+ metadata.recipe.remote = new_remote_name\n+ for pkg_metadata in metadata.packages.values():\n+ if pkg_metadata.remote == remote_name:\n+ pkg_metadata.remote = new_remote_name\n \n- for ref in self._cache.all_refs():\n- with self._cache.package_layout(ref).update_metadata() as metadata:\n- if metadata.recipe.remote == remote_name:\n- metadata.recipe.remote = new_remote_name\n- for pkg_metadata in metadata.packages.values():\n- if pkg_metadata.remote == remote_name:\n- pkg_metadata.remote = new_remote_name\n-\n- remotes.save(self._filename)\n+ remotes.save(self._filename)\n \n @property\n def refs_list(self):\ndiff --git a/conans/test/functional/command/remote_test.py b/conans/test/functional/command/remote_test.py\nindex ee6a25b77e6..3e7e9600dee 100644\n--- a/conans/test/functional/command/remote_test.py\n+++ b/conans/test/functional/command/remote_test.py\n@@ -392,3 +392,43 @@ def test_invalid_url(self):\n self.client.user_io.out)\n self.client.run(\"remote list\")\n self.assertIn(\"pepe.org\", self.client.out)\n+\n+ def test_metadata_editable_packages(self):\n+ \"\"\"\n+ conan remote remove fails with editable packages\n+ \"\"\"\n+ self.client.save({\"conanfile.py\": \"\"\"from conans import ConanFile\n+class Conan(ConanFile):\n+ pass\"\"\"})\n+ self.client.run(\"create . pkg/1.1@lasote/stable\")\n+ self.client.run(\"upload pkg/1.1@lasote/stable --all -c --remote remote1\")\n+ self.client.run(\"remove -f pkg/1.1@lasote/stable\")\n+ self.client.run(\"install pkg/1.1@lasote/stable\")\n+ self.assertIn(\"pkg/1.1@lasote/stable: Package installed\", self.client.out)\n+ self.client.run(\"remote list_ref\")\n+ self.assertIn(\"pkg/1.1@lasote/stable: remote1\", self.client.out)\n+ self.client.run(\"editable add . pkg/1.1@lasote/stable\")\n+ # Check add --force, update and rename\n+ self.client.run(\"remote add remote2 %s --force\" % self.servers[\"remote1\"].fake_url)\n+ self.client.run(\"remote update remote2 %sfake\" % self.servers[\"remote1\"].fake_url)\n+ self.client.run(\"remote rename remote2 remote-fake\")\n+ self.client.run(\"editable remove pkg/1.1@lasote/stable\")\n+ # Check associated remote has changed name\n+ self.client.run(\"remote list_ref\")\n+ self.assertIn(\"pkg/1.1@lasote/stable: remote-fake\", self.client.out)\n+ # Check remove\n+ self.client.run(\"editable add . 
pkg/1.1@lasote/stable\")\n+ self.client.run(\"remote remove remote-fake\")\n+ self.client.run(\"remote list\")\n+ self.assertIn(\"remote0: %s\" % self.servers[\"remote0\"].fake_url, self.client.out)\n+ self.assertNotIn(\"remote-fake\", self.client.out)\n+ # Check clean\n+ self.client.run(\"editable remove pkg/1.1@lasote/stable\")\n+ self.client.run(\"remove -f pkg/1.1@lasote/stable\")\n+ self.client.run(\"remote add remote1 %s\" % self.servers[\"remote1\"].fake_url)\n+ self.client.run(\"install pkg/1.1@lasote/stable\")\n+ self.client.run(\"editable add . pkg/1.1@lasote/stable\")\n+ self.client.run(\"remote clean\")\n+ self.client.run(\"remote list\")\n+ self.assertNotIn(\"remote1\", self.client.out)\n+ self.assertNotIn(\"remote0\", self.client.out)\n"
}
|
[
{
"diff_hunk": "@@ -267,6 +268,16 @@ def _validate_url(self, url):\n else:\n self._output.warn(\"The URL is empty. It must contain scheme and hostname.\")\n \n+ @contextmanager\n+ def _editables_metadata_from_cache(self):\n+ \"\"\"\n+ Hide editable packages to get the cache layout instead of the editable one\n+ \"\"\"\n+ editables = self._cache.editable_packages\n+ self._cache.editable_packages = {}",
"line": null,
"original_line": 277,
"original_start_line": null,
"path": "conans/client/cache/remote_registry.py",
"start_line": null,
"text": "@user1:\nThe member `editable_packages` is not a dictionary, it is an instance of `EditablePackages` class, we should override the `_edited_refs` member. I would like to keep the same type, one never knows if there is a `isinstance` check somewhere."
}
] |
ac50278e400585b4fbc490119a2bbee93fc5e2bc
|
diff --git a/conans/client/cache/editable.py b/conans/client/cache/editable.py
index 0bf08637ca6..6334a80e1c6 100644
--- a/conans/client/cache/editable.py
+++ b/conans/client/cache/editable.py
@@ -1,5 +1,6 @@
import json
import os
+from contextlib import contextmanager
from os.path import join, normpath
from conans.model.ref import ConanFileReference
@@ -49,3 +50,14 @@ def remove(self, ref):
def override(self, workspace_edited):
self._edited_refs = workspace_edited
+
+ @contextmanager
+ def disable_editables(self):
+ """
+ Temporary disable editables, if we want to make operations on the cache, as updating
+ remotes in packages metadata.
+ """
+ edited_refs = self._edited_refs
+ self._edited_refs = {}
+ yield
+ self._edited_refs = edited_refs
diff --git a/conans/client/cache/remote_registry.py b/conans/client/cache/remote_registry.py
index c202e8ec32a..4b5dc971883 100644
--- a/conans/client/cache/remote_registry.py
+++ b/conans/client/cache/remote_registry.py
@@ -284,13 +284,14 @@ def add(self, remote_name, url, verify_ssl=True, insert=None, force=None):
renamed = remotes.add(remote_name, url, verify_ssl, insert, force)
remotes.save(self._filename)
if renamed:
- for ref in self._cache.all_refs():
- with self._cache.package_layout(ref).update_metadata() as metadata:
- if metadata.recipe.remote == renamed:
- metadata.recipe.remote = remote_name
- for pkg_metadata in metadata.packages.values():
- if pkg_metadata.remote == renamed:
- pkg_metadata.remote = remote_name
+ with self._cache.editable_packages.disable_editables():
+ for ref in self._cache.all_refs():
+ with self._cache.package_layout(ref).update_metadata() as metadata:
+ if metadata.recipe.remote == renamed:
+ metadata.recipe.remote = remote_name
+ for pkg_metadata in metadata.packages.values():
+ if pkg_metadata.remote == renamed:
+ pkg_metadata.remote = remote_name
def update(self, remote_name, url, verify_ssl=True, insert=None):
self._validate_url(url)
@@ -301,52 +302,54 @@ def update(self, remote_name, url, verify_ssl=True, insert=None):
def clear(self):
remotes = self.load_remotes()
remotes.clear()
- for ref in self._cache.all_refs():
- with self._cache.package_layout(ref).update_metadata() as metadata:
- metadata.recipe.remote = None
- for pkg_metadata in metadata.packages.values():
- pkg_metadata.remote = None
- remotes.save(self._filename)
+ with self._cache.editable_packages.disable_editables():
+ for ref in self._cache.all_refs():
+ with self._cache.package_layout(ref).update_metadata() as metadata:
+ metadata.recipe.remote = None
+ for pkg_metadata in metadata.packages.values():
+ pkg_metadata.remote = None
+ remotes.save(self._filename)
def remove(self, remote_name):
remotes = self.load_remotes()
del remotes[remote_name]
+ with self._cache.editable_packages.disable_editables():
+ for ref in self._cache.all_refs():
+ with self._cache.package_layout(ref).update_metadata() as metadata:
+ if metadata.recipe.remote == remote_name:
+ metadata.recipe.remote = None
+ for pkg_metadata in metadata.packages.values():
+ if pkg_metadata.remote == remote_name:
+ pkg_metadata.remote = None
- for ref in self._cache.all_refs():
- with self._cache.package_layout(ref).update_metadata() as metadata:
- if metadata.recipe.remote == remote_name:
- metadata.recipe.remote = None
- for pkg_metadata in metadata.packages.values():
- if pkg_metadata.remote == remote_name:
- pkg_metadata.remote = None
-
- remotes.save(self._filename)
+ remotes.save(self._filename)
def define(self, remotes):
# For definition from conan config install
- for ref in self._cache.all_refs():
- with self._cache.package_layout(ref).update_metadata() as metadata:
- if metadata.recipe.remote not in remotes:
- metadata.recipe.remote = None
- for pkg_metadata in metadata.packages.values():
- if pkg_metadata.remote not in remotes:
- pkg_metadata.remote = None
+ with self._cache.editable_packages.disable_editables():
+ for ref in self._cache.all_refs():
+ with self._cache.package_layout(ref).update_metadata() as metadata:
+ if metadata.recipe.remote not in remotes:
+ metadata.recipe.remote = None
+ for pkg_metadata in metadata.packages.values():
+ if pkg_metadata.remote not in remotes:
+ pkg_metadata.remote = None
- remotes.save(self._filename)
+ remotes.save(self._filename)
def rename(self, remote_name, new_remote_name):
remotes = self.load_remotes()
remotes.rename(remote_name, new_remote_name)
+ with self._cache.editable_packages.disable_editables():
+ for ref in self._cache.all_refs():
+ with self._cache.package_layout(ref).update_metadata() as metadata:
+ if metadata.recipe.remote == remote_name:
+ metadata.recipe.remote = new_remote_name
+ for pkg_metadata in metadata.packages.values():
+ if pkg_metadata.remote == remote_name:
+ pkg_metadata.remote = new_remote_name
- for ref in self._cache.all_refs():
- with self._cache.package_layout(ref).update_metadata() as metadata:
- if metadata.recipe.remote == remote_name:
- metadata.recipe.remote = new_remote_name
- for pkg_metadata in metadata.packages.values():
- if pkg_metadata.remote == remote_name:
- pkg_metadata.remote = new_remote_name
-
- remotes.save(self._filename)
+ remotes.save(self._filename)
@property
def refs_list(self):
diff --git a/conans/test/functional/command/remote_test.py b/conans/test/functional/command/remote_test.py
index ee6a25b77e6..dd2e930ca2d 100644
--- a/conans/test/functional/command/remote_test.py
+++ b/conans/test/functional/command/remote_test.py
@@ -392,3 +392,43 @@ def test_invalid_url(self):
self.client.user_io.out)
self.client.run("remote list")
self.assertIn("pepe.org", self.client.out)
+
+ def test_metadata_editable_packages(self):
+ """
+ Check that 'conan remote' commands work with editable packages
+ """
+ self.client.save({"conanfile.py": """from conans import ConanFile
+class Conan(ConanFile):
+ pass"""})
+ self.client.run("create . pkg/1.1@lasote/stable")
+ self.client.run("upload pkg/1.1@lasote/stable --all -c --remote remote1")
+ self.client.run("remove -f pkg/1.1@lasote/stable")
+ self.client.run("install pkg/1.1@lasote/stable")
+ self.assertIn("pkg/1.1@lasote/stable: Package installed", self.client.out)
+ self.client.run("remote list_ref")
+ self.assertIn("pkg/1.1@lasote/stable: remote1", self.client.out)
+ self.client.run("editable add . pkg/1.1@lasote/stable")
+ # Check add --force, update and rename
+ self.client.run("remote add remote2 %s --force" % self.servers["remote1"].fake_url)
+ self.client.run("remote update remote2 %sfake" % self.servers["remote1"].fake_url)
+ self.client.run("remote rename remote2 remote-fake")
+ self.client.run("editable remove pkg/1.1@lasote/stable")
+ # Check associated remote has changed name
+ self.client.run("remote list_ref")
+ self.assertIn("pkg/1.1@lasote/stable: remote-fake", self.client.out)
+ # Check remove
+ self.client.run("editable add . pkg/1.1@lasote/stable")
+ self.client.run("remote remove remote-fake")
+ self.client.run("remote list")
+ self.assertIn("remote0: %s" % self.servers["remote0"].fake_url, self.client.out)
+ self.assertNotIn("remote-fake", self.client.out)
+ # Check clean
+ self.client.run("editable remove pkg/1.1@lasote/stable")
+ self.client.run("remove -f pkg/1.1@lasote/stable")
+ self.client.run("remote add remote1 %s" % self.servers["remote1"].fake_url)
+ self.client.run("install pkg/1.1@lasote/stable")
+ self.client.run("editable add . pkg/1.1@lasote/stable")
+ self.client.run("remote clean")
+ self.client.run("remote list")
+ self.assertNotIn("remote1", self.client.out)
+ self.assertNotIn("remote0", self.client.out)
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-5346@94af85a
|
conan-io/conan
|
Python
| 5,346
|
Remove packages when version is asterisk
|
When removing a pattern like _hello/*@conan/testing_, Conan doesn't remove the matching packages because the whole string (name, version, user, channel) validates as a `ConanFileReference`, so it is treated as an exact reference instead of a pattern. However, with a pattern like _hello*/*@conan/testing_, the `ConanFileReference` won't validate because of the name, and Conan falls back to searching for recipes.
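A minimal sketch of the selection logic the review converged on — treat the input as an exact reference only when it contains no wildcard fields, otherwise fall back to a recipe search (`check_valid_ref` and `search_recipes` are the existing helpers used in the merged diff below; the wrapper function itself is hypothetical):
```
from conans.model.ref import check_valid_ref
from conans.search.search import search_recipes

def _refs_to_remove(cache, pattern, input_ref):
    """Hypothetical helper illustrating how references to delete are chosen."""
    if input_ref and check_valid_ref(input_ref, allow_pattern=False):
        # Exact reference, e.g. hello/1.0@conan/testing
        return [input_ref]
    # Wildcards in some field, e.g. hello/*@conan/testing or hello/1.0@*/testing
    return search_recipes(cache, pattern)
```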
Changelog: Fix: Remove packages when version is asterisk (#5297)
Docs: Omit
fixes #5297
- [x] Refer to the issue that supports this Pull Request.
- [x] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.
- [x] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've followed the PEP8 style guides for Python code.
- [ ] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.
<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>
|
2019-06-12T13:00:58Z
|
[bug] conan remove does not find recipe
Hi there,
I tried to delete every version of a recipe in a certain user/channel but Conan stopped working with the error that it could not find the recipe:
```
conan remove cygwin_installer/*@user/channel
ERROR: Recipe not found: 'cygwin_installer/*@user/channel'
```
But when I modify the recipe name and append an asterisk, it works:
```
conan remove cygwin_installer*/*@user/channel
Are you sure you want to delete from 'cygwin_installer/2.9.0@user/channel' (yes/no):
```
Is this intentional?
|
[
{
"body": "Hi there,\r\nI tried to delete every version of a recipe in a certain user/channel but Conan stopped working with the error that it could not find the recipe:\r\n```\r\nconan remove cygwin_installer/*@user/channel\r\nERROR: Recipe not found: 'cygwin_installer/*@user/channel'\r\n```\r\nBut when I modify the recipe name and append an asterix it works:\r\n```\r\nconan remove cygwin_installer*/*@user/channel\r\nAre you sure you want to delete from 'cygwin_installer/2.9.0@user/channel' (yes/no):\r\n```\r\n\r\nIs this intentional?",
"number": 5297,
"title": "[bug] conan remove does not find recipe"
}
] |
46663964cbe546f50d29901ce749d694bba117c9
|
{
"head_commit": "94af85ae727fd8b96662ca6d3e23e067eb3017b7",
"head_commit_message": "#5297 Validate channel when removing\n\nSigned-off-by: Uilian Ries <[email protected]>",
"patch_to_review": "diff --git a/conans/client/remover.py b/conans/client/remover.py\nindex de6ca8675a3..353bbcb8393 100644\n--- a/conans/client/remover.py\n+++ b/conans/client/remover.py\n@@ -168,7 +168,15 @@ def remove(self, pattern, remote_name, src=None, build_ids=None, package_ids_fil\n else:\n refs = self._remote_manager.search_recipes(remote, pattern)\n else:\n- if input_ref:\n+ if not input_ref or (input_ref and\n+ input_ref.version == \"*\" or\n+ input_ref.channel == \"*\" or\n+ input_ref.user == \"*\"):\n+ refs = search_recipes(self._cache, pattern)\n+ if not refs:\n+ self._user_io.out.warn(\"No package recipe matches '%s'\" % str(pattern))\n+ return\n+ else:\n refs = []\n if self._cache.installed_as_editable(input_ref):\n raise ConanException(self._message_removing_editable(input_ref))\n@@ -176,11 +184,6 @@ def remove(self, pattern, remote_name, src=None, build_ids=None, package_ids_fil\n raise RecipeNotFoundException(input_ref,\n print_rev=self._cache.config.revisions_enabled)\n refs.append(input_ref)\n- else:\n- refs = search_recipes(self._cache, pattern)\n- if not refs:\n- self._user_io.out.warn(\"No package recipe matches '%s'\" % str(pattern))\n- return\n \n if input_ref and not input_ref.revision:\n # Ignore revisions for deleting if the input was not with a revision\ndiff --git a/conans/test/functional/command/remove_test.py b/conans/test/functional/command/remove_test.py\nindex c62452f40da..7d1b7f988f8 100644\n--- a/conans/test/functional/command/remove_test.py\n+++ b/conans/test/functional/command/remove_test.py\n@@ -284,6 +284,42 @@ def basic_packages_test(self):\n os.listdir(os.path.join(self.client.storage_folder,\n \"Hello/2.4.11/myuser/testing\")))\n \n+ def _validate_remove_all_hello_packages(self):\n+ self.assert_folders(local_folders={\"H1\": None, \"H2\": None, \"B\": [1, 2], \"O\": [1, 2]},\n+ remote_folders={\"H1\": [1, 2], \"H2\": [1, 2], \"B\": [1, 2], \"O\": [1, 2]},\n+ build_folders={\"H1\": None, \"H2\": None, \"B\": [1, 2], \"O\": [1, 2]},\n+ src_folders={\"H1\": False, \"H2\": False, \"B\": True, \"O\": True})\n+ folders = os.listdir(self.client.storage_folder)\n+ six.assertCountEqual(self, [\"Other\", \"Bye\"], folders)\n+\n+ def test_remove_any_package_version(self):\n+ self.client.run(\"remove Hello/*@myuser/testing -f\")\n+ self._validate_remove_all_hello_packages()\n+\n+ def test_remove_any_package_version_channel(self):\n+ self.client.run(\"remove Hello/*@*/testing -f\")\n+ self._validate_remove_all_hello_packages()\n+\n+ def test_remove_any_package_version_channel(self):\n+ self.client.run(\"remove Hello/*@*/* -f\")\n+ self._validate_remove_all_hello_packages()\n+\n+ def _validate_remove_hello_1_4_10(self):\n+ self.assert_folders(local_folders={\"H1\": None, \"H2\": [1, 2], \"B\": [1, 2], \"O\": [1, 2]},\n+ remote_folders={\"H1\": [1, 2], \"H2\": [1, 2], \"B\": [1, 2], \"O\": [1, 2]},\n+ build_folders={\"H1\": None, \"H2\": [1, 2], \"B\": [1, 2], \"O\": [1, 2]},\n+ src_folders={\"H1\": False, \"H2\": True, \"B\": True, \"O\": True})\n+ folders = os.listdir(self.client.storage_folder)\n+ six.assertCountEqual(self, [\"Hello\", \"Other\", \"Bye\"], folders)\n+\n+ def test_remove_any_package_channel(self):\n+ self.client.run(\"remove Hello/1.4.10@*/testing -f\")\n+ self._validate_remove_hello_1_4_10()\n+\n+ def test_remove_any_package_channel(self):\n+ self.client.run(\"remove Hello/1.4.10@myuser/* -f\")\n+ self._validate_remove_hello_1_4_10()\n+\n def builds_test(self):\n mocked_user_io = UserIO(out=TestBufferConanOutput())\n 
mocked_user_io.request_boolean = Mock(return_value=True)\n"
}
|
[
{
"diff_hunk": "@@ -168,19 +168,22 @@ def remove(self, pattern, remote_name, src=None, build_ids=None, package_ids_fil\n else:\n refs = self._remote_manager.search_recipes(remote, pattern)\n else:\n- if input_ref:\n+ if not input_ref or (input_ref and\n+ input_ref.version == \"*\" or\n+ input_ref.channel == \"*\" or\n+ input_ref.user == \"*\"):",
"line": null,
"original_line": 174,
"original_start_line": null,
"path": "conans/client/remover.py",
"start_line": null,
"text": "@user1:\nWe can make it a little bit easier 😉 \r\n\r\n```python\r\n\r\nfrom conans.model.ref import check_valid_ref\r\n\r\nif check_valid_ref(input_ref, allow_pattern=False):\r\n # Do the refs.append(input_ref)\r\nelse:\r\n # Do the search\r\n\r\n```\n\n@author:\nmuch better! thanks!"
}
] |
3b96ee040d257ca1ea18ab5b2112352d214a8bea
|
diff --git a/conans/client/remover.py b/conans/client/remover.py
index de6ca8675a3..fabcb21386f 100644
--- a/conans/client/remover.py
+++ b/conans/client/remover.py
@@ -3,7 +3,7 @@
from conans.client.cache.remote_registry import Remote
from conans.errors import ConanException, PackageNotFoundException, RecipeNotFoundException
from conans.errors import NotFoundException
-from conans.model.ref import ConanFileReference, PackageReference
+from conans.model.ref import ConanFileReference, PackageReference, check_valid_ref
from conans.paths import SYSTEM_REQS, rm_conandir
from conans.search.search import filter_outdated, search_packages, search_recipes
from conans.util.log import logger
@@ -168,7 +168,7 @@ def remove(self, pattern, remote_name, src=None, build_ids=None, package_ids_fil
else:
refs = self._remote_manager.search_recipes(remote, pattern)
else:
- if input_ref:
+ if input_ref and check_valid_ref(input_ref, allow_pattern=False):
refs = []
if self._cache.installed_as_editable(input_ref):
raise ConanException(self._message_removing_editable(input_ref))
diff --git a/conans/test/functional/command/remove_test.py b/conans/test/functional/command/remove_test.py
index c62452f40da..7d1b7f988f8 100644
--- a/conans/test/functional/command/remove_test.py
+++ b/conans/test/functional/command/remove_test.py
@@ -284,6 +284,42 @@ def basic_packages_test(self):
os.listdir(os.path.join(self.client.storage_folder,
"Hello/2.4.11/myuser/testing")))
+ def _validate_remove_all_hello_packages(self):
+ self.assert_folders(local_folders={"H1": None, "H2": None, "B": [1, 2], "O": [1, 2]},
+ remote_folders={"H1": [1, 2], "H2": [1, 2], "B": [1, 2], "O": [1, 2]},
+ build_folders={"H1": None, "H2": None, "B": [1, 2], "O": [1, 2]},
+ src_folders={"H1": False, "H2": False, "B": True, "O": True})
+ folders = os.listdir(self.client.storage_folder)
+ six.assertCountEqual(self, ["Other", "Bye"], folders)
+
+ def test_remove_any_package_version(self):
+ self.client.run("remove Hello/*@myuser/testing -f")
+ self._validate_remove_all_hello_packages()
+
+ def test_remove_any_package_version_channel(self):
+ self.client.run("remove Hello/*@*/testing -f")
+ self._validate_remove_all_hello_packages()
+
+ def test_remove_any_package_version_channel(self):
+ self.client.run("remove Hello/*@*/* -f")
+ self._validate_remove_all_hello_packages()
+
+ def _validate_remove_hello_1_4_10(self):
+ self.assert_folders(local_folders={"H1": None, "H2": [1, 2], "B": [1, 2], "O": [1, 2]},
+ remote_folders={"H1": [1, 2], "H2": [1, 2], "B": [1, 2], "O": [1, 2]},
+ build_folders={"H1": None, "H2": [1, 2], "B": [1, 2], "O": [1, 2]},
+ src_folders={"H1": False, "H2": True, "B": True, "O": True})
+ folders = os.listdir(self.client.storage_folder)
+ six.assertCountEqual(self, ["Hello", "Other", "Bye"], folders)
+
+ def test_remove_any_package_channel(self):
+ self.client.run("remove Hello/1.4.10@*/testing -f")
+ self._validate_remove_hello_1_4_10()
+
+ def test_remove_any_package_channel(self):
+ self.client.run("remove Hello/1.4.10@myuser/* -f")
+ self._validate_remove_hello_1_4_10()
+
def builds_test(self):
mocked_user_io = UserIO(out=TestBufferConanOutput())
mocked_user_io.request_boolean = Mock(return_value=True)
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
|
conan-io__conan-5224@42cca07
|
conan-io/conan
|
Python
| 5,224
|
Upload using pref as the recommended way
|
Changelog: Feature: The `conan upload` command can now receive the full package reference to upload a binary package. The `-p` argument is now deprecated.
Docs: https://github.com/conan-io/docs/pull/1300
Closes #5196
|
2019-05-27T10:47:18Z
|
Conan upload receiving a pref instead of -p and from file
- As we did with `conan get`, the argument could automatically detect the package to be uploaded, and the `-p` usage could be warned as deprecated (see the sketch after this list).
- Additionally, following the syntax used by other programs like `gcc`, `conan upload @filepath` could read the reference from a file.
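A minimal sketch of how the command can detect a full package reference first and otherwise fall back to the pattern plus the deprecated `-p` value (it mirrors the parsing added in the patch below; the helper function itself is hypothetical):
```
from conans.errors import ConanException
from conans.model.ref import PackageReference

def parse_upload_target(pattern_or_reference, package_arg):
    """Hypothetical helper returning (reference, package_id)."""
    try:
        pref = PackageReference.loads(pattern_or_reference, validate=True)
        # Full package reference, e.g. pkg/1.2@user/channel:<package_id>
        return pref.ref.full_repr(), pref.id
    except ConanException:
        # Plain pattern or recipe reference: keep the deprecated -p value
        return pattern_or_reference, package_arg
```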
|
> Additionally, following the syntax used by other programs like gcc, conan upload @filepath could read the reference from a file.
Probably not a good idea, we plan to be able to specify references like `@user/channel` to disambiguate.
|
[
{
"body": "- The same we did with `conan get`, the pattern could process automatically the package to be uploaded, the `-p` usage could be warned as deprecated.\r\n\r\n- Additionally, following the syntax used by other programs like `gcc`, `conan upload @filepath` could read the reference from a file.",
"number": 5196,
"title": "Conan upload receiving a pref instead of -p and from file"
}
] |
6a90926d073d31bd31e8b1eb54ec69e483993391
|
{
"head_commit": "42cca07db0f4f4c6796dc4f109fb3379f7e2cc07",
"head_commit_message": "Added test",
"patch_to_review": "diff --git a/conans/client/command.py b/conans/client/command.py\nindex a1085b3985d..236abb46f4a 100644\n--- a/conans/client/command.py\n+++ b/conans/client/command.py\n@@ -83,6 +83,7 @@ def _fill_text(self, text, width, indent):\n _QUERY_EXAMPLE = (\"os=Windows AND (arch=x86 OR compiler=gcc)\")\n _PATTERN_EXAMPLE = (\"boost/*\")\n _REFERENCE_EXAMPLE = (\"MyPackage/1.2@user/channel\")\n+_PREF_EXAMPLE = (\"MyPackage/1.2@user/channel:af7901d8bdfde621d086181aa1c495c25a17b137\")\n \n _BUILD_FOLDER_HELP = (\"Directory for the build process. Defaulted to the current directory. A \"\n \"relative path to current directory can also be specified\")\n@@ -91,7 +92,11 @@ def _fill_text(self, text, width, indent):\n _KEEP_SOURCE_HELP = (\"Do not remove the source folder in local cache, even if the recipe changed. \"\n \"Use this for testing purposes only\")\n _PATTERN_OR_REFERENCE_HELP = (\"Pattern or package recipe reference, e.g., '%s', \"\n- \"'%s'\" % (_REFERENCE_EXAMPLE, _PATTERN_EXAMPLE))\n+ \"'%s'\" % (_PATTERN_EXAMPLE, _REFERENCE_EXAMPLE))\n+_PATTERN_REF_OR_PREF_HELP = (\"Pattern, recipe reference or package reference e.g., '%s', \"\n+ \"'%s', '%s'\" % (_PATTERN_EXAMPLE, _REFERENCE_EXAMPLE, _PREF_EXAMPLE))\n+_PREF_OR_PREF_HELP = (\"Recipe reference or package reference e.g., '%s', \"\n+ \"'%s'\" % (_REFERENCE_EXAMPLE, _PREF_EXAMPLE))\n _PATH_HELP = (\"Path to a folder containing a conanfile.py or to a recipe file \"\n \"e.g., my_folder/conanfile.py\")\n _QUERY_HELP = (\"Packages query: '%s'. The 'pattern_or_reference' parameter has \"\n@@ -1192,9 +1197,10 @@ def upload(self, *args):\n parser = argparse.ArgumentParser(description=self.upload.__doc__,\n prog=\"conan upload\",\n formatter_class=SmartFormatter)\n- parser.add_argument('pattern_or_reference', help=_PATTERN_OR_REFERENCE_HELP)\n- parser.add_argument(\"-p\", \"--package\", default=None, action=OnceArgument,\n- help='package ID to upload')\n+ parser.add_argument('pattern_or_reference', help=_PATTERN_REF_OR_PREF_HELP)\n+ parser.add_argument(\"-p\", \"--package\", default=None,\n+ help=\"Package ID [DEPRECATED: use full reference instead]\",\n+ action=OnceArgument)\n parser.add_argument('-q', '--query', default=None, action=OnceArgument,\n help=\"Only upload packages matching a specific query. \" + _QUERY_HELP)\n parser.add_argument(\"-r\", \"--remote\", action=OnceArgument,\n@@ -1222,7 +1228,24 @@ def upload(self, *args):\n \n args = parser.parse_args(*args)\n \n- if args.query and args.package:\n+ try:\n+ pref = PackageReference.loads(args.pattern_or_reference, validate=True)\n+ reference = pref.ref.full_repr()\n+ package_id = pref.id\n+ except ConanException:\n+ reference = args.pattern_or_reference\n+ package_id = args.package\n+\n+ if package_id:\n+ self._user_io.out.warn(\"Usage of `--package` argument is deprecated.\"\n+ \" Use a full reference instead: \"\n+ \"`conan upload [...] 
{}:{}`\".format(reference, package_id))\n+ else:\n+ if args.package:\n+ raise ConanException(\"Use a full package reference (preferred) or the `--package`\"\n+ \" command argument, but not both.\")\n+\n+ if args.query and package_id:\n raise ConanException(\"'-q' and '-p' parameters can't be used at the same time\")\n \n cwd = os.getcwd()\n@@ -1250,7 +1273,7 @@ def upload(self, *args):\n policy = None\n \n try:\n- info = self._conan.upload(pattern=args.pattern_or_reference, package=args.package,\n+ info = self._conan.upload(pattern=reference, package=package_id,\n query=args.query, remote_name=args.remote,\n all_packages=args.all, policy=policy,\n confirm=args.confirm, retry=args.retry,\n@@ -1453,7 +1476,7 @@ def get(self, *args):\n parser = argparse.ArgumentParser(description=self.get.__doc__,\n prog=\"conan get\",\n formatter_class=SmartFormatter)\n- parser.add_argument('reference', help='package recipe reference')\n+ parser.add_argument('reference', help=_PREF_OR_PREF_HELP)\n parser.add_argument('path',\n help='Path to the file or directory. If not specified will get the '\n 'conanfile if only a reference is specified and a conaninfo.txt '\ndiff --git a/conans/test/functional/command/upload_test.py b/conans/test/functional/command/upload_test.py\nindex cefe3a46fbf..c4addba9ab2 100644\n--- a/conans/test/functional/command/upload_test.py\n+++ b/conans/test/functional/command/upload_test.py\n@@ -115,6 +115,38 @@ def non_existing_package_error_test(self):\n client.run(\"upload Pkg/0.1@user/channel -p hash1\", assert_error=True)\n self.assertIn(\"ERROR: Recipe not found: 'Pkg/0.1@user/channel'\", client.out)\n \n+ def deprecated_p_arg_test(self):\n+ client = self._client()\n+ client.save({\"conanfile.py\": conanfile})\n+ client.run(\"create . user/testing\")\n+ client.run(\"upload Hello0/1.2.1@user/testing -p {} -c\".format(NO_SETTINGS_PACKAGE_ID))\n+ self.assertIn(\"WARN: Usage of `--package` argument is deprecated. \"\n+ \"Use a full reference instead: `conan upload [...] \"\n+ \"Hello0/1.2.1@user/testing:{}`\".format(NO_SETTINGS_PACKAGE_ID), client.out)\n+\n+ def upload_with_pref_test(self):\n+ client = self._client()\n+ client.save({\"conanfile.py\": conanfile})\n+ client.run(\"create . user/testing\")\n+ client.run(\"upload Hello0/1.2.1@user/testing:{} -c\".format(NO_SETTINGS_PACKAGE_ID))\n+ self.assertNotIn(\"WARN: Usage of `--package` argument is deprecated. \"\n+ \"Use a full reference instead: `conan upload [...] \"\n+ \"Hello0/1.2.1@user/testing:{}`\".format(NO_SETTINGS_PACKAGE_ID),\n+ client.out)\n+ self.assertIn(\"Uploading package 1/1: {} to 'default'\".format(NO_SETTINGS_PACKAGE_ID),\n+ client.out)\n+\n+ def upload_with_pref_and_p_test(self):\n+ client = self._client()\n+ client.save({\"conanfile.py\": conanfile})\n+ client.run(\"create . user/testing\")\n+ client.run(\"upload Hello0/1.2.1@user/testing:{} -c -p {}\".format(NO_SETTINGS_PACKAGE_ID,\n+ NO_SETTINGS_PACKAGE_ID),\n+ assert_error=True)\n+\n+ self.assertIn(\"Use a full package reference (preferred) or the \"\n+ \"`--package` command argument, but not both.\", client.out)\n+\n def _client(self):\n if not hasattr(self, \"_servers\"):\n servers = {}\n"
}
|
[
{
"diff_hunk": "@@ -91,7 +92,11 @@ def _fill_text(self, text, width, indent):\n _KEEP_SOURCE_HELP = (\"Do not remove the source folder in local cache, even if the recipe changed. \"\n \"Use this for testing purposes only\")\n _PATTERN_OR_REFERENCE_HELP = (\"Pattern or package recipe reference, e.g., '%s', \"\n- \"'%s'\" % (_REFERENCE_EXAMPLE, _PATTERN_EXAMPLE))\n+ \"'%s'\" % (_PATTERN_EXAMPLE, _REFERENCE_EXAMPLE))\n+_PATTERN_REF_OR_PREF_HELP = (\"Pattern, recipe reference or package reference e.g., '%s', \"\n+ \"'%s', '%s'\" % (_PATTERN_EXAMPLE, _REFERENCE_EXAMPLE, _PREF_EXAMPLE))\n+_PREF_OR_PREF_HELP = (\"Recipe reference or package reference e.g., '%s', \"",
"line": null,
"original_line": 98,
"original_start_line": null,
"path": "conans/client/command.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n_REF_OR_PREF_HELP = (\"Recipe reference or package reference e.g., '%s', \"\r\n```\r\n\r\nor `_REFERENCE_OR_PREF_HELP`"
}
] |
ea68e820a576dde3a1453d792298d6313f8dae95
|
diff --git a/conans/client/command.py b/conans/client/command.py
index a1085b3985d..6ce940387a8 100644
--- a/conans/client/command.py
+++ b/conans/client/command.py
@@ -83,6 +83,7 @@ def _fill_text(self, text, width, indent):
_QUERY_EXAMPLE = ("os=Windows AND (arch=x86 OR compiler=gcc)")
_PATTERN_EXAMPLE = ("boost/*")
_REFERENCE_EXAMPLE = ("MyPackage/1.2@user/channel")
+_PREF_EXAMPLE = ("MyPackage/1.2@user/channel:af7901d8bdfde621d086181aa1c495c25a17b137")
_BUILD_FOLDER_HELP = ("Directory for the build process. Defaulted to the current directory. A "
"relative path to current directory can also be specified")
@@ -91,7 +92,11 @@ def _fill_text(self, text, width, indent):
_KEEP_SOURCE_HELP = ("Do not remove the source folder in local cache, even if the recipe changed. "
"Use this for testing purposes only")
_PATTERN_OR_REFERENCE_HELP = ("Pattern or package recipe reference, e.g., '%s', "
- "'%s'" % (_REFERENCE_EXAMPLE, _PATTERN_EXAMPLE))
+ "'%s'" % (_PATTERN_EXAMPLE, _REFERENCE_EXAMPLE))
+_PATTERN_REF_OR_PREF_HELP = ("Pattern, recipe reference or package reference e.g., '%s', "
+ "'%s', '%s'" % (_PATTERN_EXAMPLE, _REFERENCE_EXAMPLE, _PREF_EXAMPLE))
+_REF_OR_PREF_HELP = ("Recipe reference or package reference e.g., '%s', "
+ "'%s'" % (_REFERENCE_EXAMPLE, _PREF_EXAMPLE))
_PATH_HELP = ("Path to a folder containing a conanfile.py or to a recipe file "
"e.g., my_folder/conanfile.py")
_QUERY_HELP = ("Packages query: '%s'. The 'pattern_or_reference' parameter has "
@@ -1192,9 +1197,10 @@ def upload(self, *args):
parser = argparse.ArgumentParser(description=self.upload.__doc__,
prog="conan upload",
formatter_class=SmartFormatter)
- parser.add_argument('pattern_or_reference', help=_PATTERN_OR_REFERENCE_HELP)
- parser.add_argument("-p", "--package", default=None, action=OnceArgument,
- help='package ID to upload')
+ parser.add_argument('pattern_or_reference', help=_PATTERN_REF_OR_PREF_HELP)
+ parser.add_argument("-p", "--package", default=None,
+ help="Package ID [DEPRECATED: use full reference instead]",
+ action=OnceArgument)
parser.add_argument('-q', '--query', default=None, action=OnceArgument,
help="Only upload packages matching a specific query. " + _QUERY_HELP)
parser.add_argument("-r", "--remote", action=OnceArgument,
@@ -1222,7 +1228,24 @@ def upload(self, *args):
args = parser.parse_args(*args)
- if args.query and args.package:
+ try:
+ pref = PackageReference.loads(args.pattern_or_reference, validate=True)
+ reference = pref.ref.full_repr()
+ package_id = pref.id
+ except ConanException:
+ reference = args.pattern_or_reference
+ package_id = args.package
+
+ if package_id:
+ self._user_io.out.warn("Usage of `--package` argument is deprecated."
+ " Use a full reference instead: "
+ "`conan upload [...] {}:{}`".format(reference, package_id))
+ else:
+ if args.package:
+ raise ConanException("Use a full package reference (preferred) or the `--package`"
+ " command argument, but not both.")
+
+ if args.query and package_id:
raise ConanException("'-q' and '-p' parameters can't be used at the same time")
cwd = os.getcwd()
@@ -1250,7 +1273,7 @@ def upload(self, *args):
policy = None
try:
- info = self._conan.upload(pattern=args.pattern_or_reference, package=args.package,
+ info = self._conan.upload(pattern=reference, package=package_id,
query=args.query, remote_name=args.remote,
all_packages=args.all, policy=policy,
confirm=args.confirm, retry=args.retry,
@@ -1453,7 +1476,7 @@ def get(self, *args):
parser = argparse.ArgumentParser(description=self.get.__doc__,
prog="conan get",
formatter_class=SmartFormatter)
- parser.add_argument('reference', help='package recipe reference')
+ parser.add_argument('reference', help=_REF_OR_PREF_HELP)
parser.add_argument('path',
help='Path to the file or directory. If not specified will get the '
'conanfile if only a reference is specified and a conaninfo.txt '
diff --git a/conans/test/functional/command/upload_test.py b/conans/test/functional/command/upload_test.py
index cefe3a46fbf..c4addba9ab2 100644
--- a/conans/test/functional/command/upload_test.py
+++ b/conans/test/functional/command/upload_test.py
@@ -115,6 +115,38 @@ def non_existing_package_error_test(self):
client.run("upload Pkg/0.1@user/channel -p hash1", assert_error=True)
self.assertIn("ERROR: Recipe not found: 'Pkg/0.1@user/channel'", client.out)
+ def deprecated_p_arg_test(self):
+ client = self._client()
+ client.save({"conanfile.py": conanfile})
+ client.run("create . user/testing")
+ client.run("upload Hello0/1.2.1@user/testing -p {} -c".format(NO_SETTINGS_PACKAGE_ID))
+ self.assertIn("WARN: Usage of `--package` argument is deprecated. "
+ "Use a full reference instead: `conan upload [...] "
+ "Hello0/1.2.1@user/testing:{}`".format(NO_SETTINGS_PACKAGE_ID), client.out)
+
+ def upload_with_pref_test(self):
+ client = self._client()
+ client.save({"conanfile.py": conanfile})
+ client.run("create . user/testing")
+ client.run("upload Hello0/1.2.1@user/testing:{} -c".format(NO_SETTINGS_PACKAGE_ID))
+ self.assertNotIn("WARN: Usage of `--package` argument is deprecated. "
+ "Use a full reference instead: `conan upload [...] "
+ "Hello0/1.2.1@user/testing:{}`".format(NO_SETTINGS_PACKAGE_ID),
+ client.out)
+ self.assertIn("Uploading package 1/1: {} to 'default'".format(NO_SETTINGS_PACKAGE_ID),
+ client.out)
+
+ def upload_with_pref_and_p_test(self):
+ client = self._client()
+ client.save({"conanfile.py": conanfile})
+ client.run("create . user/testing")
+ client.run("upload Hello0/1.2.1@user/testing:{} -c -p {}".format(NO_SETTINGS_PACKAGE_ID,
+ NO_SETTINGS_PACKAGE_ID),
+ assert_error=True)
+
+ self.assertIn("Use a full package reference (preferred) or the "
+ "`--package` command argument, but not both.", client.out)
+
def _client(self):
if not hasattr(self, "_servers"):
servers = {}
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
}
|
conan-io__conan-5189@b1baa25
|
conan-io/conan
|
Python
| 5,189
|
templates for conan new
|
Changelog: Feature: New ``conan new --template=mytemplate`` to initialize recipes with your own templates
Docs: https://github.com/conan-io/docs/pull/1286
This has been improved with Jinja templates in https://github.com/conan-io/conan/pull/5267
Close #5192
|
2019-05-20T17:02:00Z
|
conan new template recipes
In a use case where a team creates many packages for their own libraries and the recipes are very similar, it is useful to be able to use a pre-defined template with the ``conan new`` command.
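For illustration, a template stored under the local cache folder uses plain Python `str.format` placeholders, as in the tests of this PR (the file name and exact location are just examples):
```
# e.g. <conan_home>/mytemplate.py
from conans import ConanFile

class {package_name}Conan(ConanFile):
    name = "{name}"
    version = "{version}"
```
It would then be used as ``conan new hello/0.1 --template=mytemplate.py`` (the argument was renamed to ``--file``/``-f`` in the merged version).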
|
[
{
"body": "A use case where a team is creating many packages for their own libraries, and the recipes are very similar, it is useful to have the possibility to use a pre-defined template for the ``conan new`` command.\r\n\r\n",
"number": 5192,
"title": "conan new template recipes"
}
] |
6022abf83a61e7bba720bff4349118ee818b018e
|
{
"head_commit": "b1baa252ccecd5a0eaa868a477147cf22cd019c1",
"head_commit_message": "templates for conan new",
"patch_to_review": "diff --git a/conans/client/cmd/new.py b/conans/client/cmd/new.py\nindex cc7bafc9d7b..70c73f1513b 100644\n--- a/conans/client/cmd/new.py\n+++ b/conans/client/cmd/new.py\n@@ -1,8 +1,11 @@\n+import os\n import re\n \n from conans.client.cmd.new_ci import ci_get_files\n from conans.errors import ConanException\n from conans.model.ref import ConanFileReference\n+from conans.util.files import load\n+\n \n conanfile = \"\"\"from conans import ConanFile, CMake, tools\n \n@@ -233,9 +236,11 @@ def test(self):\n \n \n def cmd_new(ref, header=False, pure_c=False, test=False, exports_sources=False, bare=False,\n- visual_versions=None, linux_gcc_versions=None, linux_clang_versions=None, osx_clang_versions=None,\n- shared=None, upload_url=None, gitignore=None, gitlab_gcc_versions=None, gitlab_clang_versions=None,\n- circleci_gcc_versions=None, circleci_clang_versions=None, circleci_osx_versions=None):\n+ visual_versions=None, linux_gcc_versions=None, linux_clang_versions=None,\n+ osx_clang_versions=None, shared=None, upload_url=None, gitignore=None,\n+ gitlab_gcc_versions=None, gitlab_clang_versions=None,\n+ circleci_gcc_versions=None, circleci_clang_versions=None, circleci_osx_versions=None,\n+ template=None, cache=None):\n try:\n tokens = ref.split(\"@\")\n name, version = tokens[0].split(\"/\")\n@@ -259,6 +264,8 @@ def cmd_new(ref, header=False, pure_c=False, test=False, exports_sources=False,\n raise ConanException(\"'pure_c' is incompatible with 'header' and 'sources'\")\n if bare and (header or exports_sources):\n raise ConanException(\"'bare' is incompatible with 'header' and 'sources'\")\n+ if template and (header or exports_sources or bare):\n+ raise ConanException(\"'template' argument incompatible with 'header', 'sources', and 'bare'\")\n \n if header:\n files = {\"conanfile.py\": conanfile_header.format(name=name, version=version,\n@@ -272,6 +279,13 @@ def cmd_new(ref, header=False, pure_c=False, test=False, exports_sources=False,\n elif bare:\n files = {\"conanfile.py\": conanfile_bare.format(name=name, version=version,\n package_name=package_name)}\n+ elif template:\n+ path_template = os.path.join(cache.conan_folder, template)\n+ if not os.path.isfile(path_template):\n+ raise ConanException(\"Template doesn't exist: %s\" % path_template)\n+ conanfile_template = load(path_template)\n+ files = {\"conanfile.py\": conanfile_template.format(name=name, version=version,\n+ package_name=package_name)}\n else:\n files = {\"conanfile.py\": conanfile.format(name=name, version=version,\n package_name=package_name)}\ndiff --git a/conans/client/command.py b/conans/client/command.py\nindex 2d0670c7e83..802bb40224c 100644\n--- a/conans/client/command.py\n+++ b/conans/client/command.py\n@@ -143,6 +143,8 @@ def new(self, *args):\n parser.add_argument(\"-b\", \"--bare\", action='store_true', default=False,\n help='Create the minimum package recipe, without build() method. 
'\n 'Useful in combination with \"export-pkg\" command')\n+ parser.add_argument(\"-m\", \"--template\",\n+ help='Use the given template from the local cache for conanfile.py')\n parser.add_argument(\"-cis\", \"--ci-shared\", action='store_true',\n default=False,\n help='Package will have a \"shared\" option to be used in CI')\n@@ -192,7 +194,8 @@ def new(self, *args):\n gitlab_clang_versions=args.ci_gitlab_clang,\n circleci_gcc_versions=args.ci_circleci_gcc,\n circleci_clang_versions=args.ci_circleci_clang,\n- circleci_osx_versions=args.ci_circleci_osx)\n+ circleci_osx_versions=args.ci_circleci_osx,\n+ template=args.template)\n \n def inspect(self, *args):\n \"\"\"Displays conanfile attributes, like name, version, options\ndiff --git a/conans/client/conan_api.py b/conans/client/conan_api.py\nindex f431515a242..d7d34f69a98 100644\n--- a/conans/client/conan_api.py\n+++ b/conans/client/conan_api.py\n@@ -258,7 +258,8 @@ def new(self, name, header=False, pure_c=False, test=False, exports_sources=Fals\n cwd=None, visual_versions=None, linux_gcc_versions=None, linux_clang_versions=None,\n osx_clang_versions=None, shared=None, upload_url=None, gitignore=None,\n gitlab_gcc_versions=None, gitlab_clang_versions=None,\n- circleci_gcc_versions=None, circleci_clang_versions=None, circleci_osx_versions=None):\n+ circleci_gcc_versions=None, circleci_clang_versions=None, circleci_osx_versions=None,\n+ template=None):\n from conans.client.cmd.new import cmd_new\n cwd = os.path.abspath(cwd or get_cwd())\n files = cmd_new(name, header=header, pure_c=pure_c, test=test,\n@@ -272,7 +273,8 @@ def new(self, name, header=False, pure_c=False, test=False, exports_sources=Fals\n gitlab_clang_versions=gitlab_clang_versions,\n circleci_gcc_versions=circleci_gcc_versions,\n circleci_clang_versions=circleci_clang_versions,\n- circleci_osx_versions=circleci_osx_versions)\n+ circleci_osx_versions=circleci_osx_versions,\n+ template=template, cache=self._cache)\n \n save_files(cwd, files)\n for f in sorted(files):\ndiff --git a/conans/test/functional/command/new_test.py b/conans/test/functional/command/new_test.py\nindex 95c65a368f1..e29a32bc528 100644\n--- a/conans/test/functional/command/new_test.py\n+++ b/conans/test/functional/command/new_test.py\n@@ -3,10 +3,47 @@\n \n from conans.test.utils.tools import TestClient\n from conans.util.files import load\n+from conans.tools import save\n+import textwrap\n \n \n class NewTest(unittest.TestCase):\n \n+ def template_test(self):\n+ client = TestClient()\n+ template1 = textwrap.dedent(\"\"\"\n+ class {package_name}Conan(ConanFile):\n+ name = \"{name}\"\n+ version = \"{version}\"\n+ \"\"\")\n+ save(os.path.join(client.base_folder, \".conan\", \"mytemplate.py\"), template1)\n+ client.run(\"new hello/0.1 --template=mytemplate.py\")\n+ conanfile = load(os.path.join(client.current_folder, \"conanfile.py\"))\n+ self.assertIn(\"class HelloConan(ConanFile):\", conanfile)\n+ self.assertIn('name = \"hello\"', conanfile)\n+ self.assertIn('version = \"0.1\"', conanfile)\n+\n+ def template_test_package_test(self):\n+ client = TestClient()\n+ template2 = textwrap.dedent(\"\"\"\n+ class {package_name}Conan(ConanFile):\n+ version = \"fixed\"\n+ \"\"\")\n+ save(os.path.join(client.base_folder, \".conan\", \"subfolder\", \"mytemplate.py\"), template2)\n+ client.run(\"new hello/0.1 --template=subfolder/mytemplate.py\")\n+ conanfile = load(os.path.join(client.current_folder, \"conanfile.py\"))\n+ self.assertIn(\"class HelloConan(ConanFile):\", conanfile)\n+ self.assertIn('version = \"fixed\"', 
conanfile)\n+\n+ def template_errors_test(self):\n+ client = TestClient()\n+ client.run(\"new hello/0.1 --template=mytemplate.py\", assert_error=True)\n+ self.assertIn(\"ERROR: Template doesn't exist\", client.out)\n+ client.run(\"new hello/0.1 --template=mytemplate.py --bare\", assert_error=True)\n+ self.assertIn(\"ERROR: 'template' argument incompatible\", client.out)\n+ client.run(\"new hello/0.1 --template\", assert_error=True)\n+ self.assertIn(\"ERROR: Exiting with code: 2\", client.out)\n+\n def new_test(self):\n client = TestClient()\n client.run('new MyPackage/1.3@myuser/testing -t')\n"
}
|
[
{
"diff_hunk": "@@ -143,6 +143,8 @@ def new(self, *args):\n parser.add_argument(\"-b\", \"--bare\", action='store_true', default=False,\n help='Create the minimum package recipe, without build() method. '\n 'Useful in combination with \"export-pkg\" command')\n+ parser.add_argument(\"-m\", \"--template\",",
"line": null,
"original_line": 146,
"original_start_line": null,
"path": "conans/client/command.py",
"start_line": null,
"text": "@user1:\n`-m`? `-f` meaning file maybe?"
}
] |
b835a162e4408ef460bcd13fce6f53063e5af34e
|
diff --git a/conans/client/cmd/new.py b/conans/client/cmd/new.py
index cc7bafc9d7b..70c73f1513b 100644
--- a/conans/client/cmd/new.py
+++ b/conans/client/cmd/new.py
@@ -1,8 +1,11 @@
+import os
import re
from conans.client.cmd.new_ci import ci_get_files
from conans.errors import ConanException
from conans.model.ref import ConanFileReference
+from conans.util.files import load
+
conanfile = """from conans import ConanFile, CMake, tools
@@ -233,9 +236,11 @@ def test(self):
def cmd_new(ref, header=False, pure_c=False, test=False, exports_sources=False, bare=False,
- visual_versions=None, linux_gcc_versions=None, linux_clang_versions=None, osx_clang_versions=None,
- shared=None, upload_url=None, gitignore=None, gitlab_gcc_versions=None, gitlab_clang_versions=None,
- circleci_gcc_versions=None, circleci_clang_versions=None, circleci_osx_versions=None):
+ visual_versions=None, linux_gcc_versions=None, linux_clang_versions=None,
+ osx_clang_versions=None, shared=None, upload_url=None, gitignore=None,
+ gitlab_gcc_versions=None, gitlab_clang_versions=None,
+ circleci_gcc_versions=None, circleci_clang_versions=None, circleci_osx_versions=None,
+ template=None, cache=None):
try:
tokens = ref.split("@")
name, version = tokens[0].split("/")
@@ -259,6 +264,8 @@ def cmd_new(ref, header=False, pure_c=False, test=False, exports_sources=False,
raise ConanException("'pure_c' is incompatible with 'header' and 'sources'")
if bare and (header or exports_sources):
raise ConanException("'bare' is incompatible with 'header' and 'sources'")
+ if template and (header or exports_sources or bare):
+ raise ConanException("'template' argument incompatible with 'header', 'sources', and 'bare'")
if header:
files = {"conanfile.py": conanfile_header.format(name=name, version=version,
@@ -272,6 +279,13 @@ def cmd_new(ref, header=False, pure_c=False, test=False, exports_sources=False,
elif bare:
files = {"conanfile.py": conanfile_bare.format(name=name, version=version,
package_name=package_name)}
+ elif template:
+ path_template = os.path.join(cache.conan_folder, template)
+ if not os.path.isfile(path_template):
+ raise ConanException("Template doesn't exist: %s" % path_template)
+ conanfile_template = load(path_template)
+ files = {"conanfile.py": conanfile_template.format(name=name, version=version,
+ package_name=package_name)}
else:
files = {"conanfile.py": conanfile.format(name=name, version=version,
package_name=package_name)}
diff --git a/conans/client/command.py b/conans/client/command.py
index 2d0670c7e83..f60dee2f1f6 100644
--- a/conans/client/command.py
+++ b/conans/client/command.py
@@ -143,6 +143,8 @@ def new(self, *args):
parser.add_argument("-b", "--bare", action='store_true', default=False,
help='Create the minimum package recipe, without build() method. '
'Useful in combination with "export-pkg" command')
+ parser.add_argument("-f", "--file",
+ help='Use the given template from the local cache for conanfile.py')
parser.add_argument("-cis", "--ci-shared", action='store_true',
default=False,
help='Package will have a "shared" option to be used in CI')
@@ -192,7 +194,8 @@ def new(self, *args):
gitlab_clang_versions=args.ci_gitlab_clang,
circleci_gcc_versions=args.ci_circleci_gcc,
circleci_clang_versions=args.ci_circleci_clang,
- circleci_osx_versions=args.ci_circleci_osx)
+ circleci_osx_versions=args.ci_circleci_osx,
+ template=args.file)
def inspect(self, *args):
"""Displays conanfile attributes, like name, version, options
diff --git a/conans/client/conan_api.py b/conans/client/conan_api.py
index f431515a242..d7d34f69a98 100644
--- a/conans/client/conan_api.py
+++ b/conans/client/conan_api.py
@@ -258,7 +258,8 @@ def new(self, name, header=False, pure_c=False, test=False, exports_sources=Fals
cwd=None, visual_versions=None, linux_gcc_versions=None, linux_clang_versions=None,
osx_clang_versions=None, shared=None, upload_url=None, gitignore=None,
gitlab_gcc_versions=None, gitlab_clang_versions=None,
- circleci_gcc_versions=None, circleci_clang_versions=None, circleci_osx_versions=None):
+ circleci_gcc_versions=None, circleci_clang_versions=None, circleci_osx_versions=None,
+ template=None):
from conans.client.cmd.new import cmd_new
cwd = os.path.abspath(cwd or get_cwd())
files = cmd_new(name, header=header, pure_c=pure_c, test=test,
@@ -272,7 +273,8 @@ def new(self, name, header=False, pure_c=False, test=False, exports_sources=Fals
gitlab_clang_versions=gitlab_clang_versions,
circleci_gcc_versions=circleci_gcc_versions,
circleci_clang_versions=circleci_clang_versions,
- circleci_osx_versions=circleci_osx_versions)
+ circleci_osx_versions=circleci_osx_versions,
+ template=template, cache=self._cache)
save_files(cwd, files)
for f in sorted(files):
diff --git a/conans/test/functional/command/new_test.py b/conans/test/functional/command/new_test.py
index 95c65a368f1..7a10981e88d 100644
--- a/conans/test/functional/command/new_test.py
+++ b/conans/test/functional/command/new_test.py
@@ -3,10 +3,47 @@
from conans.test.utils.tools import TestClient
from conans.util.files import load
+from conans.tools import save
+import textwrap
class NewTest(unittest.TestCase):
+ def template_test(self):
+ client = TestClient()
+ template1 = textwrap.dedent("""
+ class {package_name}Conan(ConanFile):
+ name = "{name}"
+ version = "{version}"
+ """)
+ save(os.path.join(client.base_folder, ".conan", "mytemplate.py"), template1)
+ client.run("new hello/0.1 --f=mytemplate.py")
+ conanfile = load(os.path.join(client.current_folder, "conanfile.py"))
+ self.assertIn("class HelloConan(ConanFile):", conanfile)
+ self.assertIn('name = "hello"', conanfile)
+ self.assertIn('version = "0.1"', conanfile)
+
+ def template_test_package_test(self):
+ client = TestClient()
+ template2 = textwrap.dedent("""
+ class {package_name}Conan(ConanFile):
+ version = "fixed"
+ """)
+ save(os.path.join(client.base_folder, ".conan", "subfolder", "mytemplate.py"), template2)
+ client.run("new hello/0.1 --file=subfolder/mytemplate.py")
+ conanfile = load(os.path.join(client.current_folder, "conanfile.py"))
+ self.assertIn("class HelloConan(ConanFile):", conanfile)
+ self.assertIn('version = "fixed"', conanfile)
+
+ def template_errors_test(self):
+ client = TestClient()
+ client.run("new hello/0.1 --file=mytemplate.py", assert_error=True)
+ self.assertIn("ERROR: Template doesn't exist", client.out)
+ client.run("new hello/0.1 --f=mytemplate.py --bare", assert_error=True)
+ self.assertIn("ERROR: 'template' argument incompatible", client.out)
+ client.run("new hello/0.1 --file", assert_error=True)
+ self.assertIn("ERROR: Exiting with code: 2", client.out)
+
def new_test(self):
client = TestClient()
client.run('new MyPackage/1.3@myuser/testing -t')
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
}
|
|
conan-io__conan-5044@b6ea7c0
|
conan-io/conan
|
Python
| 5,044
|
Feature/improve settings migration
|
Changelog: omit
Docs: omit
Close #5038
|
2019-04-26T16:40:33Z
|
Improve migration of settings.yml
**Current behavior**
We are not currently "patching" or modifying the `settings.yml` file. Only if we detect that the current yml on disk is identical to the default of the previous version do we replace it with the new one, storing the old one in a `.old` file or similar.
It behaves this way because people don't want us to change a customized `settings.yml` file.
**Several issues**
- It is not easy for contributors to understand that the yml text embedded in `migrations.py` must not be modified when a new setting is introduced in the default `settings.yml` for that release.
- It is easy to forget to add the corresponding migration in a release.
- It is almost impossible to automate a test that fails when the migration for the previous yml settings is missing.
- It is ugly to have the `.yml` contents embedded there.
**How to improve it**
- Add a Python module with variables following the pattern `settings_x_y_z`, e.g. `settings_1_14_4`.
- Even if there are no changes between releases, we should still add `settings_1_14_4 = settings_1_14_3`.
- Add a test checking that at least one `settings_1_14_*` variable exists. This is not perfect because we don't really know the previous version, but the settings are unlikely to change between minor releases.
- The migration algorithm will always do the same thing, automatically: "if I came from version x_y_z and my settings are equal to the settings in the variable `settings_x_y_z`, then I replace them with the new settings yaml"; "if there is no variable for my version, then skip any change" (see the sketch below).
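
A minimal sketch of the proposed migration check (assuming a `migrations_settings` module with `settings_x_y_z` strings and the existing `load`/`save` helpers; names are illustrative, not the final implementation):

```python
# Sketch only: assumes a migrations_settings module exposing settings_x_y_z
# strings, the existing load/save file helpers, and a cache.settings_path attribute.
from conans.client import migrations_settings
from conans.client.conf import default_settings_yml
from conans.util.files import load, save


def migrate_settings(cache, out, old_version):
    var_name = "settings_{}".format(old_version.replace(".", "_"))
    previous_default = getattr(migrations_settings, var_name, None)
    if previous_default is None:
        # Unknown previous version: never overwrite the user's file,
        # just leave the new defaults next to it
        save(cache.settings_path + ".new", default_settings_yml)
        out.warn("settings.yml is locally modified, can't be updated")
    elif load(cache.settings_path) == previous_default:
        # The user kept the stock settings.yml of the old version: safe to replace
        save(cache.settings_path, default_settings_yml)
    # Otherwise the file was customized; keep it untouched (or store a .new copy)
```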
|
[
{
"body": "**Current behavior**\r\n\r\nWe are not currently \"patching\" the `settings.yml` file nor modifying it. Only if we detect that the current yml on disk is the same as the previous version one, we replace it with the new one and store the old one in a `.old` file or similar. \r\nThis behavior is like this because people don't like us to change a customized `settings.yml` file.\r\n\r\n**Several issues**\r\n\r\n- It is not easy for contributors to understand that the \"yml\" text in the `migrations.py` shouldn't be modified when a new setting is introduced in the default one for that release.\r\n- It is easy to forget to introduce the corresponding migration in a release.\r\n- It is almost impossible to automate a test that fails if the previous yml setting migration is not created.\r\n- It is ugly to have there the code of the `.yml`.\r\n\r\n**How to improve it**\r\n\r\n- Add a python module with variables following the `settings_x_y_z`. E.g: `settings_1_14_4`\r\n- Even if there are no changes between the releases we should do: `settings_1_14_4 = settings_1_14_3`\r\n- Add a test to check that at least there is a variable `settings_1_14_*`, this is not perfect because we don't really know the previous version but it is not likely to change the settings between minors.\r\n- The migrations algorithm will do always the same, and automatically: \"if I came from version x_y_z and my settings are equal that the settings in the variable `settings_x_y_z` then I replace them with the new settings yaml\". \"If there is no variable for my version, then skip any change\"",
"number": 5038,
"title": "Improve migration of settings.yml"
}
] |
f4d680bd8c193b2735131f7c4968947e5491ab5e
|
{
"head_commit": "b6ea7c0bb3ae652f4093e0279abc6a37f07f2db9",
"head_commit_message": "Test",
"patch_to_review": "diff --git a/conans/client/migrations.py b/conans/client/migrations.py\nindex db5a91efb42..c7ddbcfd501 100644\n--- a/conans/client/migrations.py\n+++ b/conans/client/migrations.py\n@@ -2,6 +2,7 @@\n import shutil\n \n from conans import DEFAULT_REVISION_V1\n+from conans.client import migrations_settings\n from conans.client.cache.cache import CONAN_CONF, PROFILES_FOLDER\n from conans.client.conf.config_installer import _ConfigOrigin, _save_configs\n from conans.client.tools import replace_in_file\n@@ -25,7 +26,8 @@ def __init__(self, cache, current_version, out):\n super(ClientMigrator, self).__init__(cache.conan_folder, cache.store,\n current_version, out)\n \n- def _update_settings_yml(self, old_settings):\n+ def _update_settings_yml(self, old_version):\n+\n from conans.client.conf import default_settings_yml\n settings_path = self.cache.settings_path\n if not os.path.exists(settings_path):\n@@ -33,18 +35,30 @@ def _update_settings_yml(self, old_settings):\n self.out.warn(\"Nothing to migrate here, settings will be generated automatically\")\n return\n \n- current_settings = load(self.cache.settings_path)\n- if current_settings != default_settings_yml:\n- self.out.warn(\"Migration: Updating settings.yml\")\n- if current_settings != old_settings:\n- new_path = self.cache.settings_path + \".new\"\n- save(new_path, default_settings_yml)\n- self.out.warn(\"*\" * 40)\n- self.out.warn(\"settings.yml is locally modified, can't be updated\")\n- self.out.warn(\"The new settings.yml has been stored in: %s\" % new_path)\n- self.out.warn(\"*\" * 40)\n+ var_name = \"settings_{}\".format(old_version.replace(\".\", \"_\"))\n+\n+ def save_new():\n+ new_path = self.cache.settings_path + \".new\"\n+ save(new_path, default_settings_yml)\n+ self.out.warn(\"*\" * 40)\n+ self.out.warn(\"settings.yml is locally modified, can't be updated\")\n+ self.out.warn(\"The new settings.yml has been stored in: %s\" % new_path)\n+ self.out.warn(\"*\" * 40)\n+\n+ self.out.warn(\"Migration: Updating settings.yml\")\n+ if hasattr(migrations_settings, var_name):\n+ version_default_contents = getattr(migrations_settings, var_name)\n+ if version_default_contents != default_settings_yml:\n+ current_settings = load(self.cache.settings_path)\n+ if current_settings != version_default_contents:\n+ save_new()\n+ else:\n+ save(self.cache.settings_path, default_settings_yml)\n else:\n- save(self.cache.settings_path, default_settings_yml)\n+ self.out.warn(\"Migration: Settings already up to date\")\n+ else:\n+ # We don't have the value for that version, so don't override\n+ save_new()\n \n def _make_migrations(self, old_version):\n # ############### FILL THIS METHOD WITH THE REQUIRED ACTIONS ##############\n@@ -52,6 +66,9 @@ def _make_migrations(self, old_version):\n if old_version is None:\n return\n \n+ # Migrate the settings if they were the default for that version\n+ self._update_settings_yml(old_version)\n+\n if old_version < Version(\"0.25\"):\n from conans.paths import DEFAULT_PROFILE_NAME\n default_profile_path = os.path.join(self.cache.conan_folder, PROFILES_FOLDER,\n@@ -73,75 +90,6 @@ def _make_migrations(self, old_version):\n migrate_plugins_to_hooks(self.cache)\n \n if old_version < Version(\"1.13.0\"):\n- old_settings = \"\"\"\n-# Only for cross building, 'os_build/arch_build' is the system that runs Conan\n-os_build: [Windows, WindowsStore, Linux, Macos, FreeBSD, SunOS]\n-arch_build: [x86, x86_64, ppc32, ppc64le, ppc64, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, 
mips, mips64, avr]\n-\n-# Only for building cross compilation tools, 'os_target/arch_target' is the system for\n-# which the tools generate code\n-os_target: [Windows, Linux, Macos, Android, iOS, watchOS, tvOS, FreeBSD, SunOS, Arduino]\n-arch_target: [x86, x86_64, ppc32, ppc64le, ppc64, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr]\n-\n-# Rest of the settings are \"host\" settings:\n-# - For native building/cross building: Where the library/program will run.\n-# - For building cross compilation tools: Where the cross compiler will run.\n-os:\n- Windows:\n- subsystem: [None, cygwin, msys, msys2, wsl]\n- WindowsStore:\n- version: [\"8.1\", \"10.0\"]\n- Linux:\n- Macos:\n- version: [None, \"10.6\", \"10.7\", \"10.8\", \"10.9\", \"10.10\", \"10.11\", \"10.12\", \"10.13\", \"10.14\"]\n- Android:\n- api_level: ANY\n- iOS:\n- version: [\"7.0\", \"7.1\", \"8.0\", \"8.1\", \"8.2\", \"8.3\", \"9.0\", \"9.1\", \"9.2\", \"9.3\", \"10.0\", \"10.1\", \"10.2\", \"10.3\", \"11.0\", \"11.1\", \"11.2\", \"11.3\", \"11.4\", \"12.0\", \"12.1\"]\n- watchOS:\n- version: [\"4.0\", \"4.1\", \"4.2\", \"4.3\", \"5.0\", \"5.1\"]\n- tvOS:\n- version: [\"11.0\", \"11.1\", \"11.2\", \"11.3\", \"11.4\", \"12.0\", \"12.1\"]\n- FreeBSD:\n- SunOS:\n- Arduino:\n- board: ANY\n-arch: [x86, x86_64, ppc32, ppc64le, ppc64, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr]\n-compiler:\n- sun-cc:\n- version: [\"5.10\", \"5.11\", \"5.12\", \"5.13\", \"5.14\"]\n- threads: [None, posix]\n- libcxx: [libCstd, libstdcxx, libstlport, libstdc++]\n- gcc:\n- version: [\"4.1\", \"4.4\", \"4.5\", \"4.6\", \"4.7\", \"4.8\", \"4.9\",\n- \"5\", \"5.1\", \"5.2\", \"5.3\", \"5.4\", \"5.5\",\n- \"6\", \"6.1\", \"6.2\", \"6.3\", \"6.4\",\n- \"7\", \"7.1\", \"7.2\", \"7.3\",\n- \"8\", \"8.1\", \"8.2\"]\n- libcxx: [libstdc++, libstdc++11]\n- threads: [None, posix, win32] # Windows MinGW\n- exception: [None, dwarf2, sjlj, seh] # Windows MinGW\n- Visual Studio:\n- runtime: [MD, MT, MTd, MDd]\n- version: [\"8\", \"9\", \"10\", \"11\", \"12\", \"14\", \"15\"]\n- toolset: [None, v90, v100, v110, v110_xp, v120, v120_xp,\n- v140, v140_xp, v140_clang_c2, LLVM-vs2012, LLVM-vs2012_xp,\n- LLVM-vs2013, LLVM-vs2013_xp, LLVM-vs2014, LLVM-vs2014_xp,\n- LLVM-vs2017, LLVM-vs2017_xp, v141, v141_xp, v141_clang_c2]\n- clang:\n- version: [\"3.3\", \"3.4\", \"3.5\", \"3.6\", \"3.7\", \"3.8\", \"3.9\", \"4.0\",\n- \"5.0\", \"6.0\", \"7.0\",\n- \"8\"]\n- libcxx: [libstdc++, libstdc++11, libc++]\n- apple-clang:\n- version: [\"5.0\", \"5.1\", \"6.0\", \"6.1\", \"7.0\", \"7.3\", \"8.0\", \"8.1\", \"9.0\", \"9.1\", \"10.0\"]\n- libcxx: [libstdc++, libc++]\n-\n-build_type: [None, Debug, Release, RelWithDebInfo, MinSizeRel]\n-cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]\n-\"\"\"\n- self._update_settings_yml(old_settings)\n-\n # MIGRATE LOCAL CACHE TO GENERATE MISSING METADATA.json\n _migrate_create_metadata(self.cache, self.out)\n \ndiff --git a/conans/client/migrations_settings.py b/conans/client/migrations_settings.py\nnew file mode 100644\nindex 00000000000..b96bdbc186a\n--- /dev/null\n+++ b/conans/client/migrations_settings.py\n@@ -0,0 +1,290 @@\n+settings_1_9_0 = \"\"\"\n+# Only for cross building, 'os_build/arch_build' is the system that runs Conan\n+os_build: [Windows, WindowsStore, Linux, Macos, FreeBSD, SunOS]\n+arch_build: [x86, x86_64, ppc64le, ppc64, armv6, armv7, armv7hf, armv8, sparc, sparcv9, mips, mips64, avr, armv7s, armv7k]\n+\n+# Only 
for building cross compilation tools, 'os_target/arch_target' is the system for\n+# which the tools generate code\n+os_target: [Windows, Linux, Macos, Android, iOS, watchOS, tvOS, FreeBSD, SunOS, Arduino]\n+arch_target: [x86, x86_64, ppc64le, ppc64, armv6, armv7, armv7hf, armv8, sparc, sparcv9, mips, mips64, avr, armv7s, armv7k]\n+\n+# Rest of the settings are \"host\" settings:\n+# - For native building/cross building: Where the library/program will run.\n+# - For building cross compilation tools: Where the cross compiler will run.\n+os:\n+ Windows:\n+ subsystem: [None, cygwin, msys, msys2, wsl]\n+ WindowsStore:\n+ version: [\"8.1\", \"10.0\"]\n+ Linux:\n+ Macos:\n+ version: [None, \"10.6\", \"10.7\", \"10.8\", \"10.9\", \"10.10\", \"10.11\", \"10.12\", \"10.13\", \"10.14\"]\n+ Android:\n+ api_level: ANY\n+ iOS:\n+ version: [\"7.0\", \"7.1\", \"8.0\", \"8.1\", \"8.2\", \"8.3\", \"9.0\", \"9.1\", \"9.2\", \"9.3\", \"10.0\", \"10.1\", \"10.2\", \"10.3\", \"11.0\"]\n+ watchOS:\n+ version: [\"4.0\"]\n+ tvOS:\n+ version: [\"11.0\"]\n+ FreeBSD:\n+ SunOS:\n+ Arduino:\n+ board: ANY\n+arch: [x86, x86_64, ppc64le, ppc64, armv6, armv7, armv7hf, armv8, sparc, sparcv9, mips, mips64, avr, armv7s, armv7k]\n+compiler:\n+ sun-cc:\n+ version: [\"5.10\", \"5.11\", \"5.12\", \"5.13\", \"5.14\"]\n+ threads: [None, posix]\n+ libcxx: [libCstd, libstdcxx, libstlport, libstdc++]\n+ gcc:\n+ version: [\"4.1\", \"4.4\", \"4.5\", \"4.6\", \"4.7\", \"4.8\", \"4.9\",\n+ \"5\", \"5.1\", \"5.2\", \"5.3\", \"5.4\", \"5.5\",\n+ \"6\", \"6.1\", \"6.2\", \"6.3\", \"6.4\",\n+ \"7\", \"7.1\", \"7.2\", \"7.3\",\n+ \"8\", \"8.1\", \"8.2\"]\n+ libcxx: [libstdc++, libstdc++11]\n+ threads: [None, posix, win32] # Windows MinGW\n+ exception: [None, dwarf2, sjlj, seh] # Windows MinGW\n+ Visual Studio:\n+ runtime: [MD, MT, MTd, MDd]\n+ version: [\"8\", \"9\", \"10\", \"11\", \"12\", \"14\", \"15\"]\n+ toolset: [None, v90, v100, v110, v110_xp, v120, v120_xp,\n+ v140, v140_xp, v140_clang_c2, LLVM-vs2012, LLVM-vs2012_xp,\n+ LLVM-vs2013, LLVM-vs2013_xp, LLVM-vs2014, LLVM-vs2014_xp,\n+ LLVM-vs2017, LLVM-vs2017_xp, v141, v141_xp, v141_clang_c2]\n+ clang:\n+ version: [\"3.3\", \"3.4\", \"3.5\", \"3.6\", \"3.7\", \"3.8\", \"3.9\", \"4.0\",\n+ \"5.0\", \"6.0\", \"7.0\",\n+ \"8\"]\n+ libcxx: [libstdc++, libstdc++11, libc++]\n+ apple-clang:\n+ version: [\"5.0\", \"5.1\", \"6.0\", \"6.1\", \"7.0\", \"7.3\", \"8.0\", \"8.1\", \"9.0\", \"9.1\", \"10.0\"]\n+ libcxx: [libstdc++, libc++]\n+\n+build_type: [None, Debug, Release, RelWithDebInfo, MinSizeRel]\n+cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]\n+\"\"\"\n+\n+settings_1_9_1 = settings_1_9_0\n+settings_1_9_2 = settings_1_9_1\n+settings_1_10_0 = settings_1_9_2\n+settings_1_10_1 = settings_1_10_0\n+settings_1_10_2 = settings_1_10_1\n+settings_1_11_0 = settings_1_10_2\n+settings_1_11_1 = settings_1_11_0\n+settings_1_11_2 = settings_1_11_1\n+settings_1_12_0 = \"\"\"\n+# Only for cross building, 'os_build/arch_build' is the system that runs Conan\n+os_build: [Windows, WindowsStore, Linux, Macos, FreeBSD, SunOS]\n+arch_build: [x86, x86_64, ppc32, ppc64le, ppc64, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr]\n+\n+# Only for building cross compilation tools, 'os_target/arch_target' is the system for\n+# which the tools generate code\n+os_target: [Windows, Linux, Macos, Android, iOS, watchOS, tvOS, FreeBSD, SunOS, Arduino]\n+arch_target: [x86, x86_64, ppc32, ppc64le, ppc64, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, 
armv8.3, sparc, sparcv9, mips, mips64, avr]\n+\n+# Rest of the settings are \"host\" settings:\n+# - For native building/cross building: Where the library/program will run.\n+# - For building cross compilation tools: Where the cross compiler will run.\n+os:\n+ Windows:\n+ subsystem: [None, cygwin, msys, msys2, wsl]\n+ WindowsStore:\n+ version: [\"8.1\", \"10.0\"]\n+ Linux:\n+ Macos:\n+ version: [None, \"10.6\", \"10.7\", \"10.8\", \"10.9\", \"10.10\", \"10.11\", \"10.12\", \"10.13\", \"10.14\"]\n+ Android:\n+ api_level: ANY\n+ iOS:\n+ version: [\"7.0\", \"7.1\", \"8.0\", \"8.1\", \"8.2\", \"8.3\", \"9.0\", \"9.1\", \"9.2\", \"9.3\", \"10.0\", \"10.1\", \"10.2\", \"10.3\", \"11.0\", \"11.1\", \"11.2\", \"11.3\", \"11.4\", \"12.0\", \"12.1\"]\n+ watchOS:\n+ version: [\"4.0\", \"4.1\", \"4.2\", \"4.3\", \"5.0\", \"5.1\"]\n+ tvOS:\n+ version: [\"11.0\", \"11.1\", \"11.2\", \"11.3\", \"11.4\", \"12.0\", \"12.1\"]\n+ FreeBSD:\n+ SunOS:\n+ Arduino:\n+ board: ANY\n+arch: [x86, x86_64, ppc32, ppc64le, ppc64, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr]\n+compiler:\n+ sun-cc:\n+ version: [\"5.10\", \"5.11\", \"5.12\", \"5.13\", \"5.14\"]\n+ threads: [None, posix]\n+ libcxx: [libCstd, libstdcxx, libstlport, libstdc++]\n+ gcc:\n+ version: [\"4.1\", \"4.4\", \"4.5\", \"4.6\", \"4.7\", \"4.8\", \"4.9\",\n+ \"5\", \"5.1\", \"5.2\", \"5.3\", \"5.4\", \"5.5\",\n+ \"6\", \"6.1\", \"6.2\", \"6.3\", \"6.4\",\n+ \"7\", \"7.1\", \"7.2\", \"7.3\",\n+ \"8\", \"8.1\", \"8.2\"]\n+ libcxx: [libstdc++, libstdc++11]\n+ threads: [None, posix, win32] # Windows MinGW\n+ exception: [None, dwarf2, sjlj, seh] # Windows MinGW\n+ Visual Studio:\n+ runtime: [MD, MT, MTd, MDd]\n+ version: [\"8\", \"9\", \"10\", \"11\", \"12\", \"14\", \"15\"]\n+ toolset: [None, v90, v100, v110, v110_xp, v120, v120_xp,\n+ v140, v140_xp, v140_clang_c2, LLVM-vs2012, LLVM-vs2012_xp,\n+ LLVM-vs2013, LLVM-vs2013_xp, LLVM-vs2014, LLVM-vs2014_xp,\n+ LLVM-vs2017, LLVM-vs2017_xp, v141, v141_xp, v141_clang_c2]\n+ clang:\n+ version: [\"3.3\", \"3.4\", \"3.5\", \"3.6\", \"3.7\", \"3.8\", \"3.9\", \"4.0\",\n+ \"5.0\", \"6.0\", \"7.0\",\n+ \"8\"]\n+ libcxx: [libstdc++, libstdc++11, libc++]\n+ apple-clang:\n+ version: [\"5.0\", \"5.1\", \"6.0\", \"6.1\", \"7.0\", \"7.3\", \"8.0\", \"8.1\", \"9.0\", \"9.1\", \"10.0\"]\n+ libcxx: [libstdc++, libc++]\n+\n+build_type: [None, Debug, Release, RelWithDebInfo, MinSizeRel]\n+cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]\n+\"\"\"\n+\n+settings_1_12_1 = settings_1_12_0\n+settings_1_12_2 = settings_1_12_1\n+settings_1_12_3 = settings_1_12_2\n+settings_1_13_0 = \"\"\"\n+# Only for cross building, 'os_build/arch_build' is the system that runs Conan\n+os_build: [Windows, WindowsStore, Linux, Macos, FreeBSD, SunOS]\n+arch_build: [x86, x86_64, ppc32, ppc64le, ppc64, armv5el, armv5hf, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr]\n+\n+# Only for building cross compilation tools, 'os_target/arch_target' is the system for\n+# which the tools generate code\n+os_target: [Windows, Linux, Macos, Android, iOS, watchOS, tvOS, FreeBSD, SunOS, Arduino]\n+arch_target: [x86, x86_64, ppc32, ppc64le, ppc64, armv5el, armv5hf, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr]\n+\n+# Rest of the settings are \"host\" settings:\n+# - For native building/cross building: Where the library/program will run.\n+# - For building cross compilation tools: Where the cross 
compiler will run.\n+os:\n+ Windows:\n+ subsystem: [None, cygwin, msys, msys2, wsl]\n+ WindowsStore:\n+ version: [\"8.1\", \"10.0\"]\n+ Linux:\n+ Macos:\n+ version: [None, \"10.6\", \"10.7\", \"10.8\", \"10.9\", \"10.10\", \"10.11\", \"10.12\", \"10.13\", \"10.14\"]\n+ Android:\n+ api_level: ANY\n+ iOS:\n+ version: [\"7.0\", \"7.1\", \"8.0\", \"8.1\", \"8.2\", \"8.3\", \"9.0\", \"9.1\", \"9.2\", \"9.3\", \"10.0\", \"10.1\", \"10.2\", \"10.3\", \"11.0\", \"11.1\", \"11.2\", \"11.3\", \"11.4\", \"12.0\", \"12.1\"]\n+ watchOS:\n+ version: [\"4.0\", \"4.1\", \"4.2\", \"4.3\", \"5.0\", \"5.1\"]\n+ tvOS:\n+ version: [\"11.0\", \"11.1\", \"11.2\", \"11.3\", \"11.4\", \"12.0\", \"12.1\"]\n+ FreeBSD:\n+ SunOS:\n+ Arduino:\n+ board: ANY\n+arch: [x86, x86_64, ppc32, ppc64le, ppc64, armv5el, armv5hf, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr]\n+compiler:\n+ sun-cc:\n+ version: [\"5.10\", \"5.11\", \"5.12\", \"5.13\", \"5.14\"]\n+ threads: [None, posix]\n+ libcxx: [libCstd, libstdcxx, libstlport, libstdc++]\n+ gcc:\n+ version: [\"4.1\", \"4.4\", \"4.5\", \"4.6\", \"4.7\", \"4.8\", \"4.9\",\n+ \"5\", \"5.1\", \"5.2\", \"5.3\", \"5.4\", \"5.5\",\n+ \"6\", \"6.1\", \"6.2\", \"6.3\", \"6.4\",\n+ \"7\", \"7.1\", \"7.2\", \"7.3\",\n+ \"8\", \"8.1\", \"8.2\"]\n+ libcxx: [libstdc++, libstdc++11]\n+ threads: [None, posix, win32] # Windows MinGW\n+ exception: [None, dwarf2, sjlj, seh] # Windows MinGW\n+ Visual Studio:\n+ runtime: [MD, MT, MTd, MDd]\n+ version: [\"8\", \"9\", \"10\", \"11\", \"12\", \"14\", \"15\", \"16\"]\n+ toolset: [None, v90, v100, v110, v110_xp, v120, v120_xp,\n+ v140, v140_xp, v140_clang_c2, LLVM-vs2012, LLVM-vs2012_xp,\n+ LLVM-vs2013, LLVM-vs2013_xp, LLVM-vs2014, LLVM-vs2014_xp,\n+ LLVM-vs2017, LLVM-vs2017_xp, v141, v141_xp, v141_clang_c2, v142]\n+ clang:\n+ version: [\"3.3\", \"3.4\", \"3.5\", \"3.6\", \"3.7\", \"3.8\", \"3.9\", \"4.0\",\n+ \"5.0\", \"6.0\", \"7.0\",\n+ \"8\"]\n+ libcxx: [libstdc++, libstdc++11, libc++]\n+ apple-clang:\n+ version: [\"5.0\", \"5.1\", \"6.0\", \"6.1\", \"7.0\", \"7.3\", \"8.0\", \"8.1\", \"9.0\", \"9.1\", \"10.0\"]\n+ libcxx: [libstdc++, libc++]\n+\n+build_type: [None, Debug, Release, RelWithDebInfo, MinSizeRel]\n+cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]\n+\"\"\"\n+\n+settings_1_13_1 = settings_1_13_0\n+settings_1_13_2 = settings_1_13_1\n+settings_1_13_3 = settings_1_13_2\n+settings_1_14_0 = \"\"\"\n+# Only for cross building, 'os_build/arch_build' is the system that runs Conan\n+os_build: [Windows, WindowsStore, Linux, Macos, FreeBSD, SunOS]\n+arch_build: [x86, x86_64, ppc32, ppc64le, ppc64, armv5el, armv5hf, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr, s390, s390x]\n+\n+# Only for building cross compilation tools, 'os_target/arch_target' is the system for\n+# which the tools generate code\n+os_target: [Windows, Linux, Macos, Android, iOS, watchOS, tvOS, FreeBSD, SunOS, Arduino]\n+arch_target: [x86, x86_64, ppc32, ppc64le, ppc64, armv5el, armv5hf, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr, s390, s390x]\n+\n+# Rest of the settings are \"host\" settings:\n+# - For native building/cross building: Where the library/program will run.\n+# - For building cross compilation tools: Where the cross compiler will run.\n+os:\n+ Windows:\n+ subsystem: [None, cygwin, msys, msys2, wsl]\n+ WindowsStore:\n+ version: [\"8.1\", \"10.0\"]\n+ Linux:\n+ Macos:\n+ version: [None, 
\"10.6\", \"10.7\", \"10.8\", \"10.9\", \"10.10\", \"10.11\", \"10.12\", \"10.13\", \"10.14\"]\n+ Android:\n+ api_level: ANY\n+ iOS:\n+ version: [\"7.0\", \"7.1\", \"8.0\", \"8.1\", \"8.2\", \"8.3\", \"9.0\", \"9.1\", \"9.2\", \"9.3\", \"10.0\", \"10.1\", \"10.2\", \"10.3\", \"11.0\", \"11.1\", \"11.2\", \"11.3\", \"11.4\", \"12.0\", \"12.1\"]\n+ watchOS:\n+ version: [\"4.0\", \"4.1\", \"4.2\", \"4.3\", \"5.0\", \"5.1\"]\n+ tvOS:\n+ version: [\"11.0\", \"11.1\", \"11.2\", \"11.3\", \"11.4\", \"12.0\", \"12.1\"]\n+ FreeBSD:\n+ SunOS:\n+ Arduino:\n+ board: ANY\n+arch: [x86, x86_64, ppc32, ppc64le, ppc64, armv5el, armv5hf, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr, s390, s390x]\n+compiler:\n+ sun-cc:\n+ version: [\"5.10\", \"5.11\", \"5.12\", \"5.13\", \"5.14\"]\n+ threads: [None, posix]\n+ libcxx: [libCstd, libstdcxx, libstlport, libstdc++]\n+ gcc:\n+ version: [\"4.1\", \"4.4\", \"4.5\", \"4.6\", \"4.7\", \"4.8\", \"4.9\",\n+ \"5\", \"5.1\", \"5.2\", \"5.3\", \"5.4\", \"5.5\",\n+ \"6\", \"6.1\", \"6.2\", \"6.3\", \"6.4\",\n+ \"7\", \"7.1\", \"7.2\", \"7.3\",\n+ \"8\", \"8.1\", \"8.2\"]\n+ libcxx: [libstdc++, libstdc++11]\n+ threads: [None, posix, win32] # Windows MinGW\n+ exception: [None, dwarf2, sjlj, seh] # Windows MinGW\n+ Visual Studio:\n+ runtime: [MD, MT, MTd, MDd]\n+ version: [\"8\", \"9\", \"10\", \"11\", \"12\", \"14\", \"15\", \"16\"]\n+ toolset: [None, v90, v100, v110, v110_xp, v120, v120_xp,\n+ v140, v140_xp, v140_clang_c2, LLVM-vs2012, LLVM-vs2012_xp,\n+ LLVM-vs2013, LLVM-vs2013_xp, LLVM-vs2014, LLVM-vs2014_xp,\n+ LLVM-vs2017, LLVM-vs2017_xp, v141, v141_xp, v141_clang_c2, v142]\n+ clang:\n+ version: [\"3.3\", \"3.4\", \"3.5\", \"3.6\", \"3.7\", \"3.8\", \"3.9\", \"4.0\",\n+ \"5.0\", \"6.0\", \"7.0\",\n+ \"8\"]\n+ libcxx: [libstdc++, libstdc++11, libc++]\n+ apple-clang:\n+ version: [\"5.0\", \"5.1\", \"6.0\", \"6.1\", \"7.0\", \"7.3\", \"8.0\", \"8.1\", \"9.0\", \"9.1\", \"10.0\"]\n+ libcxx: [libstdc++, libc++]\n+\n+build_type: [None, Debug, Release, RelWithDebInfo, MinSizeRel]\n+cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]\n+\"\"\"\n+\n+settings_1_14_1 = settings_1_14_0\n+settings_1_14_2 = settings_1_14_1\n+settings_1_14_3 = settings_1_14_2\n+settings_1_14_4 = settings_1_14_3\ndiff --git a/conans/test/functional/old/test_migrations.py b/conans/test/functional/old/test_migrations.py\nindex bbb08d95049..99208c1285c 100644\n--- a/conans/test/functional/old/test_migrations.py\n+++ b/conans/test/functional/old/test_migrations.py\n@@ -2,11 +2,13 @@\n import os\n import unittest\n \n+from nose.plugins.skip import Skip\n from six import StringIO\n \n from conans import __version__\n from conans.client.migrations import migrate_plugins_to_hooks, migrate_to_default_profile\n from conans.client.output import ConanOutput\n+from conans.client.tools.version import Version\n from conans.migrations import CONAN_VERSION\n from conans.model.ref import ConanFileReference\n from conans.test.utils.conanfile import TestConanFile\n@@ -17,6 +19,20 @@\n \n class TestMigrations(unittest.TestCase):\n \n+ def is_there_var_for_settings_previous_version_test(self):\n+ from conans import __version__ as current_version\n+\n+ tmp = Version(current_version)\n+ if tmp.minor == 0:\n+ return unittest.skip(\"2.0, this will make sense for 2.1\")\n+\n+ previous_version = \"{}.{}\".format(tmp.major, int(tmp.minor) - 1)\n+\n+ from conans.client import migrations_settings\n+ var_name = \"settings_{}\".format(previous_version.replace(\".\", 
\"_\"))\n+ self.assertTrue(any([i for i in dir(migrations_settings) if i.startswith(var_name)]),\n+ \"Introduce the previous settings.yml file in the 'migrations_settings.yml\")\n+\n def test_migrate_revision_metadata(self):\n # https://github.com/conan-io/conan/issues/4898\n client = TestClient()\n"
}
|
[
{
"diff_hunk": "@@ -17,6 +19,20 @@\n \n class TestMigrations(unittest.TestCase):\n \n+ def is_there_var_for_settings_previous_version_test(self):\n+ from conans import __version__ as current_version\n+\n+ tmp = Version(current_version)\n+ if tmp.minor == 0:\n+ return unittest.skip(\"2.0, this will make sense for 2.1\")\n+\n+ previous_version = \"{}.{}\".format(tmp.major, int(tmp.minor) - 1)",
"line": null,
"original_line": 29,
"original_start_line": null,
"path": "conans/test/functional/old/test_migrations.py",
"start_line": null,
"text": "@author:\nTodo: improve with detection of previous patch too when patch > 0"
},
{
"diff_hunk": "@@ -25,33 +26,49 @@ def __init__(self, cache, current_version, out):\n super(ClientMigrator, self).__init__(cache.conan_folder, cache.store,\n current_version, out)\n \n- def _update_settings_yml(self, old_settings):\n+ def _update_settings_yml(self, old_version):\n+\n from conans.client.conf import default_settings_yml\n settings_path = self.cache.settings_path\n if not os.path.exists(settings_path):\n self.out.warn(\"Migration: This conan installation doesn't have settings yet\")\n self.out.warn(\"Nothing to migrate here, settings will be generated automatically\")\n return\n \n- current_settings = load(self.cache.settings_path)\n- if current_settings != default_settings_yml:\n- self.out.warn(\"Migration: Updating settings.yml\")\n- if current_settings != old_settings:\n- new_path = self.cache.settings_path + \".new\"\n- save(new_path, default_settings_yml)\n- self.out.warn(\"*\" * 40)\n- self.out.warn(\"settings.yml is locally modified, can't be updated\")\n- self.out.warn(\"The new settings.yml has been stored in: %s\" % new_path)\n- self.out.warn(\"*\" * 40)\n+ var_name = \"settings_{}\".format(old_version.replace(\".\", \"_\"))\n+\n+ def save_new():\n+ new_path = self.cache.settings_path + \".new\"\n+ save(new_path, default_settings_yml)\n+ self.out.warn(\"*\" * 40)\n+ self.out.warn(\"settings.yml is locally modified, can't be updated\")\n+ self.out.warn(\"The new settings.yml has been stored in: %s\" % new_path)\n+ self.out.warn(\"*\" * 40)\n+\n+ self.out.warn(\"Migration: Updating settings.yml\")\n+ if hasattr(migrations_settings, var_name):\n+ version_default_contents = getattr(migrations_settings, var_name)\n+ if version_default_contents != default_settings_yml:\n+ current_settings = load(self.cache.settings_path)\n+ if current_settings != version_default_contents:\n+ save_new()\n+ else:\n+ save(self.cache.settings_path, default_settings_yml)\n else:\n- save(self.cache.settings_path, default_settings_yml)\n+ self.out.warn(\"Migration: Settings already up to date\")",
"line": null,
"original_line": 58,
"original_start_line": null,
"path": "conans/client/migrations.py",
"start_line": null,
"text": "@user1:\nWhy a warning?"
}
] |
58571da28f70b6b90a8e5fac16d371eb155818f0
|
diff --git a/conans/client/migrations.py b/conans/client/migrations.py
index db5a91efb42..199bca0957c 100644
--- a/conans/client/migrations.py
+++ b/conans/client/migrations.py
@@ -2,6 +2,7 @@
import shutil
from conans import DEFAULT_REVISION_V1
+from conans.client import migrations_settings
from conans.client.cache.cache import CONAN_CONF, PROFILES_FOLDER
from conans.client.conf.config_installer import _ConfigOrigin, _save_configs
from conans.client.tools import replace_in_file
@@ -25,7 +26,8 @@ def __init__(self, cache, current_version, out):
super(ClientMigrator, self).__init__(cache.conan_folder, cache.store,
current_version, out)
- def _update_settings_yml(self, old_settings):
+ def _update_settings_yml(self, old_version):
+
from conans.client.conf import default_settings_yml
settings_path = self.cache.settings_path
if not os.path.exists(settings_path):
@@ -33,18 +35,30 @@ def _update_settings_yml(self, old_settings):
self.out.warn("Nothing to migrate here, settings will be generated automatically")
return
- current_settings = load(self.cache.settings_path)
- if current_settings != default_settings_yml:
- self.out.warn("Migration: Updating settings.yml")
- if current_settings != old_settings:
- new_path = self.cache.settings_path + ".new"
- save(new_path, default_settings_yml)
- self.out.warn("*" * 40)
- self.out.warn("settings.yml is locally modified, can't be updated")
- self.out.warn("The new settings.yml has been stored in: %s" % new_path)
- self.out.warn("*" * 40)
+ var_name = "settings_{}".format(old_version.replace(".", "_"))
+
+ def save_new():
+ new_path = self.cache.settings_path + ".new"
+ save(new_path, default_settings_yml)
+ self.out.warn("*" * 40)
+ self.out.warn("settings.yml is locally modified, can't be updated")
+ self.out.warn("The new settings.yml has been stored in: %s" % new_path)
+ self.out.warn("*" * 40)
+
+ self.out.warn("Migration: Updating settings.yml")
+ if hasattr(migrations_settings, var_name):
+ version_default_contents = getattr(migrations_settings, var_name)
+ if version_default_contents != default_settings_yml:
+ current_settings = load(self.cache.settings_path)
+ if current_settings != version_default_contents:
+ save_new()
+ else:
+ save(self.cache.settings_path, default_settings_yml)
else:
- save(self.cache.settings_path, default_settings_yml)
+ self.out.info("Migration: Settings already up to date")
+ else:
+ # We don't have the value for that version, so don't override
+ save_new()
def _make_migrations(self, old_version):
# ############### FILL THIS METHOD WITH THE REQUIRED ACTIONS ##############
@@ -52,6 +66,9 @@ def _make_migrations(self, old_version):
if old_version is None:
return
+ # Migrate the settings if they were the default for that version
+ self._update_settings_yml(old_version)
+
if old_version < Version("0.25"):
from conans.paths import DEFAULT_PROFILE_NAME
default_profile_path = os.path.join(self.cache.conan_folder, PROFILES_FOLDER,
@@ -73,75 +90,6 @@ def _make_migrations(self, old_version):
migrate_plugins_to_hooks(self.cache)
if old_version < Version("1.13.0"):
- old_settings = """
-# Only for cross building, 'os_build/arch_build' is the system that runs Conan
-os_build: [Windows, WindowsStore, Linux, Macos, FreeBSD, SunOS]
-arch_build: [x86, x86_64, ppc32, ppc64le, ppc64, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr]
-
-# Only for building cross compilation tools, 'os_target/arch_target' is the system for
-# which the tools generate code
-os_target: [Windows, Linux, Macos, Android, iOS, watchOS, tvOS, FreeBSD, SunOS, Arduino]
-arch_target: [x86, x86_64, ppc32, ppc64le, ppc64, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr]
-
-# Rest of the settings are "host" settings:
-# - For native building/cross building: Where the library/program will run.
-# - For building cross compilation tools: Where the cross compiler will run.
-os:
- Windows:
- subsystem: [None, cygwin, msys, msys2, wsl]
- WindowsStore:
- version: ["8.1", "10.0"]
- Linux:
- Macos:
- version: [None, "10.6", "10.7", "10.8", "10.9", "10.10", "10.11", "10.12", "10.13", "10.14"]
- Android:
- api_level: ANY
- iOS:
- version: ["7.0", "7.1", "8.0", "8.1", "8.2", "8.3", "9.0", "9.1", "9.2", "9.3", "10.0", "10.1", "10.2", "10.3", "11.0", "11.1", "11.2", "11.3", "11.4", "12.0", "12.1"]
- watchOS:
- version: ["4.0", "4.1", "4.2", "4.3", "5.0", "5.1"]
- tvOS:
- version: ["11.0", "11.1", "11.2", "11.3", "11.4", "12.0", "12.1"]
- FreeBSD:
- SunOS:
- Arduino:
- board: ANY
-arch: [x86, x86_64, ppc32, ppc64le, ppc64, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr]
-compiler:
- sun-cc:
- version: ["5.10", "5.11", "5.12", "5.13", "5.14"]
- threads: [None, posix]
- libcxx: [libCstd, libstdcxx, libstlport, libstdc++]
- gcc:
- version: ["4.1", "4.4", "4.5", "4.6", "4.7", "4.8", "4.9",
- "5", "5.1", "5.2", "5.3", "5.4", "5.5",
- "6", "6.1", "6.2", "6.3", "6.4",
- "7", "7.1", "7.2", "7.3",
- "8", "8.1", "8.2"]
- libcxx: [libstdc++, libstdc++11]
- threads: [None, posix, win32] # Windows MinGW
- exception: [None, dwarf2, sjlj, seh] # Windows MinGW
- Visual Studio:
- runtime: [MD, MT, MTd, MDd]
- version: ["8", "9", "10", "11", "12", "14", "15"]
- toolset: [None, v90, v100, v110, v110_xp, v120, v120_xp,
- v140, v140_xp, v140_clang_c2, LLVM-vs2012, LLVM-vs2012_xp,
- LLVM-vs2013, LLVM-vs2013_xp, LLVM-vs2014, LLVM-vs2014_xp,
- LLVM-vs2017, LLVM-vs2017_xp, v141, v141_xp, v141_clang_c2]
- clang:
- version: ["3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "3.9", "4.0",
- "5.0", "6.0", "7.0",
- "8"]
- libcxx: [libstdc++, libstdc++11, libc++]
- apple-clang:
- version: ["5.0", "5.1", "6.0", "6.1", "7.0", "7.3", "8.0", "8.1", "9.0", "9.1", "10.0"]
- libcxx: [libstdc++, libc++]
-
-build_type: [None, Debug, Release, RelWithDebInfo, MinSizeRel]
-cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]
-"""
- self._update_settings_yml(old_settings)
-
# MIGRATE LOCAL CACHE TO GENERATE MISSING METADATA.json
_migrate_create_metadata(self.cache, self.out)
diff --git a/conans/client/migrations_settings.py b/conans/client/migrations_settings.py
new file mode 100644
index 00000000000..b96bdbc186a
--- /dev/null
+++ b/conans/client/migrations_settings.py
@@ -0,0 +1,290 @@
+settings_1_9_0 = """
+# Only for cross building, 'os_build/arch_build' is the system that runs Conan
+os_build: [Windows, WindowsStore, Linux, Macos, FreeBSD, SunOS]
+arch_build: [x86, x86_64, ppc64le, ppc64, armv6, armv7, armv7hf, armv8, sparc, sparcv9, mips, mips64, avr, armv7s, armv7k]
+
+# Only for building cross compilation tools, 'os_target/arch_target' is the system for
+# which the tools generate code
+os_target: [Windows, Linux, Macos, Android, iOS, watchOS, tvOS, FreeBSD, SunOS, Arduino]
+arch_target: [x86, x86_64, ppc64le, ppc64, armv6, armv7, armv7hf, armv8, sparc, sparcv9, mips, mips64, avr, armv7s, armv7k]
+
+# Rest of the settings are "host" settings:
+# - For native building/cross building: Where the library/program will run.
+# - For building cross compilation tools: Where the cross compiler will run.
+os:
+ Windows:
+ subsystem: [None, cygwin, msys, msys2, wsl]
+ WindowsStore:
+ version: ["8.1", "10.0"]
+ Linux:
+ Macos:
+ version: [None, "10.6", "10.7", "10.8", "10.9", "10.10", "10.11", "10.12", "10.13", "10.14"]
+ Android:
+ api_level: ANY
+ iOS:
+ version: ["7.0", "7.1", "8.0", "8.1", "8.2", "8.3", "9.0", "9.1", "9.2", "9.3", "10.0", "10.1", "10.2", "10.3", "11.0"]
+ watchOS:
+ version: ["4.0"]
+ tvOS:
+ version: ["11.0"]
+ FreeBSD:
+ SunOS:
+ Arduino:
+ board: ANY
+arch: [x86, x86_64, ppc64le, ppc64, armv6, armv7, armv7hf, armv8, sparc, sparcv9, mips, mips64, avr, armv7s, armv7k]
+compiler:
+ sun-cc:
+ version: ["5.10", "5.11", "5.12", "5.13", "5.14"]
+ threads: [None, posix]
+ libcxx: [libCstd, libstdcxx, libstlport, libstdc++]
+ gcc:
+ version: ["4.1", "4.4", "4.5", "4.6", "4.7", "4.8", "4.9",
+ "5", "5.1", "5.2", "5.3", "5.4", "5.5",
+ "6", "6.1", "6.2", "6.3", "6.4",
+ "7", "7.1", "7.2", "7.3",
+ "8", "8.1", "8.2"]
+ libcxx: [libstdc++, libstdc++11]
+ threads: [None, posix, win32] # Windows MinGW
+ exception: [None, dwarf2, sjlj, seh] # Windows MinGW
+ Visual Studio:
+ runtime: [MD, MT, MTd, MDd]
+ version: ["8", "9", "10", "11", "12", "14", "15"]
+ toolset: [None, v90, v100, v110, v110_xp, v120, v120_xp,
+ v140, v140_xp, v140_clang_c2, LLVM-vs2012, LLVM-vs2012_xp,
+ LLVM-vs2013, LLVM-vs2013_xp, LLVM-vs2014, LLVM-vs2014_xp,
+ LLVM-vs2017, LLVM-vs2017_xp, v141, v141_xp, v141_clang_c2]
+ clang:
+ version: ["3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "3.9", "4.0",
+ "5.0", "6.0", "7.0",
+ "8"]
+ libcxx: [libstdc++, libstdc++11, libc++]
+ apple-clang:
+ version: ["5.0", "5.1", "6.0", "6.1", "7.0", "7.3", "8.0", "8.1", "9.0", "9.1", "10.0"]
+ libcxx: [libstdc++, libc++]
+
+build_type: [None, Debug, Release, RelWithDebInfo, MinSizeRel]
+cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]
+"""
+
+settings_1_9_1 = settings_1_9_0
+settings_1_9_2 = settings_1_9_1
+settings_1_10_0 = settings_1_9_2
+settings_1_10_1 = settings_1_10_0
+settings_1_10_2 = settings_1_10_1
+settings_1_11_0 = settings_1_10_2
+settings_1_11_1 = settings_1_11_0
+settings_1_11_2 = settings_1_11_1
+settings_1_12_0 = """
+# Only for cross building, 'os_build/arch_build' is the system that runs Conan
+os_build: [Windows, WindowsStore, Linux, Macos, FreeBSD, SunOS]
+arch_build: [x86, x86_64, ppc32, ppc64le, ppc64, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr]
+
+# Only for building cross compilation tools, 'os_target/arch_target' is the system for
+# which the tools generate code
+os_target: [Windows, Linux, Macos, Android, iOS, watchOS, tvOS, FreeBSD, SunOS, Arduino]
+arch_target: [x86, x86_64, ppc32, ppc64le, ppc64, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr]
+
+# Rest of the settings are "host" settings:
+# - For native building/cross building: Where the library/program will run.
+# - For building cross compilation tools: Where the cross compiler will run.
+os:
+ Windows:
+ subsystem: [None, cygwin, msys, msys2, wsl]
+ WindowsStore:
+ version: ["8.1", "10.0"]
+ Linux:
+ Macos:
+ version: [None, "10.6", "10.7", "10.8", "10.9", "10.10", "10.11", "10.12", "10.13", "10.14"]
+ Android:
+ api_level: ANY
+ iOS:
+ version: ["7.0", "7.1", "8.0", "8.1", "8.2", "8.3", "9.0", "9.1", "9.2", "9.3", "10.0", "10.1", "10.2", "10.3", "11.0", "11.1", "11.2", "11.3", "11.4", "12.0", "12.1"]
+ watchOS:
+ version: ["4.0", "4.1", "4.2", "4.3", "5.0", "5.1"]
+ tvOS:
+ version: ["11.0", "11.1", "11.2", "11.3", "11.4", "12.0", "12.1"]
+ FreeBSD:
+ SunOS:
+ Arduino:
+ board: ANY
+arch: [x86, x86_64, ppc32, ppc64le, ppc64, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr]
+compiler:
+ sun-cc:
+ version: ["5.10", "5.11", "5.12", "5.13", "5.14"]
+ threads: [None, posix]
+ libcxx: [libCstd, libstdcxx, libstlport, libstdc++]
+ gcc:
+ version: ["4.1", "4.4", "4.5", "4.6", "4.7", "4.8", "4.9",
+ "5", "5.1", "5.2", "5.3", "5.4", "5.5",
+ "6", "6.1", "6.2", "6.3", "6.4",
+ "7", "7.1", "7.2", "7.3",
+ "8", "8.1", "8.2"]
+ libcxx: [libstdc++, libstdc++11]
+ threads: [None, posix, win32] # Windows MinGW
+ exception: [None, dwarf2, sjlj, seh] # Windows MinGW
+ Visual Studio:
+ runtime: [MD, MT, MTd, MDd]
+ version: ["8", "9", "10", "11", "12", "14", "15"]
+ toolset: [None, v90, v100, v110, v110_xp, v120, v120_xp,
+ v140, v140_xp, v140_clang_c2, LLVM-vs2012, LLVM-vs2012_xp,
+ LLVM-vs2013, LLVM-vs2013_xp, LLVM-vs2014, LLVM-vs2014_xp,
+ LLVM-vs2017, LLVM-vs2017_xp, v141, v141_xp, v141_clang_c2]
+ clang:
+ version: ["3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "3.9", "4.0",
+ "5.0", "6.0", "7.0",
+ "8"]
+ libcxx: [libstdc++, libstdc++11, libc++]
+ apple-clang:
+ version: ["5.0", "5.1", "6.0", "6.1", "7.0", "7.3", "8.0", "8.1", "9.0", "9.1", "10.0"]
+ libcxx: [libstdc++, libc++]
+
+build_type: [None, Debug, Release, RelWithDebInfo, MinSizeRel]
+cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]
+"""
+
+settings_1_12_1 = settings_1_12_0
+settings_1_12_2 = settings_1_12_1
+settings_1_12_3 = settings_1_12_2
+settings_1_13_0 = """
+# Only for cross building, 'os_build/arch_build' is the system that runs Conan
+os_build: [Windows, WindowsStore, Linux, Macos, FreeBSD, SunOS]
+arch_build: [x86, x86_64, ppc32, ppc64le, ppc64, armv5el, armv5hf, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr]
+
+# Only for building cross compilation tools, 'os_target/arch_target' is the system for
+# which the tools generate code
+os_target: [Windows, Linux, Macos, Android, iOS, watchOS, tvOS, FreeBSD, SunOS, Arduino]
+arch_target: [x86, x86_64, ppc32, ppc64le, ppc64, armv5el, armv5hf, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr]
+
+# Rest of the settings are "host" settings:
+# - For native building/cross building: Where the library/program will run.
+# - For building cross compilation tools: Where the cross compiler will run.
+os:
+ Windows:
+ subsystem: [None, cygwin, msys, msys2, wsl]
+ WindowsStore:
+ version: ["8.1", "10.0"]
+ Linux:
+ Macos:
+ version: [None, "10.6", "10.7", "10.8", "10.9", "10.10", "10.11", "10.12", "10.13", "10.14"]
+ Android:
+ api_level: ANY
+ iOS:
+ version: ["7.0", "7.1", "8.0", "8.1", "8.2", "8.3", "9.0", "9.1", "9.2", "9.3", "10.0", "10.1", "10.2", "10.3", "11.0", "11.1", "11.2", "11.3", "11.4", "12.0", "12.1"]
+ watchOS:
+ version: ["4.0", "4.1", "4.2", "4.3", "5.0", "5.1"]
+ tvOS:
+ version: ["11.0", "11.1", "11.2", "11.3", "11.4", "12.0", "12.1"]
+ FreeBSD:
+ SunOS:
+ Arduino:
+ board: ANY
+arch: [x86, x86_64, ppc32, ppc64le, ppc64, armv5el, armv5hf, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr]
+compiler:
+ sun-cc:
+ version: ["5.10", "5.11", "5.12", "5.13", "5.14"]
+ threads: [None, posix]
+ libcxx: [libCstd, libstdcxx, libstlport, libstdc++]
+ gcc:
+ version: ["4.1", "4.4", "4.5", "4.6", "4.7", "4.8", "4.9",
+ "5", "5.1", "5.2", "5.3", "5.4", "5.5",
+ "6", "6.1", "6.2", "6.3", "6.4",
+ "7", "7.1", "7.2", "7.3",
+ "8", "8.1", "8.2"]
+ libcxx: [libstdc++, libstdc++11]
+ threads: [None, posix, win32] # Windows MinGW
+ exception: [None, dwarf2, sjlj, seh] # Windows MinGW
+ Visual Studio:
+ runtime: [MD, MT, MTd, MDd]
+ version: ["8", "9", "10", "11", "12", "14", "15", "16"]
+ toolset: [None, v90, v100, v110, v110_xp, v120, v120_xp,
+ v140, v140_xp, v140_clang_c2, LLVM-vs2012, LLVM-vs2012_xp,
+ LLVM-vs2013, LLVM-vs2013_xp, LLVM-vs2014, LLVM-vs2014_xp,
+ LLVM-vs2017, LLVM-vs2017_xp, v141, v141_xp, v141_clang_c2, v142]
+ clang:
+ version: ["3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "3.9", "4.0",
+ "5.0", "6.0", "7.0",
+ "8"]
+ libcxx: [libstdc++, libstdc++11, libc++]
+ apple-clang:
+ version: ["5.0", "5.1", "6.0", "6.1", "7.0", "7.3", "8.0", "8.1", "9.0", "9.1", "10.0"]
+ libcxx: [libstdc++, libc++]
+
+build_type: [None, Debug, Release, RelWithDebInfo, MinSizeRel]
+cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]
+"""
+
+settings_1_13_1 = settings_1_13_0
+settings_1_13_2 = settings_1_13_1
+settings_1_13_3 = settings_1_13_2
+settings_1_14_0 = """
+# Only for cross building, 'os_build/arch_build' is the system that runs Conan
+os_build: [Windows, WindowsStore, Linux, Macos, FreeBSD, SunOS]
+arch_build: [x86, x86_64, ppc32, ppc64le, ppc64, armv5el, armv5hf, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr, s390, s390x]
+
+# Only for building cross compilation tools, 'os_target/arch_target' is the system for
+# which the tools generate code
+os_target: [Windows, Linux, Macos, Android, iOS, watchOS, tvOS, FreeBSD, SunOS, Arduino]
+arch_target: [x86, x86_64, ppc32, ppc64le, ppc64, armv5el, armv5hf, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr, s390, s390x]
+
+# Rest of the settings are "host" settings:
+# - For native building/cross building: Where the library/program will run.
+# - For building cross compilation tools: Where the cross compiler will run.
+os:
+ Windows:
+ subsystem: [None, cygwin, msys, msys2, wsl]
+ WindowsStore:
+ version: ["8.1", "10.0"]
+ Linux:
+ Macos:
+ version: [None, "10.6", "10.7", "10.8", "10.9", "10.10", "10.11", "10.12", "10.13", "10.14"]
+ Android:
+ api_level: ANY
+ iOS:
+ version: ["7.0", "7.1", "8.0", "8.1", "8.2", "8.3", "9.0", "9.1", "9.2", "9.3", "10.0", "10.1", "10.2", "10.3", "11.0", "11.1", "11.2", "11.3", "11.4", "12.0", "12.1"]
+ watchOS:
+ version: ["4.0", "4.1", "4.2", "4.3", "5.0", "5.1"]
+ tvOS:
+ version: ["11.0", "11.1", "11.2", "11.3", "11.4", "12.0", "12.1"]
+ FreeBSD:
+ SunOS:
+ Arduino:
+ board: ANY
+arch: [x86, x86_64, ppc32, ppc64le, ppc64, armv5el, armv5hf, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr, s390, s390x]
+compiler:
+ sun-cc:
+ version: ["5.10", "5.11", "5.12", "5.13", "5.14"]
+ threads: [None, posix]
+ libcxx: [libCstd, libstdcxx, libstlport, libstdc++]
+ gcc:
+ version: ["4.1", "4.4", "4.5", "4.6", "4.7", "4.8", "4.9",
+ "5", "5.1", "5.2", "5.3", "5.4", "5.5",
+ "6", "6.1", "6.2", "6.3", "6.4",
+ "7", "7.1", "7.2", "7.3",
+ "8", "8.1", "8.2"]
+ libcxx: [libstdc++, libstdc++11]
+ threads: [None, posix, win32] # Windows MinGW
+ exception: [None, dwarf2, sjlj, seh] # Windows MinGW
+ Visual Studio:
+ runtime: [MD, MT, MTd, MDd]
+ version: ["8", "9", "10", "11", "12", "14", "15", "16"]
+ toolset: [None, v90, v100, v110, v110_xp, v120, v120_xp,
+ v140, v140_xp, v140_clang_c2, LLVM-vs2012, LLVM-vs2012_xp,
+ LLVM-vs2013, LLVM-vs2013_xp, LLVM-vs2014, LLVM-vs2014_xp,
+ LLVM-vs2017, LLVM-vs2017_xp, v141, v141_xp, v141_clang_c2, v142]
+ clang:
+ version: ["3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "3.9", "4.0",
+ "5.0", "6.0", "7.0",
+ "8"]
+ libcxx: [libstdc++, libstdc++11, libc++]
+ apple-clang:
+ version: ["5.0", "5.1", "6.0", "6.1", "7.0", "7.3", "8.0", "8.1", "9.0", "9.1", "10.0"]
+ libcxx: [libstdc++, libc++]
+
+build_type: [None, Debug, Release, RelWithDebInfo, MinSizeRel]
+cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]
+"""
+
+settings_1_14_1 = settings_1_14_0
+settings_1_14_2 = settings_1_14_1
+settings_1_14_3 = settings_1_14_2
+settings_1_14_4 = settings_1_14_3
diff --git a/conans/test/functional/old/test_migrations.py b/conans/test/functional/old/test_migrations.py
index bbb08d95049..6398d296c56 100644
--- a/conans/test/functional/old/test_migrations.py
+++ b/conans/test/functional/old/test_migrations.py
@@ -2,11 +2,13 @@
import os
import unittest
+from nose.plugins.skip import Skip
from six import StringIO
from conans import __version__
from conans.client.migrations import migrate_plugins_to_hooks, migrate_to_default_profile
from conans.client.output import ConanOutput
+from conans.client.tools.version import Version
from conans.migrations import CONAN_VERSION
from conans.model.ref import ConanFileReference
from conans.test.utils.conanfile import TestConanFile
@@ -17,6 +19,22 @@
class TestMigrations(unittest.TestCase):
+ def is_there_var_for_settings_previous_version_test(self):
+ from conans import __version__ as current_version
+
+ tmp = Version(current_version)
+ if int(tmp.minor) == 0:
+ return unittest.skip("2.0, this will make sense for 2.1")
+ if int(tmp.patch) > 0:
+ previous_version = "{}.{}.{}".format(tmp.major, tmp.minor, int(tmp.patch) - 1)
+ else:
+ previous_version = "{}.{}.0".format(tmp.major, int(tmp.minor) - 1)
+
+ from conans.client import migrations_settings
+ var_name = "settings_{}".format(previous_version.replace(".", "_"))
+ self.assertTrue(any([i for i in dir(migrations_settings) if i == var_name]),
+ "Introduce the previous settings.yml file in the 'migrations_settings.yml")
+
def test_migrate_revision_metadata(self):
# https://github.com/conan-io/conan/issues/4898
client = TestClient()
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Code Refactoring / Architectural Improvement"
}
|
|
conan-io__conan-5206@cb98a7b
|
conan-io/conan
|
Python
| 5,206
|
Exception errors to stderr
|
Changelog: Feature: Print errors and warnings to `stderr`
Docs: omit
Close #5207
|
2019-05-23T11:19:14Z
|
stderr not used by Conan
At a minimum, the exception messages printed in `command.py` should go to stderr. This is important to improve error feedback in CI automation.
Maybe we should add a stderr stream to the Conan output and also print the `.error` messages there; a sketch of the idea follows.
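
A minimal sketch of that idea, assuming a second stream is injected into the output object (class and method names are illustrative, not Conan's actual API):

```python
import sys


class Output(object):
    """Illustrative sketch only: plain messages go to stdout, while warnings
    and errors go to stderr so CI tooling can capture them separately."""

    def __init__(self, stream=sys.stdout, stream_err=sys.stderr):
        self._stream = stream
        self._stream_err = stream_err

    def info(self, data):
        self._stream.write("{}\n".format(data))

    def warn(self, data):
        self._stream_err.write("WARN: {}\n".format(data))

    def error(self, data):
        self._stream_err.write("ERROR: {}\n".format(data))
```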
|
[
{
"body": "At least, the exception messages printed in the `command.py` should go to stderr. This is important to improve the CI automation feedback on errors.\r\n\r\nMaybe we should have the stderr stream on the conan output and print there also the .error messages",
"number": 5207,
"title": "stderr not used by Conan"
}
] |
a8eccd5da44e390f83f6c3242628eeb8e8f2a36c
|
{
"head_commit": "cb98a7bcac5108cacb3b557fddbd78a1d2f742a4",
"head_commit_message": "Exception errors to stderr",
"patch_to_review": "diff --git a/conans/client/command.py b/conans/client/command.py\nindex a1085b3985d..585e64914ec 100644\n--- a/conans/client/command.py\n+++ b/conans/client/command.py\n@@ -1711,19 +1711,23 @@ def run(self, *args):\n if exc.code != 0:\n logger.error(exc)\n self._user_io.out.error(\"Exiting with code: %d\" % exc.code)\n+ sys.stderr.write(\"Exiting with code: %d\" % exc.code)\n ret_code = exc.code\n except ConanInvalidConfiguration as exc:\n ret_code = ERROR_INVALID_CONFIGURATION\n self._user_io.out.error(exc)\n+ sys.stderr.write(str(exc))\n except ConanException as exc:\n ret_code = ERROR_GENERAL\n self._user_io.out.error(exc)\n+ sys.stderr.write(str(exc))\n except Exception as exc:\n import traceback\n print(traceback.format_exc())\n ret_code = ERROR_GENERAL\n msg = exception_message_safe(exc)\n self._user_io.out.error(msg)\n+ sys.stderr.write(msg)\n \n return ret_code\n \n"
}
|
[
{
"diff_hunk": "@@ -1711,19 +1711,23 @@ def run(self, *args):\n if exc.code != 0:\n logger.error(exc)\n self._user_io.out.error(\"Exiting with code: %d\" % exc.code)\n+ sys.stderr.write(\"Exiting with code: %d\" % exc.code)\n ret_code = exc.code\n except ConanInvalidConfiguration as exc:\n ret_code = ERROR_INVALID_CONFIGURATION\n self._user_io.out.error(exc)\n+ sys.stderr.write(str(exc))\n except ConanException as exc:\n ret_code = ERROR_GENERAL\n self._user_io.out.error(exc)\n+ sys.stderr.write(str(exc))",
"line": null,
"original_line": 1723,
"original_start_line": null,
"path": "conans/client/command.py",
"start_line": null,
"text": "@user2:\nThis duplicates errors in users output:\r\n```bash\r\n$ conan install zlib/1.2.11@user1/stable -s os=AIX\r\nERROR: Invalid setting 'AIX' is not a valid 'settings.os' value.\r\nPossible values are ['Android', 'Arduino', 'FreeBSD', 'Linux', 'Macos', 'SunOS', 'Windows', 'WindowsStore', 'iOS', 'tvOS', 'watchOS']\r\nRead \"http://docs.conan.io/en/latest/faq/troubleshooting.html#error-invalid-setting\"\r\nInvalid setting 'AIX' is not a valid 'settings.os' value.\r\nPossible values are ['Android', 'Arduino', 'FreeBSD', 'Linux', 'Macos', 'SunOS', 'Windows', 'WindowsStore', 'iOS', 'tvOS', 'watchOS']\r\nRead \"http://docs.conan.io/en/latest/faq/troubleshooting.html#error-invalid-setting\"\r\n```\n\n@author:\nSo then we have to do it correctly managing both streams in the conan ouput.\n\n@author:\nI'll update"
}
] |
bdb59e7acb91f120bcea0bf5d4b29e76729f6f49
|
diff --git a/conans/client/conan_api.py b/conans/client/conan_api.py
index 97fe4e9e562..ada65621e49 100644
--- a/conans/client/conan_api.py
+++ b/conans/client/conan_api.py
@@ -156,7 +156,7 @@ def factory(interactive=None):
"""Factory"""
# Respect color env setting or check tty if unset
color = colorama_initialize()
- out = ConanOutput(sys.stdout, color)
+ out = ConanOutput(sys.stdout, sys.stderr, color)
user_io = UserIO(out=out)
user_home = get_conan_user_home()
diff --git a/conans/client/graph/graph_binaries.py b/conans/client/graph/graph_binaries.py
index 7f830a38df2..08e06d7960b 100644
--- a/conans/client/graph/graph_binaries.py
+++ b/conans/client/graph/graph_binaries.py
@@ -55,7 +55,7 @@ def _evaluate_node(self, node, build_mode, update, evaluated_nodes, remotes):
return
if build_mode.forced(conanfile, ref):
- output.warn('Forced build from source')
+ output.info('Forced build from source')
node.binary = BINARY_BUILD
node.prev = None
return
diff --git a/conans/client/output.py b/conans/client/output.py
index 8136f0d4da6..994f6bb3531 100644
--- a/conans/client/output.py
+++ b/conans/client/output.py
@@ -61,18 +61,19 @@ class ConanOutput(object):
and auxiliary info, success, warn methods for convenience.
"""
- def __init__(self, stream, color=False):
+ def __init__(self, stream, stream_err=None, color=False):
self._stream = stream
+ self._stream_err = stream_err or stream
self._color = color
@property
def is_terminal(self):
return hasattr(self._stream, "isatty") and self._stream.isatty()
- def writeln(self, data, front=None, back=None):
- self.write(data, front, back, True)
+ def writeln(self, data, front=None, back=None, error=False):
+ self.write(data, front, back, newline=True, error=error)
- def write(self, data, front=None, back=None, newline=False):
+ def write(self, data, front=None, back=None, newline=False, error=False):
if six.PY2:
if isinstance(data, str):
data = decode_text(data) # Keep python 2 compatibility
@@ -86,7 +87,10 @@ def write(self, data, front=None, back=None, newline=False):
# Windows output locks produce IOErrors
for _ in range(3):
try:
- self._stream.write(data)
+ if error:
+ self._stream_err.write(data)
+ else:
+ self._stream.write(data)
break
except IOError:
import time
@@ -106,10 +110,10 @@ def success(self, data):
self.writeln(data, Color.BRIGHT_GREEN)
def warn(self, data):
- self.writeln("WARN: {}".format(data), Color.BRIGHT_YELLOW)
+ self.writeln("WARN: {}".format(data), Color.BRIGHT_YELLOW, error=True)
def error(self, data):
- self.writeln("ERROR: {}".format(data), Color.BRIGHT_RED)
+ self.writeln("ERROR: {}".format(data), Color.BRIGHT_RED, error=True)
def input_text(self, data):
self.write(data, Color.GREEN)
@@ -133,9 +137,12 @@ class ScopedOutput(ConanOutput):
def __init__(self, scope, output):
self.scope = scope
self._stream = output._stream
+ self._stream_err = output._stream_err
self._color = output._color
- def write(self, data, front=None, back=None, newline=False):
+ def write(self, data, front=None, back=None, newline=False, error=False):
assert self.scope != "virtual", "printing with scope==virtual"
- super(ScopedOutput, self).write("%s: " % self.scope, front, back, False)
- super(ScopedOutput, self).write("%s" % data, Color.BRIGHT_WHITE, back, newline)
+ super(ScopedOutput, self).write("%s: " % self.scope, front=front, back=back,
+ newline=False, error=error)
+ super(ScopedOutput, self).write("%s" % data, front=Color.BRIGHT_WHITE, back=back,
+ newline=newline, error=error)
diff --git a/conans/client/tools/files.py b/conans/client/tools/files.py
index 83ae13e5a06..97da6d971b3 100644
--- a/conans/client/tools/files.py
+++ b/conans/client/tools/files.py
@@ -176,7 +176,7 @@ def patch(base_path=None, patch_file=None, patch_string=None, strip=0, output=No
class PatchLogHandler(logging.Handler):
def __init__(self):
logging.Handler.__init__(self, logging.DEBUG)
- self.output = output or ConanOutput(sys.stdout, True)
+ self.output = output or ConanOutput(sys.stdout, sys.stderr, color=True)
self.patchname = patch_file if patch_file else "patch"
def emit(self, record):
diff --git a/conans/client/userio.py b/conans/client/userio.py
index 9bf24343d69..b1bac731702 100644
--- a/conans/client/userio.py
+++ b/conans/client/userio.py
@@ -19,7 +19,7 @@ def __init__(self, ins=sys.stdin, out=None):
"""
self._ins = ins
if not out:
- out = ConanOutput(sys.stdout)
+ out = ConanOutput(sys.stdout, sys.stderr)
self.out = out
self._interactive = True
diff --git a/conans/test/functional/command/build_test.py b/conans/test/functional/command/build_test.py
index 564a52d01cd..3deb1c4f218 100644
--- a/conans/test/functional/command/build_test.py
+++ b/conans/test/functional/command/build_test.py
@@ -323,7 +323,7 @@ class FooConan(ConanFile):
"""
client.save({CONANFILE: conanfile})
client.run("create --build foo/1.0@user/stable . user/stable")
- self.assertIn("foo/1.0@user/stable: WARN: Forced build from source", client.out)
+ self.assertIn("foo/1.0@user/stable: Forced build from source", client.out)
def build_multiple_full_reference_test(self):
client = TestClient()
@@ -347,8 +347,8 @@ class BarConan(ConanFile):
"""
client.save({CONANFILE: conanfile}, clean_first=True)
client.run("create --build foo/1.0@user/stable --build bar/1.0@user/testing . user/testing")
- self.assertIn("foo/1.0@user/stable: WARN: Forced build from source", client.out)
- self.assertIn("bar/1.0@user/testing: WARN: Forced build from source", client.out)
+ self.assertIn("foo/1.0@user/stable: Forced build from source", client.out)
+ self.assertIn("bar/1.0@user/testing: Forced build from source", client.out)
def debug_build_release_deps_test(self):
# https://github.com/conan-io/conan/issues/2899
diff --git a/conans/test/functional/command/create_test.py b/conans/test/functional/command/create_test.py
index 537dbf4f474..961d00e5293 100644
--- a/conans/test/functional/command/create_test.py
+++ b/conans/test/functional/command/create_test.py
@@ -58,13 +58,13 @@ def test(self):
'''
client.save({"conanfile.py": conanfile, "test_package/conanfile.py": test_package})
client.run("create . lasote/testing")
- self.assertIn("HelloBar/0.1@lasote/testing: WARN: Forced build from source",
+ self.assertIn("HelloBar/0.1@lasote/testing: Forced build from source",
client.user_io.out)
client.save({"conanfile.py": conanfile.replace("HelloBar", "Hello") +
" requires='HelloBar/0.1@lasote/testing'",
"test_package/conanfile.py": test_package.replace("HelloBar", "Hello")})
client.run("create . lasote/stable")
- self.assertNotIn("HelloBar/0.1@lasote/testing: WARN: Forced build from source",
+ self.assertNotIn("HelloBar/0.1@lasote/testing: Forced build from source",
client.user_io.out)
@parameterized.expand([(True, ), (False, )])
@@ -433,13 +433,13 @@ def test(self):
'''
client.save({"conanfile.py": conanfile, "test_package/conanfile.py": test_package})
client.run("create . lasote/testing")
- self.assertIn("HelloBar/0.1@lasote/testing: WARN: Forced build from source",
+ self.assertIn("HelloBar/0.1@lasote/testing: Forced build from source",
client.out)
client.save({"conanfile.py": conanfile.replace("HelloBar", "Hello") +
" requires='HelloBar/0.1@lasote/testing'",
"test_package/conanfile.py": test_package.replace("HelloBar", "Hello")})
client.run("create . lasote/stable")
- self.assertIn("HelloBar/0.1@lasote/testing: WARN: Forced build from source",
+ self.assertIn("HelloBar/0.1@lasote/testing: Forced build from source",
client.out)
def test_build_folder_handling_test(self):
diff --git a/conans/test/functional/command/install_test.py b/conans/test/functional/command/install_test.py
index a732157f96b..36c91ba619a 100644
--- a/conans/test/functional/command/install_test.py
+++ b/conans/test/functional/command/install_test.py
@@ -218,7 +218,7 @@ def install_combined_test(self):
self.client.run("install . %s --build=missing --build Hello1" % (self.settings))
self.assertIn("Hello0/0.1@lasote/stable: Already installed!",
self.client.user_io.out)
- self.assertIn("Hello1/0.1@lasote/stable: WARN: Forced build from source",
+ self.assertIn("Hello1/0.1@lasote/stable: Forced build from source",
self.client.user_io.out)
def install_transitive_cache_test(self):
diff --git a/conans/test/functional/command/test_package_test.py b/conans/test/functional/command/test_package_test.py
index c27d8e2ee37..4906b2f3497 100644
--- a/conans/test/functional/command/test_package_test.py
+++ b/conans/test/functional/command/test_package_test.py
@@ -46,9 +46,9 @@ def test(self):
client.run("test test_package Hello/0.1@lasote/stable")
self.assertNotIn("Exporting package recipe", client.out)
- self.assertNotIn("WARN: Forced build from source", client.out)
+ self.assertNotIn("Forced build from source", client.out)
self.assertNotIn("Package '%s' created" % NO_SETTINGS_PACKAGE_ID, client.out)
- self.assertNotIn("WARN: Forced build from source", client.out)
+ self.assertNotIn("Forced build from source", client.out)
self.assertIn("Hello/0.1@lasote/stable: Already installed!", client.out)
client.save({"test_package/conanfile.py": test_conanfile}, clean_first=True)
diff --git a/conans/test/functional/conan_api/two_conan_creates_test.py b/conans/test/functional/conan_api/two_conan_creates_test.py
index c4f578a922a..974769b3a72 100644
--- a/conans/test/functional/conan_api/two_conan_creates_test.py
+++ b/conans/test/functional/conan_api/two_conan_creates_test.py
@@ -52,6 +52,7 @@ def test_api_conanfile_loader_shouldnt_cache(self):
old_stdout = sys.stdout
result = StringIO()
sys.stdout = result
+ sys.stderr = sys.stdout
api, _, _ = ConanAPIV1.factory()
api._user_io.out = TestBufferConanOutput()
conanfile = dedent("""
diff --git a/conans/test/integration/install_outdated_test.py b/conans/test/integration/install_outdated_test.py
index 25ce4da7dec..264def0775b 100644
--- a/conans/test/integration/install_outdated_test.py
+++ b/conans/test/integration/install_outdated_test.py
@@ -126,7 +126,7 @@ def install_outdated_and_dep_test(self):
# binary is in the "same version" than local cached Hello0
new_client.run("install Hello1/0.1@lasote/stable --build outdated --build Hello1")
self.assertIn("Downloading conan_package.tgz", new_client.user_io.out)
- self.assertIn("Hello1/0.1@lasote/stable: WARN: Forced build from source",
+ self.assertIn("Hello1/0.1@lasote/stable: Forced build from source",
new_client.user_io.out)
def install_outdated_checking_updates_test(self):
diff --git a/conans/test/integration/only_source_test.py b/conans/test/integration/only_source_test.py
index 27ea09b98fe..17cea4d7465 100644
--- a/conans/test/integration/only_source_test.py
+++ b/conans/test/integration/only_source_test.py
@@ -65,12 +65,12 @@ def test(self):
# Now Hello2 should be built and not fail
client.run("create . lasote/stable")
self.assertNotIn("Can't find a 'Hello2/2.2@lasote/stable' package", client.user_io.out)
- self.assertIn('Hello2/2.2@lasote/stable: WARN: Forced build from source',
+ self.assertIn('Hello2/2.2@lasote/stable: Forced build from source',
client.user_io.out)
# Now package is generated but should be built again
client.run("create . lasote/stable")
- self.assertIn('Hello2/2.2@lasote/stable: WARN: Forced build from source',
+ self.assertIn('Hello2/2.2@lasote/stable: Forced build from source',
client.user_io.out)
def build_policies_update_test(self):
diff --git a/conans/tools.py b/conans/tools.py
index 5647ce86292..c07cc08e15c 100644
--- a/conans/tools.py
+++ b/conans/tools.py
@@ -52,7 +52,7 @@ def get_global_instances():
# Assign a default, will be overwritten in the factory of the ConanAPI
-set_global_instances(the_output=ConanOutput(sys.stdout, True), the_requester=requests)
+set_global_instances(the_output=ConanOutput(sys.stdout, sys.stderr, True), the_requester=requests)
"""
|
{
"difficulty": "low",
"estimated_review_effort": 2,
"problem_domain": "Bug Fixes"
}
|
|
conan-io__conan-4991@63a518a
|
conan-io/conan
|
Python
| 4,991
|
Skip symlink checks
|
Changelog: Feature: You can disable the broken-symlinks check when packaging using the `CONAN_SKIP_BROKEN_SYMLINKS_CHECK` env var or `config.skip_broken_symlinks_check=1`
Docs: https://github.com/conan-io/docs/pull/1272
Closes #4990
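For illustration, a minimal usage sketch (hedged: the recipe reference is hypothetical; the env var and the `general.skip_broken_symlinks_check` conf key are the ones exercised by the test in the patch below):

```python
import os
import subprocess

# Per-invocation opt-out: the packaging step reads this variable as a boolean
os.environ["CONAN_SKIP_BROKEN_SYMLINKS_CHECK"] = "True"
subprocess.run(["conan", "create", ".", "pkg/1.0@user/testing"], check=True)

# Persistent alternative: store the switch in the [general] section of conan.conf
subprocess.run(["conan", "config", "set", "general.skip_broken_symlinks_check=True"], check=True)
```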
|
2019-04-17T09:03:58Z
|
Opt-in skip broken symlinks check packaging
We need to package directories containing broken symlinks. We are trying to package a Yocto SDK and we couldn't fix or remove the broken links, because then the SDK stops working. So we would need something like `CONAN_SKIP_BROKEN_SYMLINKS_CHECK` to disable the check in `manifest.py` line 39.
|
[
{
"body": "We need to package directories with broken symlinks. We are trying to package yocto sdk and we didn't manage to fix them or remove them, because the sdk stops working. So we would need something like `CONAN_SKIP_BROKEN_SYMLINKS_CHECK` to remove the check in `manifest.py` line 39",
"number": 4990,
"title": "Opt-in skip broken symlinks check packaging"
}
] |
44dbdef0ffe44d520e1c401b74b9c01c787ff45a
|
{
"head_commit": "63a518a396a900f606b45f979526fba695f6da64",
"head_commit_message": "Skip symlink checks",
"patch_to_review": "diff --git a/conans/client/conf/__init__.py b/conans/client/conf/__init__.py\nindex 085ddf9d86f..94d1e56312c 100644\n--- a/conans/client/conf/__init__.py\n+++ b/conans/client/conf/__init__.py\n@@ -104,6 +104,7 @@\n # use_always_short_paths = False # environment CONAN_USE_ALWAYS_SHORT_PATHS\n # skip_vs_projects_upgrade = False # environment CONAN_SKIP_VS_PROJECTS_UPGRADE\n # non_interactive = False # environment CONAN_NON_INTERACTIVE\n+# skip_broken_symlinks_check = False # enviornment CONAN_SKIP_BROKEN_SYMLINKS_CHECK\n \n # conan_make_program = make # environment CONAN_MAKE_PROGRAM (overrides the make program used in AutoToolsBuildEnvironment.make)\n # conan_cmake_program = cmake # environment CONAN_CMAKE_PROGRAM (overrides the make program used in CMake.cmake_program)\n@@ -174,6 +175,7 @@ def env_vars(self):\n \"CONAN_PRINT_RUN_COMMANDS\": self._env_c(\"log.print_run_commands\", \"CONAN_PRINT_RUN_COMMANDS\", \"False\"),\n \"CONAN_COMPRESSION_LEVEL\": self._env_c(\"general.compression_level\", \"CONAN_COMPRESSION_LEVEL\", \"9\"),\n \"CONAN_NON_INTERACTIVE\": self._env_c(\"general.non_interactive\", \"CONAN_NON_INTERACTIVE\", \"False\"),\n+ \"CONAN_SKIP_BROKEN_SYMLINKS_CHECK\": self._env_c(\"general.skip_broken_symlinks_check\", \"CONAN_SKIP_BROKEN_SYMLINKS_CHECK\", \"False\"),\n \"CONAN_PYLINTRC\": self._env_c(\"general.pylintrc\", \"CONAN_PYLINTRC\", None),\n \"CONAN_CACHE_NO_LOCKS\": self._env_c(\"general.cache_no_locks\", \"CONAN_CACHE_NO_LOCKS\", \"False\"),\n \"CONAN_PYLINT_WERR\": self._env_c(\"general.pylint_werr\", \"CONAN_PYLINT_WERR\", None),\ndiff --git a/conans/model/manifest.py b/conans/model/manifest.py\nindex fded7c273fe..766472ff8b0 100644\n--- a/conans/model/manifest.py\n+++ b/conans/model/manifest.py\n@@ -5,6 +5,7 @@\n \n from conans.errors import ConanException\n from conans.paths import CONAN_MANIFEST, EXPORT_SOURCES_TGZ_NAME, EXPORT_TGZ_NAME, PACKAGE_TGZ_NAME\n+from conans.util.env_reader import get_env\n from conans.util.files import load, md5, md5sum, save, walk\n \n \n@@ -36,9 +37,10 @@ def gather_files(folder):\n if os.path.exists(abs_path):\n file_dict[rel_path] = abs_path\n else:\n- raise ConanException(\"The file is a broken symlink, verify that \"\n- \"you are packaging the needed destination files: '%s'\"\n- % abs_path)\n+ if not get_env(\"CONAN_SKIP_BROKEN_SYMLINKS_CHECK\", False):\n+ raise ConanException(\"The file is a broken symlink, verify that \"\n+ \"you are packaging the needed destination files: '%s'\"\n+ % abs_path)\n \n return file_dict, symlinks\n \ndiff --git a/conans/test/functional/configuration/skip_broken_symlinks.py b/conans/test/functional/configuration/skip_broken_symlinks.py\nnew file mode 100644\nindex 00000000000..84b6d76470c\n--- /dev/null\n+++ b/conans/test/functional/configuration/skip_broken_symlinks.py\n@@ -0,0 +1,57 @@\n+import os\n+import platform\n+import unittest\n+\n+from conans.model.ref import ConanFileReference\n+from conans.test.utils.tools import TestServer, TurboTestClient\n+\n+\n+class TestSkipBrokenSymlinks(unittest.TestCase):\n+\n+ @unittest.skipIf(platform.system() == \"Windows\", \"Better to test only in NIX the symlinks\")\n+ def test_package_broken_symlinks(self):\n+ server = TestServer()\n+ client = TurboTestClient(servers={\"default\": server})\n+ client2 = TurboTestClient(servers={\"default\": server})\n+\n+ conanfile = \"\"\"\n+import os\n+from conans import ConanFile, tools\n+\n+class HelloConan(ConanFile):\n+\n+ def package(self):\n+ # Link to file.txt and then remove it\n+ 
tools.save(os.path.join(self.package_folder, \"file.txt\"), \"contents\")\n+ os.symlink(os.path.join(self.package_folder, \"file.txt\"), \n+ os.path.join(self.package_folder, \"link.txt\")) \n+ os.unlink(os.path.join(self.package_folder, \"file.txt\"))\n+\n+\"\"\"\n+ ref = ConanFileReference.loads(\"lib/1.0@conan/stable\")\n+ # By default it is not allowed\n+ client.create(ref, conanfile=conanfile, assert_error=True)\n+ self.assertIn(\"The file is a broken symlink\", client.out)\n+\n+ # Until we deactivate the checks\n+ client.run(\"config set general.skip_broken_symlinks_check=True\")\n+ pref = client.create(ref, conanfile=conanfile)\n+ self.assertIn(\"Created package\", client.out)\n+ p_folder = client.cache.package_layout(pref.ref).package(pref)\n+\n+ # The link is there\n+ link_path = os.path.join(p_folder, \"link.txt\")\n+ self.assertTrue(os.path.islink(link_path))\n+\n+ # The link is broken\n+ target_path = os.readlink(link_path)\n+ self.assertFalse(os.path.exists(target_path))\n+\n+ # We can upload the package and reuse it\n+ client.upload_all(ref)\n+\n+ client2.run(\"install {}\".format(ref))\n+ self.assertIn(\"Downloaded package\", client2.out)\n+\n+\n+\ndiff --git a/conans/util/files.py b/conans/util/files.py\nindex 472fa854e70..79040d1b539 100644\n--- a/conans/util/files.py\n+++ b/conans/util/files.py\n@@ -73,7 +73,9 @@ def touch(fname, times=None):\n def touch_folder(folder):\n for dirname, _, filenames in walk(folder):\n for fname in filenames:\n- os.utime(os.path.join(dirname, fname), None)\n+ path = os.path.join(dirname, fname)\n+ if not os.path.islink(path):\n+ os.utime(path, None)\n \n \n def normalize(text):\n"
}
|
[
{
"diff_hunk": "@@ -73,7 +73,9 @@ def touch(fname, times=None):\n def touch_folder(folder):\n for dirname, _, filenames in walk(folder):\n for fname in filenames:\n- os.utime(os.path.join(dirname, fname), None)\n+ path = os.path.join(dirname, fname)\n+ if not os.path.islink(path):",
"line": null,
"original_line": 77,
"original_start_line": null,
"path": "conans/util/files.py",
"start_line": null,
"text": "@user1:\nMaybe better a try-except, as this will almost always be true, try-except might be faster."
}
] |
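As a quick sketch of the two variants discussed in the review comment above (the merged patch further down adopts the try/except form):

```python
import os

def touch_checking(path):
    # "Look before you leap": pays the islink() check for every file, although links are rare
    if not os.path.islink(path):
        os.utime(path, None)

def touch_eafp(path):
    # Suggested alternative: just attempt it and ignore failures (cheaper in the common case)
    try:
        os.utime(path, None)
    except OSError:
        pass
```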
fb20b3431a2de97eb09413af4e14d03d8761fc56
|
diff --git a/conans/client/conf/__init__.py b/conans/client/conf/__init__.py
index 085ddf9d86f..94d1e56312c 100644
--- a/conans/client/conf/__init__.py
+++ b/conans/client/conf/__init__.py
@@ -104,6 +104,7 @@
# use_always_short_paths = False # environment CONAN_USE_ALWAYS_SHORT_PATHS
# skip_vs_projects_upgrade = False # environment CONAN_SKIP_VS_PROJECTS_UPGRADE
# non_interactive = False # environment CONAN_NON_INTERACTIVE
+# skip_broken_symlinks_check = False # enviornment CONAN_SKIP_BROKEN_SYMLINKS_CHECK
# conan_make_program = make # environment CONAN_MAKE_PROGRAM (overrides the make program used in AutoToolsBuildEnvironment.make)
# conan_cmake_program = cmake # environment CONAN_CMAKE_PROGRAM (overrides the make program used in CMake.cmake_program)
@@ -174,6 +175,7 @@ def env_vars(self):
"CONAN_PRINT_RUN_COMMANDS": self._env_c("log.print_run_commands", "CONAN_PRINT_RUN_COMMANDS", "False"),
"CONAN_COMPRESSION_LEVEL": self._env_c("general.compression_level", "CONAN_COMPRESSION_LEVEL", "9"),
"CONAN_NON_INTERACTIVE": self._env_c("general.non_interactive", "CONAN_NON_INTERACTIVE", "False"),
+ "CONAN_SKIP_BROKEN_SYMLINKS_CHECK": self._env_c("general.skip_broken_symlinks_check", "CONAN_SKIP_BROKEN_SYMLINKS_CHECK", "False"),
"CONAN_PYLINTRC": self._env_c("general.pylintrc", "CONAN_PYLINTRC", None),
"CONAN_CACHE_NO_LOCKS": self._env_c("general.cache_no_locks", "CONAN_CACHE_NO_LOCKS", "False"),
"CONAN_PYLINT_WERR": self._env_c("general.pylint_werr", "CONAN_PYLINT_WERR", None),
diff --git a/conans/model/manifest.py b/conans/model/manifest.py
index fded7c273fe..766472ff8b0 100644
--- a/conans/model/manifest.py
+++ b/conans/model/manifest.py
@@ -5,6 +5,7 @@
from conans.errors import ConanException
from conans.paths import CONAN_MANIFEST, EXPORT_SOURCES_TGZ_NAME, EXPORT_TGZ_NAME, PACKAGE_TGZ_NAME
+from conans.util.env_reader import get_env
from conans.util.files import load, md5, md5sum, save, walk
@@ -36,9 +37,10 @@ def gather_files(folder):
if os.path.exists(abs_path):
file_dict[rel_path] = abs_path
else:
- raise ConanException("The file is a broken symlink, verify that "
- "you are packaging the needed destination files: '%s'"
- % abs_path)
+ if not get_env("CONAN_SKIP_BROKEN_SYMLINKS_CHECK", False):
+ raise ConanException("The file is a broken symlink, verify that "
+ "you are packaging the needed destination files: '%s'"
+ % abs_path)
return file_dict, symlinks
diff --git a/conans/test/functional/configuration/skip_broken_symlinks.py b/conans/test/functional/configuration/skip_broken_symlinks.py
new file mode 100644
index 00000000000..84b6d76470c
--- /dev/null
+++ b/conans/test/functional/configuration/skip_broken_symlinks.py
@@ -0,0 +1,57 @@
+import os
+import platform
+import unittest
+
+from conans.model.ref import ConanFileReference
+from conans.test.utils.tools import TestServer, TurboTestClient
+
+
+class TestSkipBrokenSymlinks(unittest.TestCase):
+
+ @unittest.skipIf(platform.system() == "Windows", "Better to test only in NIX the symlinks")
+ def test_package_broken_symlinks(self):
+ server = TestServer()
+ client = TurboTestClient(servers={"default": server})
+ client2 = TurboTestClient(servers={"default": server})
+
+ conanfile = """
+import os
+from conans import ConanFile, tools
+
+class HelloConan(ConanFile):
+
+ def package(self):
+ # Link to file.txt and then remove it
+ tools.save(os.path.join(self.package_folder, "file.txt"), "contents")
+ os.symlink(os.path.join(self.package_folder, "file.txt"),
+ os.path.join(self.package_folder, "link.txt"))
+ os.unlink(os.path.join(self.package_folder, "file.txt"))
+
+"""
+ ref = ConanFileReference.loads("lib/1.0@conan/stable")
+ # By default it is not allowed
+ client.create(ref, conanfile=conanfile, assert_error=True)
+ self.assertIn("The file is a broken symlink", client.out)
+
+ # Until we deactivate the checks
+ client.run("config set general.skip_broken_symlinks_check=True")
+ pref = client.create(ref, conanfile=conanfile)
+ self.assertIn("Created package", client.out)
+ p_folder = client.cache.package_layout(pref.ref).package(pref)
+
+ # The link is there
+ link_path = os.path.join(p_folder, "link.txt")
+ self.assertTrue(os.path.islink(link_path))
+
+ # The link is broken
+ target_path = os.readlink(link_path)
+ self.assertFalse(os.path.exists(target_path))
+
+ # We can upload the package and reuse it
+ client.upload_all(ref)
+
+ client2.run("install {}".format(ref))
+ self.assertIn("Downloaded package", client2.out)
+
+
+
diff --git a/conans/util/files.py b/conans/util/files.py
index 472fa854e70..dc3824f77af 100644
--- a/conans/util/files.py
+++ b/conans/util/files.py
@@ -73,7 +73,10 @@ def touch(fname, times=None):
def touch_folder(folder):
for dirname, _, filenames in walk(folder):
for fname in filenames:
- os.utime(os.path.join(dirname, fname), None)
+ try:
+ os.utime(os.path.join(dirname, fname), None)
+ except Exception:
+ pass
def normalize(text):
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
}
|
|
conan-io__conan-4767@d3b200f
|
conan-io/conan
|
Python
| 4,767
|
Allow to specify revision_mode for each recipe
|
Changelog: Feature: Allow to specify `revision_mode` for each recipe, values accepted are `scm` or `hash` (default)
Docs: https://github.com/conan-io/docs/pull/1126
- closes #4413: revision mode per recipe
- closes #4728: default revision mode is `hash`
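For illustration, a minimal hypothetical recipe opting into the SCM-based revision (the attribute name and accepted values come from the patch below):

```python
from conans import ConanFile


class MyLibConan(ConanFile):
    name = "mylib"
    version = "1.0"
    # "hash" (default): the revision is the summary hash of the exported recipe files
    # "scm": the revision is the commit of the repository the recipe is exported from
    revision_mode = "scm"
```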
|
2019-03-19T11:33:25Z
|
Revisions: Choose revision_mode
But then, if you use the `scm` feature for third-party source code, you would be using the hash of the sources as the recipe revision, and not taking into account the recipe changes.
Should we provide a mechanism to indicate that the sources are not from the recipe?
`scm = {"url": "auto", "revision": "foo", "recipe_repo": False}` Other?
Should the default revision mode be "hash"?
Maybe it is a better default than the scm, it doesn't depend on external tools, not "dirty/pristine" issues... the scm mode with revisions could be the opt-in.
|
Maybe an attribute to the conanfile:
```
revision_mode = "auto" # Will choose between scm (if auto)/hash automatically
revision_mode = "scm" # Will use the scm revision ALWAYS
revision_mode = "hash" # Will use the hash of the exported
```
|
[
{
"body": "But then, if you use the `scm` feature for third-party source code, you would be using the hash of the sources as the recipe revision, and not taking into account the recipe changes.\r\n\r\nShould we provide a mechanism to indicate that the sources are not from the recipe?\r\n\r\n`scm = {\"url\": \"auto\", \"revision\": \"foo\", \"recipe_repo\": False}` Other?",
"number": 4413,
"title": "Revisions: Choose revision_mode"
},
{
"body": "Maybe it is a better default than the scm, it doesn't depend on external tools, not \"dirty/pristine\" issues... the scm mode with revisions could be the opt-in.",
"number": 4728,
"title": "Should the default revision mode be \"hash\"?"
}
] |
41995bbe4bd1636cd91c983606d9b895671a71fe
|
{
"head_commit": "d3b200f7193d54ff99b22c68747867e08b96c210",
"head_commit_message": "default for revision_mode is 'hash', value 'auto' is no longer needed",
"patch_to_review": "diff --git a/conans/client/cmd/export.py b/conans/client/cmd/export.py\nindex 7bd0e0840f0..6c040e15c53 100644\n--- a/conans/client/cmd/export.py\n+++ b/conans/client/cmd/export.py\n@@ -18,12 +18,14 @@\n \n \n def export_alias(package_layout, target_ref, output, revisions_enabled):\n+ revision_mode = \"hash\"\n conanfile = \"\"\"\n from conans import ConanFile\n \n class AliasConanfile(ConanFile):\n alias = \"%s\"\n-\"\"\" % target_ref.full_repr()\n+ revision_mode = \"%s\"\n+\"\"\" % (target_ref.full_repr(), revision_mode)\n \n save(package_layout.conanfile(), conanfile)\n digest = FileTreeManifest.create(package_layout.export())\n@@ -31,7 +33,8 @@ class AliasConanfile(ConanFile):\n \n # Create the metadata for the alias\n _update_revision_in_metadata(package_layout=package_layout, revisions_enabled=revisions_enabled,\n- output=output, path=None, digest=digest)\n+ output=output, path=None, digest=digest,\n+ revision_mode=revision_mode)\n \n \n def check_casing_conflict(cache, ref):\n@@ -97,7 +100,8 @@ def cmd_export(package_layout, conanfile_path, conanfile, keep_source, revisions\n revisions_enabled=revisions_enabled,\n output=output,\n path=os.path.dirname(conanfile_path),\n- digest=digest)\n+ digest=digest,\n+ revision_mode=conanfile.revision_mode)\n \n # FIXME: Conan 2.0 Clear the registry entry if the recipe has changed\n source_folder = package_layout.source()\n@@ -239,19 +243,30 @@ def _detect_scm_revision(path):\n return repo_obj.get_revision(), repo_type, repo_obj.is_pristine()\n \n \n-def _update_revision_in_metadata(package_layout, revisions_enabled, output, path, digest):\n+def _update_revision_in_metadata(package_layout, revisions_enabled, output, path, digest,\n+ revision_mode):\n+ if revision_mode not in [\"scm\", \"hash\"]:\n+ raise ConanException(\"Revision mode should be one of 'hash' (default) or 'scm'\")\n \n- scm_revision_detected, repo_type, is_pristine = _detect_scm_revision(path)\n- revision = scm_revision_detected or digest.summary_hash\n- if revisions_enabled:\n- if scm_revision_detected:\n- output.info(\"Using {} commit as the recipe\"\n- \" revision: {} \".format(repo_type, revision))\n- if not is_pristine:\n- output.warn(\"Repo status is not pristine: there might be modified files\")\n- else:\n+ # Use the proper approach depending on 'revision_mode'\n+ if revision_mode == \"hash\":\n+ revision = digest.summary_hash\n+ if revisions_enabled:\n output.info(\"Using the exported files summary hash as the recipe\"\n \" revision: {} \".format(revision))\n+ else:\n+ rev_detected, repo_type, is_pristine = _detect_scm_revision(path)\n+ if not rev_detected:\n+ raise ConanException(\"Cannot detect revision using '{}' mode\"\n+ \" from repository at '{}'\".format(revision_mode, path))\n+\n+ revision = rev_detected\n+\n+ if revisions_enabled:\n+ output.info(\"Using %s commit as the recipe revision: %s\" % (repo_type, revision))\n+ if not is_pristine:\n+ output.warn(\"Repo status is not pristine: there might be modified files\")\n+\n with package_layout.update_metadata() as metadata:\n metadata.recipe.revision = revision\n \ndiff --git a/conans/client/conan_api.py b/conans/client/conan_api.py\nindex 58d3c01556b..2c8fd593de8 100644\n--- a/conans/client/conan_api.py\n+++ b/conans/client/conan_api.py\n@@ -300,8 +300,8 @@ def inspect(self, path, attributes, remote_name=None):\n if not attributes:\n attributes = ['name', 'version', 'url', 'homepage', 'license', 'author',\n 'description', 'topics', 'generators', 'exports', 'exports_sources',\n- 'short_paths', 
'apply_env', 'build_policy', 'settings', 'options',\n- 'default_options']\n+ 'short_paths', 'apply_env', 'build_policy', 'revision_mode', 'settings',\n+ 'options', 'default_options']\n for attribute in attributes:\n try:\n attr = getattr(conanfile, attribute)\ndiff --git a/conans/model/conan_file.py b/conans/model/conan_file.py\nindex 4df47fea3ce..848ea67b279 100644\n--- a/conans/model/conan_file.py\n+++ b/conans/model/conan_file.py\n@@ -97,6 +97,7 @@ class ConanFile(object):\n exports = None\n exports_sources = None\n generators = [\"txt\"]\n+ revision_mode = \"hash\"\n \n # Vars to control the build steps (build(), package())\n should_configure = True\ndiff --git a/conans/test/functional/command/export_test.py b/conans/test/functional/command/export_test.py\nindex 1e797d62fe0..ba6401c88aa 100644\n--- a/conans/test/functional/command/export_test.py\n+++ b/conans/test/functional/command/export_test.py\n@@ -1,5 +1,6 @@\n import os\n import stat\n+import textwrap\n import unittest\n \n from parameterized import parameterized\n@@ -9,6 +10,7 @@\n from conans.paths import CONANFILE, CONAN_MANIFEST\n from conans.test.utils.cpp_test_files import cpp_hello_conan_files\n from conans.test.utils.tools import TestClient\n+from conans.test.utils.tools import create_local_git_repo\n from conans.util.files import load, save\n \n \n@@ -419,3 +421,45 @@ def _create_packages_and_builds(self):\n save(file_path, content)\n file_list.append(file_path)\n return file_list\n+\n+\n+class ExportMetadataTest(unittest.TestCase):\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ \n+ class Lib(ConanFile):\n+ revision_mode = \"{revision_mode}\"\n+ \"\"\")\n+\n+ summary_hash = {\"hash\": \"bfe8b4a6a2a74966c0c4e0b34705004a\",\n+ \"auto\": \"9a5aa68b863d3f6d774b13af32abc6c1\"}\n+\n+ def test_revision_mode_hash(self):\n+ t = TestClient()\n+ t.save({'conanfile.py': self.conanfile.format(revision_mode=\"hash\")})\n+\n+ ref = ConanFileReference.loads(\"name/version@user/channel\")\n+ t.run(\"export . {}\".format(ref))\n+\n+ meta = t.cache.package_layout(ref, short_paths=False).load_metadata()\n+ self.assertEqual(meta.recipe.revision, self.summary_hash[\"hash\"])\n+\n+ def test_revision_mode_scm(self):\n+ path, rev = create_local_git_repo(\n+ files={'conanfile.py': self.conanfile.format(revision_mode=\"scm\")})\n+ t = TestClient(current_folder=path)\n+\n+ ref = ConanFileReference.loads(\"name/version@user/channel\")\n+ t.run(\"export . {}\".format(ref))\n+\n+ meta = t.cache.package_layout(ref, short_paths=False).load_metadata()\n+ self.assertEqual(meta.recipe.revision, rev)\n+\n+ def test_revision_mode_invalid(self):\n+ conanfile = self.conanfile.format(revision_mode=\"auto\")\n+\n+ t = TestClient()\n+ t.save({'conanfile.py': conanfile})\n+ ref = ConanFileReference.loads(\"name/version@user/channel\")\n+ t.run(\"export . {}\".format(ref), assert_error=True)\n+ self.assertIn(\"ERROR: Revision mode should be one of 'hash' (default) or 'scm'\", t.out)\ndiff --git a/conans/test/functional/command/inspect_test.py b/conans/test/functional/command/inspect_test.py\nindex ae41a84ceeb..3aecaa2b2e3 100644\n--- a/conans/test/functional/command/inspect_test.py\n+++ b/conans/test/functional/command/inspect_test.py\n@@ -83,6 +83,8 @@ class Pkg(ConanFile):\n self.assertIn(\"version: None\", client.out)\n client.run(\"inspect . -a=settings\")\n self.assertIn(\"settings: ('os', 'compiler', 'arch')\", client.out)\n+ client.run(\"inspect . 
-a=revision_mode\")\n+ self.assertIn(\"revision_mode: hash\", client.out)\n \n client.run(\"inspect . -a=unexisting_attr\", assert_error=True)\n self.assertIn(\"ERROR: 'Pkg' object has no attribute 'unexisting_attr'\", client.out)\n@@ -147,6 +149,7 @@ def build(self):\n short_paths: False\n apply_env: True\n build_policy: None\n+revision_mode: hash\n settings: None\n options: None\n default_options: None\n@@ -169,6 +172,7 @@ class Pkg(ConanFile):\n options = {\"foo\": [True, False], \"bar\": [True, False]}\n default_options = {\"foo\": True, \"bar\": False}\n _private = \"Nothing\"\n+ revision_mode = \"scm\"\n def build(self):\n pass\n \"\"\"\n@@ -188,6 +192,7 @@ def build(self):\n short_paths: False\n apply_env: True\n build_policy: None\n+revision_mode: scm\n settings: ('os', 'arch', 'build_type', 'compiler')\n options:\n bar: [True, False]\n@@ -229,6 +234,7 @@ class OpenSSLConan(ConanFile):\n short_paths: False\n apply_env: True\n build_policy: None\n+revision_mode: hash\n settings: ('os', 'compiler', 'arch', 'build_type')\n options:\n 386: [True, False]\n@@ -276,6 +282,7 @@ def build(self):\n short_paths: False\n apply_env: True\n build_policy: None\n+revision_mode: hash\n settings: ('os', 'arch', 'build_type', 'compiler')\n options:\n bar: [True, False]\n@@ -302,6 +309,7 @@ def build(self):\n short_paths: False\n apply_env: True\n build_policy: None\n+revision_mode: hash\n settings: ('os', 'arch', 'build_type', 'compiler')\n options:\n bar: [True, False]\n@@ -309,4 +317,4 @@ def build(self):\n default_options:\n bar: True\n foo: True\n-\"\"\", client.out)\n\\ No newline at end of file\n+\"\"\", client.out)\ndiff --git a/conans/test/unittests/client/cmd/export/test_update_revision_in_metadata.py b/conans/test/unittests/client/cmd/export/test_update_revision_in_metadata.py\nindex 8ebfeee7078..9e01e97fc87 100644\n--- a/conans/test/unittests/client/cmd/export/test_update_revision_in_metadata.py\n+++ b/conans/test/unittests/client/cmd/export/test_update_revision_in_metadata.py\n@@ -2,26 +2,64 @@\n \n \n import unittest\n+from collections import namedtuple\n \n+import six\n from mock import mock\n \n from conans.client.cmd.export import _update_revision_in_metadata\n from conans.model.ref import ConanFileReference\n from conans.paths.package_layouts.package_cache_layout import PackageCacheLayout\n from conans.test.utils.test_files import temp_folder\n+from conans.errors import ConanException\n from conans.test.utils.tools import TestBufferConanOutput\n \n \n class UpdateRevisionInMetadataTests(unittest.TestCase):\n \n- def test_warn_not_pristine(self):\n- output = TestBufferConanOutput()\n+ def setUp(self):\n+ ref = ConanFileReference.loads(\"lib/version@user/channel\")\n+ self.package_layout = PackageCacheLayout(base_folder=temp_folder(), ref=ref,\n+ short_paths=False, no_lock=True)\n+ self.output = TestBufferConanOutput()\n \n+ def test_scm_warn_not_pristine(self):\n with mock.patch(\"conans.client.cmd.export._detect_scm_revision\",\n return_value=(\"revision\", \"git\", False)):\n- path = digest = None\n- ref = ConanFileReference.loads(\"lib/version@user/channel\")\n- package_layout = PackageCacheLayout(base_folder=temp_folder(), ref=ref,\n- short_paths=False, no_lock=True)\n- _update_revision_in_metadata(package_layout, True, output, path, digest)\n- self.assertIn(\"WARN: Repo status is not pristine: there might be modified files\", output)\n+ path = None\n+ digest = namedtuple(\"Digest\", \"summary_hash\")\n+ _update_revision_in_metadata(self.package_layout, True, self.output,\n+ path, 
digest, \"scm\")\n+ self.assertIn(\"WARN: Repo status is not pristine: there might be modified files\",\n+ self.output)\n+\n+ def test_scm_behavior(self):\n+ revision_mode = \"scm\"\n+\n+ digest = None\n+ path = None\n+ with mock.patch(\"conans.client.cmd.export._detect_scm_revision\",\n+ return_value=(\"1234\", \"git\", True)):\n+ rev = _update_revision_in_metadata(self.package_layout, True, self.output,\n+ path, digest, revision_mode)\n+ self.assertEqual(rev, \"1234\")\n+ self.assertIn(\"Using git commit as the recipe revision\", self.output)\n+\n+ def test_hash_behavior(self):\n+ revision_mode = \"hash\"\n+\n+ digest = namedtuple(\"Digest\", \"summary_hash\")\n+ digest.summary_hash = \"1234\"\n+ path = None\n+ rev = _update_revision_in_metadata(self.package_layout, True, self.output,\n+ path, digest, revision_mode)\n+ self.assertEqual(rev, \"1234\")\n+ self.assertIn(\"Using the exported files summary hash as the recipe revision\", self.output)\n+\n+ def test_invalid_behavior(self):\n+ revision_mode = \"auto\"\n+ digest = path = None\n+\n+ with six.assertRaisesRegex(self, ConanException, \"Revision mode should be\"):\n+ _update_revision_in_metadata(self.package_layout, True, self.output,\n+ path, digest, revision_mode)\n"
}
|
[
{
"diff_hunk": "@@ -419,3 +421,45 @@ def _create_packages_and_builds(self):\n save(file_path, content)\n file_list.append(file_path)\n return file_list\n+\n+\n+class ExportMetadataTest(unittest.TestCase):\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ \n+ class Lib(ConanFile):\n+ revision_mode = \"{revision_mode}\"\n+ \"\"\")\n+\n+ summary_hash = {\"hash\": \"bfe8b4a6a2a74966c0c4e0b34705004a\",\n+ \"auto\": \"9a5aa68b863d3f6d774b13af32abc6c1\"}",
"line": null,
"original_line": 435,
"original_start_line": null,
"path": "conans/test/functional/command/export_test.py",
"start_line": null,
"text": "@user1:\n``auto`` is no longer valid"
}
] |
cfed5ec7e3e5ac325d53e1cc08ca4980e58a7f45
|
diff --git a/conans/client/cmd/export.py b/conans/client/cmd/export.py
index 7bd0e0840f0..6c040e15c53 100644
--- a/conans/client/cmd/export.py
+++ b/conans/client/cmd/export.py
@@ -18,12 +18,14 @@
def export_alias(package_layout, target_ref, output, revisions_enabled):
+ revision_mode = "hash"
conanfile = """
from conans import ConanFile
class AliasConanfile(ConanFile):
alias = "%s"
-""" % target_ref.full_repr()
+ revision_mode = "%s"
+""" % (target_ref.full_repr(), revision_mode)
save(package_layout.conanfile(), conanfile)
digest = FileTreeManifest.create(package_layout.export())
@@ -31,7 +33,8 @@ class AliasConanfile(ConanFile):
# Create the metadata for the alias
_update_revision_in_metadata(package_layout=package_layout, revisions_enabled=revisions_enabled,
- output=output, path=None, digest=digest)
+ output=output, path=None, digest=digest,
+ revision_mode=revision_mode)
def check_casing_conflict(cache, ref):
@@ -97,7 +100,8 @@ def cmd_export(package_layout, conanfile_path, conanfile, keep_source, revisions
revisions_enabled=revisions_enabled,
output=output,
path=os.path.dirname(conanfile_path),
- digest=digest)
+ digest=digest,
+ revision_mode=conanfile.revision_mode)
# FIXME: Conan 2.0 Clear the registry entry if the recipe has changed
source_folder = package_layout.source()
@@ -239,19 +243,30 @@ def _detect_scm_revision(path):
return repo_obj.get_revision(), repo_type, repo_obj.is_pristine()
-def _update_revision_in_metadata(package_layout, revisions_enabled, output, path, digest):
+def _update_revision_in_metadata(package_layout, revisions_enabled, output, path, digest,
+ revision_mode):
+ if revision_mode not in ["scm", "hash"]:
+ raise ConanException("Revision mode should be one of 'hash' (default) or 'scm'")
- scm_revision_detected, repo_type, is_pristine = _detect_scm_revision(path)
- revision = scm_revision_detected or digest.summary_hash
- if revisions_enabled:
- if scm_revision_detected:
- output.info("Using {} commit as the recipe"
- " revision: {} ".format(repo_type, revision))
- if not is_pristine:
- output.warn("Repo status is not pristine: there might be modified files")
- else:
+ # Use the proper approach depending on 'revision_mode'
+ if revision_mode == "hash":
+ revision = digest.summary_hash
+ if revisions_enabled:
output.info("Using the exported files summary hash as the recipe"
" revision: {} ".format(revision))
+ else:
+ rev_detected, repo_type, is_pristine = _detect_scm_revision(path)
+ if not rev_detected:
+ raise ConanException("Cannot detect revision using '{}' mode"
+ " from repository at '{}'".format(revision_mode, path))
+
+ revision = rev_detected
+
+ if revisions_enabled:
+ output.info("Using %s commit as the recipe revision: %s" % (repo_type, revision))
+ if not is_pristine:
+ output.warn("Repo status is not pristine: there might be modified files")
+
with package_layout.update_metadata() as metadata:
metadata.recipe.revision = revision
diff --git a/conans/client/conan_api.py b/conans/client/conan_api.py
index 58d3c01556b..2c8fd593de8 100644
--- a/conans/client/conan_api.py
+++ b/conans/client/conan_api.py
@@ -300,8 +300,8 @@ def inspect(self, path, attributes, remote_name=None):
if not attributes:
attributes = ['name', 'version', 'url', 'homepage', 'license', 'author',
'description', 'topics', 'generators', 'exports', 'exports_sources',
- 'short_paths', 'apply_env', 'build_policy', 'settings', 'options',
- 'default_options']
+ 'short_paths', 'apply_env', 'build_policy', 'revision_mode', 'settings',
+ 'options', 'default_options']
for attribute in attributes:
try:
attr = getattr(conanfile, attribute)
diff --git a/conans/model/conan_file.py b/conans/model/conan_file.py
index 4df47fea3ce..848ea67b279 100644
--- a/conans/model/conan_file.py
+++ b/conans/model/conan_file.py
@@ -97,6 +97,7 @@ class ConanFile(object):
exports = None
exports_sources = None
generators = ["txt"]
+ revision_mode = "hash"
# Vars to control the build steps (build(), package())
should_configure = True
diff --git a/conans/test/functional/command/export_test.py b/conans/test/functional/command/export_test.py
index 1e797d62fe0..14646bd28e7 100644
--- a/conans/test/functional/command/export_test.py
+++ b/conans/test/functional/command/export_test.py
@@ -1,5 +1,6 @@
import os
import stat
+import textwrap
import unittest
from parameterized import parameterized
@@ -9,6 +10,7 @@
from conans.paths import CONANFILE, CONAN_MANIFEST
from conans.test.utils.cpp_test_files import cpp_hello_conan_files
from conans.test.utils.tools import TestClient
+from conans.test.utils.tools import create_local_git_repo
from conans.util.files import load, save
@@ -419,3 +421,44 @@ def _create_packages_and_builds(self):
save(file_path, content)
file_list.append(file_path)
return file_list
+
+
+class ExportMetadataTest(unittest.TestCase):
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+
+ class Lib(ConanFile):
+ revision_mode = "{revision_mode}"
+ """)
+
+ summary_hash = "bfe8b4a6a2a74966c0c4e0b34705004a"
+
+ def test_revision_mode_hash(self):
+ t = TestClient()
+ t.save({'conanfile.py': self.conanfile.format(revision_mode="hash")})
+
+ ref = ConanFileReference.loads("name/version@user/channel")
+ t.run("export . {}".format(ref))
+
+ meta = t.cache.package_layout(ref, short_paths=False).load_metadata()
+ self.assertEqual(meta.recipe.revision, self.summary_hash)
+
+ def test_revision_mode_scm(self):
+ path, rev = create_local_git_repo(
+ files={'conanfile.py': self.conanfile.format(revision_mode="scm")})
+ t = TestClient(current_folder=path)
+
+ ref = ConanFileReference.loads("name/version@user/channel")
+ t.run("export . {}".format(ref))
+
+ meta = t.cache.package_layout(ref, short_paths=False).load_metadata()
+ self.assertEqual(meta.recipe.revision, rev)
+
+ def test_revision_mode_invalid(self):
+ conanfile = self.conanfile.format(revision_mode="auto")
+
+ t = TestClient()
+ t.save({'conanfile.py': conanfile})
+ ref = ConanFileReference.loads("name/version@user/channel")
+ t.run("export . {}".format(ref), assert_error=True)
+ self.assertIn("ERROR: Revision mode should be one of 'hash' (default) or 'scm'", t.out)
diff --git a/conans/test/functional/command/inspect_test.py b/conans/test/functional/command/inspect_test.py
index ae41a84ceeb..3aecaa2b2e3 100644
--- a/conans/test/functional/command/inspect_test.py
+++ b/conans/test/functional/command/inspect_test.py
@@ -83,6 +83,8 @@ class Pkg(ConanFile):
self.assertIn("version: None", client.out)
client.run("inspect . -a=settings")
self.assertIn("settings: ('os', 'compiler', 'arch')", client.out)
+ client.run("inspect . -a=revision_mode")
+ self.assertIn("revision_mode: hash", client.out)
client.run("inspect . -a=unexisting_attr", assert_error=True)
self.assertIn("ERROR: 'Pkg' object has no attribute 'unexisting_attr'", client.out)
@@ -147,6 +149,7 @@ def build(self):
short_paths: False
apply_env: True
build_policy: None
+revision_mode: hash
settings: None
options: None
default_options: None
@@ -169,6 +172,7 @@ class Pkg(ConanFile):
options = {"foo": [True, False], "bar": [True, False]}
default_options = {"foo": True, "bar": False}
_private = "Nothing"
+ revision_mode = "scm"
def build(self):
pass
"""
@@ -188,6 +192,7 @@ def build(self):
short_paths: False
apply_env: True
build_policy: None
+revision_mode: scm
settings: ('os', 'arch', 'build_type', 'compiler')
options:
bar: [True, False]
@@ -229,6 +234,7 @@ class OpenSSLConan(ConanFile):
short_paths: False
apply_env: True
build_policy: None
+revision_mode: hash
settings: ('os', 'compiler', 'arch', 'build_type')
options:
386: [True, False]
@@ -276,6 +282,7 @@ def build(self):
short_paths: False
apply_env: True
build_policy: None
+revision_mode: hash
settings: ('os', 'arch', 'build_type', 'compiler')
options:
bar: [True, False]
@@ -302,6 +309,7 @@ def build(self):
short_paths: False
apply_env: True
build_policy: None
+revision_mode: hash
settings: ('os', 'arch', 'build_type', 'compiler')
options:
bar: [True, False]
@@ -309,4 +317,4 @@ def build(self):
default_options:
bar: True
foo: True
-""", client.out)
\ No newline at end of file
+""", client.out)
diff --git a/conans/test/unittests/client/cmd/export/test_update_revision_in_metadata.py b/conans/test/unittests/client/cmd/export/test_update_revision_in_metadata.py
index 8ebfeee7078..9e01e97fc87 100644
--- a/conans/test/unittests/client/cmd/export/test_update_revision_in_metadata.py
+++ b/conans/test/unittests/client/cmd/export/test_update_revision_in_metadata.py
@@ -2,26 +2,64 @@
import unittest
+from collections import namedtuple
+import six
from mock import mock
from conans.client.cmd.export import _update_revision_in_metadata
from conans.model.ref import ConanFileReference
from conans.paths.package_layouts.package_cache_layout import PackageCacheLayout
from conans.test.utils.test_files import temp_folder
+from conans.errors import ConanException
from conans.test.utils.tools import TestBufferConanOutput
class UpdateRevisionInMetadataTests(unittest.TestCase):
- def test_warn_not_pristine(self):
- output = TestBufferConanOutput()
+ def setUp(self):
+ ref = ConanFileReference.loads("lib/version@user/channel")
+ self.package_layout = PackageCacheLayout(base_folder=temp_folder(), ref=ref,
+ short_paths=False, no_lock=True)
+ self.output = TestBufferConanOutput()
+ def test_scm_warn_not_pristine(self):
with mock.patch("conans.client.cmd.export._detect_scm_revision",
return_value=("revision", "git", False)):
- path = digest = None
- ref = ConanFileReference.loads("lib/version@user/channel")
- package_layout = PackageCacheLayout(base_folder=temp_folder(), ref=ref,
- short_paths=False, no_lock=True)
- _update_revision_in_metadata(package_layout, True, output, path, digest)
- self.assertIn("WARN: Repo status is not pristine: there might be modified files", output)
+ path = None
+ digest = namedtuple("Digest", "summary_hash")
+ _update_revision_in_metadata(self.package_layout, True, self.output,
+ path, digest, "scm")
+ self.assertIn("WARN: Repo status is not pristine: there might be modified files",
+ self.output)
+
+ def test_scm_behavior(self):
+ revision_mode = "scm"
+
+ digest = None
+ path = None
+ with mock.patch("conans.client.cmd.export._detect_scm_revision",
+ return_value=("1234", "git", True)):
+ rev = _update_revision_in_metadata(self.package_layout, True, self.output,
+ path, digest, revision_mode)
+ self.assertEqual(rev, "1234")
+ self.assertIn("Using git commit as the recipe revision", self.output)
+
+ def test_hash_behavior(self):
+ revision_mode = "hash"
+
+ digest = namedtuple("Digest", "summary_hash")
+ digest.summary_hash = "1234"
+ path = None
+ rev = _update_revision_in_metadata(self.package_layout, True, self.output,
+ path, digest, revision_mode)
+ self.assertEqual(rev, "1234")
+ self.assertIn("Using the exported files summary hash as the recipe revision", self.output)
+
+ def test_invalid_behavior(self):
+ revision_mode = "auto"
+ digest = path = None
+
+ with six.assertRaisesRegex(self, ConanException, "Revision mode should be"):
+ _update_revision_in_metadata(self.package_layout, True, self.output,
+ path, digest, revision_mode)
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
}
|
conan-io__conan-4766@b1d42b7
|
conan-io/conan
|
Python
| 4,766
|
raise error instead of skipping files if short_paths
|
Changelog: Bugfix: Raise an error if source files cannot be correctly copied to build folder because of long paths in Windows.
Docs: omit
Close #4484
|
2019-03-19T11:19:29Z
|
Conan should throw an exception instead of a warning when it can't copy a source file because the path is too long on Windows
I just spent a lot of time debugging an issue happening only on Windows. Basically, the build would fail to link when building with Conan, but it worked when I built it manually. I eventually figured out that Conan was skipping files while copying the sources to the build directory, warning me like this:
WARN: Filename too long, file excluded:
I believe this should throw an error instead of a warning. I can't see a case where omitting source files would lead to good results. Enabling short_paths fixed the issue in the end. But I think conan should have stopped executing instead of moving on knowing full well it skipped some source files.
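For context, the workaround the reporter ended up using is the opt-in `short_paths` recipe attribute; a minimal hypothetical recipe enabling it:

```python
from conans import ConanFile


class DeepTreeConan(ConanFile):
    name = "deeptree"
    version = "1.0"
    # Redirects cache folders to short paths on Windows, avoiding the ~260 character MAX_PATH limit
    short_paths = True
```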
|
Interesting thing: this behavior was implemented this way a long, long time ago, when the offending package was failing to unzip some .html files of its documentation. It was totally OK to ignore them.
I think it could be changed, but for that it should be considered a bug, because otherwise it would be a breaking change, and we cannot break existing users (until 2.0). Let's discuss it; thanks very much for raising this issue.
Yes, this is totally a bug; it doesn't make sense that an unzip operation silently discards files.
|
[
{
"body": "I just spent a lot of time debugging an issue happening only on Windows. Basically, the build would fail to link when building with conan. But it worked when I built it manually. I eventually figured out that conan was skipping files while copying the sources to the build directory warning me like this:\r\n\r\nWARN: Filename too long, file excluded:\r\n\r\nI believe this should throw an error instead of a warning. I can't see a case where omitting source files would lead to good results. Enabling short_paths fixed the issue in the end. But I think conan should have stopped executing instead of moving on knowing full well it skipped some source files.",
"number": 4484,
"title": "Conan should throw an exception instead of a warning when it can't copy a source file because the path is too long on Windows"
}
] |
41995bbe4bd1636cd91c983606d9b895671a71fe
|
{
"head_commit": "b1d42b79c330098a8f3c6788a40a3106b9f7ab65",
"head_commit_message": "raise error instead of skipping files if short_paths",
"patch_to_review": "diff --git a/conans/client/installer.py b/conans/client/installer.py\nindex 5a81447dfbd..8c1b2511c23 100644\n--- a/conans/client/installer.py\n+++ b/conans/client/installer.py\n@@ -1,5 +1,4 @@\n import os\n-import platform\n import shutil\n import time\n \n@@ -90,13 +89,12 @@ def prepare_build(self):\n mkdir(self.build_folder)\n self._conan_file.source_folder = self.source_folder\n else:\n- if platform.system() == \"Windows\" and os.getenv(\"CONAN_USER_HOME_SHORT\") != \"None\":\n- from conans.util.windows import ignore_long_path_files\n- ignore = ignore_long_path_files(self.source_folder, self.build_folder, self._out)\n- else:\n- ignore = None\n-\n- shutil.copytree(self.source_folder, self.build_folder, symlinks=True, ignore=ignore)\n+ try:\n+ shutil.copytree(self.source_folder, self.build_folder, symlinks=True)\n+ except Exception as e:\n+ msg = str(e)\n+ msg += \"\\nConsider using short_paths=True if paths too long\" if \"206\" in msg else \"\"\n+ raise ConanException(\"%s\\nError copying sources to build folder\" % msg)\n logger.debug(\"BUILD: Copied to %s\", self.build_folder)\n logger.debug(\"BUILD: Files copied %s\", \",\".join(os.listdir(self.build_folder)))\n self._conan_file.source_folder = self.build_folder\ndiff --git a/conans/client/tools/files.py b/conans/client/tools/files.py\nindex bfb9680013b..40495f11484 100644\n--- a/conans/client/tools/files.py\n+++ b/conans/client/tools/files.py\n@@ -262,7 +262,8 @@ def normalized_text(text):\n normalized_search = normalized_text(search)\n index = normalized_content.find(normalized_search)\n if index == -1:\n- return _manage_text_not_found(search, file_path, strict, \"replace_path_in_file\", output=output)\n+ return _manage_text_not_found(search, file_path, strict, \"replace_path_in_file\",\n+ output=output)\n \n while index != -1:\n content = content[:index] + replace + content[index + len(search):]\n@@ -334,7 +335,8 @@ def verify(filepath):\n return None\n \n def _get_possible_filenames(filename):\n- extensions_win = os.getenv(\"PATHEXT\", \".COM;.EXE;.BAT;.CMD\").split(\";\") if \".\" not in filename else []\n+ extensions_win = (os.getenv(\"PATHEXT\", \".COM;.EXE;.BAT;.CMD\").split(\";\")\n+ if \".\" not in filename else [])\n extensions = [\".sh\"] if platform.system() != \"Windows\" else extensions_win\n extensions.insert(1, \"\") # No extension\n return [\"%s%s\" % (filename, entry.lower()) for entry in extensions]\ndiff --git a/conans/test/functional/old/path_limit_test.py b/conans/test/functional/old/path_limit_test.py\nindex 51b0184a154..7449d640a68 100644\n--- a/conans/test/functional/old/path_limit_test.py\n+++ b/conans/test/functional/old/path_limit_test.py\n@@ -8,6 +8,7 @@\n from conans.test import CONAN_TEST_FOLDER\n from conans.test.utils.tools import NO_SETTINGS_PACKAGE_ID, TestClient, TestServer\n from conans.util.files import load\n+from textwrap import dedent\n \n base = '''\n from conans import ConanFile\n@@ -39,6 +40,28 @@ def package(self):\n \n \n class PathLengthLimitTest(unittest.TestCase):\n+ @unittest.skipUnless(platform.system() == \"Windows\", \"requires Win\")\n+ def failure_copy_test(self):\n+ client = TestClient()\n+ conanfile = dedent(\"\"\"\n+ from conans import ConanFile\n+ from conans.tools import save\n+ import os\n+\n+ class ConanLib(ConanFile):\n+ def source(self):\n+ cwd = os.getcwd()\n+ size = len(os.getcwd())\n+ sub = \"a/\"*((240-size)/2)\n+ path = os.path.join(cwd, sub, \"file.txt\")\n+ path = os.path.normpath(path)\n+ save(path, \"contents\")\n+\n+ \"\"\")\n+ 
client.save({\"conanfile.py\": conanfile})\n+ client.run(\"create . pkg/1.0@user/testing\", assert_error=True)\n+ self.assertIn(\"Consider using short_paths=True if paths too long\", client.out)\n+ self.assertIn(\"Error copying sources to build folder\", client.out)\n \n def remove_test(self):\n short_home = tempfile.mkdtemp(dir=CONAN_TEST_FOLDER)\ndiff --git a/conans/util/windows.py b/conans/util/windows.py\nindex 1b95ffe8fd1..76a7525ab57 100644\n--- a/conans/util/windows.py\n+++ b/conans/util/windows.py\n@@ -89,23 +89,6 @@ def path_shortener(path, short_paths):\n return redirect\n \n \n-def ignore_long_path_files(src_folder, build_folder, output):\n- def _filter(src, files):\n- filtered_files = []\n- for the_file in files:\n- source_path = os.path.join(src, the_file)\n- # Without storage path, just relative\n- rel_path = os.path.relpath(source_path, src_folder)\n- dest_path = os.path.normpath(os.path.join(build_folder, rel_path))\n- # it is NOT that \"/\" is counted as \"\\\\\" so it counts double\n- # seems a bug in python, overflows paths near the limit of 260,\n- if len(dest_path) >= 249:\n- filtered_files.append(the_file)\n- output.warn(\"Filename too long, file excluded: %s\" % dest_path)\n- return filtered_files\n- return _filter\n-\n-\n def rm_conandir(path):\n \"\"\"removal of a directory that might contain a link to a short path\"\"\"\n link = os.path.join(path, CONAN_LINK)\n"
}
|
[
{
"diff_hunk": "@@ -90,13 +89,12 @@ def prepare_build(self):\n mkdir(self.build_folder)\n self._conan_file.source_folder = self.source_folder\n else:\n- if platform.system() == \"Windows\" and os.getenv(\"CONAN_USER_HOME_SHORT\") != \"None\":\n- from conans.util.windows import ignore_long_path_files\n- ignore = ignore_long_path_files(self.source_folder, self.build_folder, self._out)\n- else:\n- ignore = None\n-\n- shutil.copytree(self.source_folder, self.build_folder, symlinks=True, ignore=ignore)\n+ try:\n+ shutil.copytree(self.source_folder, self.build_folder, symlinks=True)\n+ except Exception as e:\n+ msg = str(e)\n+ msg += \"\\nConsider using short_paths=True if paths too long\" if \"206\" in msg else \"\"",
"line": null,
"original_line": 96,
"original_start_line": null,
"path": "conans/client/installer.py",
"start_line": null,
"text": "@user1:\nThe `206` looks like some kind of magic number, I would add a comment with a link or maybe check for `WindowsError: [Error 206]` to make it clearer (maybe the full error message).\r\n\r\nAlso, can the message be assertive? if the Windows error is `WindowsError: [Error 206] The filename or extension is too long` probably we can tell the user without doubts that **the error is because of long paths** and recommend the `short_paths` solution.\n\n@author:\nFun fact it is not a ``WindowsError``. It is a ``shutil.Error`` wrapping a system error, with no further metadata, because it comes from the system. The only thing that seems to be useful is the system error number, cause I guess the error string might be localized."
}
] |
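A standalone sketch of the error handling discussed in the review comment above (the merged patch below raises ConanException with essentially the same string-based check, since shutil wraps the OS error and loses the numeric code):

```python
import shutil

def copy_sources(src, dst):
    try:
        shutil.copytree(src, dst, symlinks=True)
    except shutil.Error as exc:
        # shutil.Error only carries per-file error strings, so Windows error 206
        # ("The filename or extension is too long") can only be matched textually
        if "206" in str(exc):
            raise RuntimeError("%s\nUse short_paths=True if paths are too long" % exc)
        raise
```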
11c4742e78e38009d59e645b38a2ca8c7d092285
|
diff --git a/conans/client/installer.py b/conans/client/installer.py
index 5a81447dfbd..1e22f39856c 100644
--- a/conans/client/installer.py
+++ b/conans/client/installer.py
@@ -1,5 +1,4 @@
import os
-import platform
import shutil
import time
@@ -90,13 +89,13 @@ def prepare_build(self):
mkdir(self.build_folder)
self._conan_file.source_folder = self.source_folder
else:
- if platform.system() == "Windows" and os.getenv("CONAN_USER_HOME_SHORT") != "None":
- from conans.util.windows import ignore_long_path_files
- ignore = ignore_long_path_files(self.source_folder, self.build_folder, self._out)
- else:
- ignore = None
-
- shutil.copytree(self.source_folder, self.build_folder, symlinks=True, ignore=ignore)
+ try:
+ shutil.copytree(self.source_folder, self.build_folder, symlinks=True)
+ except Exception as e:
+ msg = str(e)
+ if "206" in msg: # System error shutil.Error 206: Filename or extension too long
+ msg += "\nUse short_paths=True if paths too long"
+ raise ConanException("%s\nError copying sources to build folder" % msg)
logger.debug("BUILD: Copied to %s", self.build_folder)
logger.debug("BUILD: Files copied %s", ",".join(os.listdir(self.build_folder)))
self._conan_file.source_folder = self.build_folder
diff --git a/conans/client/tools/files.py b/conans/client/tools/files.py
index bfb9680013b..40495f11484 100644
--- a/conans/client/tools/files.py
+++ b/conans/client/tools/files.py
@@ -262,7 +262,8 @@ def normalized_text(text):
normalized_search = normalized_text(search)
index = normalized_content.find(normalized_search)
if index == -1:
- return _manage_text_not_found(search, file_path, strict, "replace_path_in_file", output=output)
+ return _manage_text_not_found(search, file_path, strict, "replace_path_in_file",
+ output=output)
while index != -1:
content = content[:index] + replace + content[index + len(search):]
@@ -334,7 +335,8 @@ def verify(filepath):
return None
def _get_possible_filenames(filename):
- extensions_win = os.getenv("PATHEXT", ".COM;.EXE;.BAT;.CMD").split(";") if "." not in filename else []
+ extensions_win = (os.getenv("PATHEXT", ".COM;.EXE;.BAT;.CMD").split(";")
+ if "." not in filename else [])
extensions = [".sh"] if platform.system() != "Windows" else extensions_win
extensions.insert(1, "") # No extension
return ["%s%s" % (filename, entry.lower()) for entry in extensions]
diff --git a/conans/test/functional/old/path_limit_test.py b/conans/test/functional/old/path_limit_test.py
index 51b0184a154..288fed7dffd 100644
--- a/conans/test/functional/old/path_limit_test.py
+++ b/conans/test/functional/old/path_limit_test.py
@@ -8,6 +8,7 @@
from conans.test import CONAN_TEST_FOLDER
from conans.test.utils.tools import NO_SETTINGS_PACKAGE_ID, TestClient, TestServer
from conans.util.files import load
+from textwrap import dedent
base = '''
from conans import ConanFile
@@ -39,6 +40,28 @@ def package(self):
class PathLengthLimitTest(unittest.TestCase):
+ @unittest.skipUnless(platform.system() == "Windows", "requires Win")
+ def failure_copy_test(self):
+ client = TestClient()
+ conanfile = dedent("""
+ from conans import ConanFile
+ from conans.tools import save
+ import os
+
+ class ConanLib(ConanFile):
+ def source(self):
+ cwd = os.getcwd()
+ size = len(os.getcwd())
+ sub = "a/"*(int((240-size)/2))
+ path = os.path.join(cwd, sub, "file.txt")
+ path = os.path.normpath(path)
+ save(path, "contents")
+
+ """)
+ client.save({"conanfile.py": conanfile})
+ client.run("create . pkg/1.0@user/testing", assert_error=True)
+ self.assertIn("Use short_paths=True if paths too long", client.out)
+ self.assertIn("Error copying sources to build folder", client.out)
def remove_test(self):
short_home = tempfile.mkdtemp(dir=CONAN_TEST_FOLDER)
diff --git a/conans/util/windows.py b/conans/util/windows.py
index 1b95ffe8fd1..76a7525ab57 100644
--- a/conans/util/windows.py
+++ b/conans/util/windows.py
@@ -89,23 +89,6 @@ def path_shortener(path, short_paths):
return redirect
-def ignore_long_path_files(src_folder, build_folder, output):
- def _filter(src, files):
- filtered_files = []
- for the_file in files:
- source_path = os.path.join(src, the_file)
- # Without storage path, just relative
- rel_path = os.path.relpath(source_path, src_folder)
- dest_path = os.path.normpath(os.path.join(build_folder, rel_path))
- # it is NOT that "/" is counted as "\\" so it counts double
- # seems a bug in python, overflows paths near the limit of 260,
- if len(dest_path) >= 249:
- filtered_files.append(the_file)
- output.warn("Filename too long, file excluded: %s" % dest_path)
- return filtered_files
- return _filter
-
-
def rm_conandir(path):
"""removal of a directory that might contain a link to a short path"""
link = os.path.join(path, CONAN_LINK)
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-4917@28cf036
|
conan-io/conan
|
Python
| 4,917
|
Add cppstd as a subsetting of compiler
|
Changelog: Feature: Add `compiler.cppstd` setting (mark `cppstd` as deprecated)
Docs: https://github.com/conan-io/docs/pull/1266
This PR includes changes from PR #4986 and #4942 (really short ones)
----
Some hints about the new/old behavior:
- It keeps the previous behavior: all the use cases (tests) are summarized in https://github.com/conan-io/conan/pull/4986.
- The default value (or `None`) for `compiler.cppstd` generates the same ID as was generated for `cppstd`.
- If the user specifies `cppstd` and `compiler.cppstd`, Conan will raise. Yes, even if those have the same values.
- For non-default values, `cppstd` and `compiler.cppstd` generate different package IDs.
- If no value is provided for `compiler.cppstd`, the default value will be assigned to it.
Summarizing:
```
package_id("cppstd=default") == package_id("cppstd=None")
package_id("cppstd=default") == package_id("compiler.cppstd=default")
package_id("compiler.cppstd=default") == package_id("compiler.cppstd=None")
package_id("cppstd=20") != package_id("compiler.cppstd=20")
```
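As a rough illustration of the default-matching rule behind these equalities (a minimal sketch, not the actual logic in `ConanInfo.default_std_matching` in `conans/model/info.py`): the effective standard is dropped from the ID when it equals the compiler's default, and non-default values keep the name of the setting they came from.

```python
# Minimal sketch, illustrative only: collapse the default standard to None for
# the package ID, keep non-default values under their original setting name.
def effective_cppstd_for_id(cppstd, compiler_cppstd, default):
    value = compiler_cppstd or cppstd
    if value is None or value == default:
        return None  # package_id(default) == package_id(None)
    # Non-default values keep the setting they came from, so 'cppstd=20'
    # and 'compiler.cppstd=20' end up with different package IDs.
    return ("compiler.cppstd" if compiler_cppstd else "cppstd", value)

assert effective_cppstd_for_id(None, "gnu14", default="gnu14") is None
assert effective_cppstd_for_id("gnu14", None, default="gnu14") is None
assert effective_cppstd_for_id("20", None, "gnu14") != effective_cppstd_for_id(None, "20", "gnu14")
```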
----
closes #4873
@TAGS: slow
|
2019-04-05T10:17:12Z
|
cppstd as a subsetting
We considered it better to have cppstd as a subsetting of every compiler:
- If you don't specify `-s compiler.cppstd=XXX` it won't change anything, because it will come with a None default value (see the sketch after this list).
- We won't break anything: the global setting will be kept, but deprecated (with a comment in settings.yml and in the docs).
- So any recipe could receive the subsetting and compile with a different version of the language automatically.
- The reason for making it a first-level setting was to avoid generating new binary IDs for C libraries that don't remove the subsetting. We consider it more important for the setting to be "injectable" into any recipe.
- Conan will fail if both are specified.
- Conan will internally try to manage the value of the new one, copying it from the old one if possible (to ease the code and the future deprecation of the global setting).
- Each compiler will have its own values, which makes a lot of sense.
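A rough sketch of the preprocessing behavior described in this list, loosely following the `_fill_compiler_cppstd`/`_check_cppstd` logic added to `conans/client/settings_preprocessor.py` in this PR (the plain dict and the `cppstd_default` callable are simplifications, not Conan's real Settings API):

```python
# Simplified illustration only; Conan operates on a Settings object, not a dict.
def preprocess_cppstd(settings, cppstd_default):
    cppstd = settings.get("cppstd")
    compiler_cppstd = settings.get("compiler.cppstd")
    if cppstd and compiler_cppstd:
        # Conan fails if both settings are given, even with the same value
        raise ValueError("Do not use settings 'compiler.cppstd' together with 'cppstd'")
    if not cppstd and not compiler_cppstd:
        # Assign the compiler's default standard to the new subsetting
        settings["compiler.cppstd"] = cppstd_default(settings["compiler"],
                                                     settings["compiler.version"])
    return settings

s = preprocess_cppstd({"compiler": "gcc", "compiler.version": "7"},
                      cppstd_default=lambda compiler, version: "gnu14")
assert s["compiler.cppstd"] == "gnu14"
```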
|
Q: Although each compiler will have its own flags to set the standard, I wonder whether we should add those different names in our `settings.yml`, or whether it is better to set `None, 98, 11, 14, 17, 20` for all the compilers and do the translation inside the Conan code.
We don't have to introduce the flag names in the settings, only the values that make sense for each compiler. For instance, the "gnu" ones don't make sense for VS. So yes, the translation will still happen. I would use the same names as today, but removing the nonsensical ones, unless you find that other values make more sense for VS.
I was thinking about the way CMake handles it through the [`CXX_STANDARD` variable](https://cmake.org/cmake/help/v3.14/prop_tgt/CXX_STANDARD.html): it uses the same property in the `CMakeLists.txt` for all the generators. CMake then has a separate property, `CXX_EXTENSIONS`, to handle compiler-specific extensions.
Let's list all the valid values for each generator and see what it looks like.
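For illustration only, a tiny sketch of what such a value-to-flag translation could look like with shared value names (this is an assumed simplification, not Conan's actual `cppstd_flag()` implementation in `conans/client/build/cppstd_flags.py`):

```python
# Illustrative mapping from shared cppstd value names to per-compiler flags.
def cppstd_to_flag(compiler, value):
    value = str(value)
    if compiler == "Visual Studio":
        # MSVC has no "gnu" dialects, only plain standard levels.
        return None if value.startswith("gnu") else "/std:c++%s" % value
    if compiler in ("gcc", "clang", "apple-clang"):
        if value.startswith("gnu"):
            return "-std=gnu++%s" % value[3:]
        return "-std=c++%s" % value
    return None

assert cppstd_to_flag("Visual Studio", 17) == "/std:c++17"
assert cppstd_to_flag("gcc", "gnu14") == "-std=gnu++14"
assert cppstd_to_flag("Visual Studio", "gnu14") is None
```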
|
[
{
"body": "We considered better to have the cppstd as a subsetting of every compiler:\r\n\r\n- If you don't specify the -s compiler.cppstd=XXX it won't change anything, because will come with a None default value.\r\n- We won't break anything, the global setting will be kept, but deprecated (comment at settings.yml and docs).\r\n- So any recipe could receive the subsetting and compile with a different version of the language automatically.\r\n- The reasons to do a first level setting was to not generate new binary IDS for C libraries that doesn't remove the subsetting. We consider more important to be \"injectable\" at any recipe.\r\n- Conan will fail if both are specified.\r\n- Conan will try internally to manage the value of the new one, copying from the old one if possible (to ease the code and future deprecation of the global setting)\r\n- Each compiler will have their own values, it has a lot of sense.\r\n",
"number": 4873,
"title": "cppstd as a subsetting"
}
] |
5f204b37644035d0e5c92e8503fe7813728530bb
|
{
"head_commit": "28cf036d6994e8afb04e29e734f1d064bd00c3f4",
"head_commit_message": "remove comment",
"patch_to_review": "diff --git a/conans/client/conan_api.py b/conans/client/conan_api.py\nindex abb9fbbe46d..0ea1b833ae7 100644\n--- a/conans/client/conan_api.py\n+++ b/conans/client/conan_api.py\n@@ -1173,7 +1173,7 @@ def get_graph_info(profile_names, settings, options, env, cwd, install_folder, c\n % install_folder)\n graph_info = None\n \n- if profile_names or settings or options or profile_names or env or not graph_info:\n+ if profile_names or settings or options or env or not graph_info:\n if graph_info:\n # FIXME: Convert to Exception in Conan 2.0\n output.warn(\"Settings, options, env or profile specified. \"\ndiff --git a/conans/client/conf/__init__.py b/conans/client/conf/__init__.py\nindex 50e200b16ff..ae6909d52e3 100644\n--- a/conans/client/conf/__init__.py\n+++ b/conans/client/conf/__init__.py\n@@ -62,6 +62,7 @@\n libcxx: [libstdc++, libstdc++11]\n threads: [None, posix, win32] # Windows MinGW\n exception: [None, dwarf2, sjlj, seh] # Windows MinGW\n+ cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]\n Visual Studio:\n runtime: [MD, MT, MTd, MDd]\n version: [\"8\", \"9\", \"10\", \"11\", \"12\", \"14\", \"15\", \"16\"]\n@@ -69,17 +70,19 @@\n v140, v140_xp, v140_clang_c2, LLVM-vs2012, LLVM-vs2012_xp,\n LLVM-vs2013, LLVM-vs2013_xp, LLVM-vs2014, LLVM-vs2014_xp,\n LLVM-vs2017, LLVM-vs2017_xp, v141, v141_xp, v141_clang_c2, v142]\n+ cppstd: [None, 14, 17, 20]\n clang:\n version: [\"3.3\", \"3.4\", \"3.5\", \"3.6\", \"3.7\", \"3.8\", \"3.9\", \"4.0\",\n- \"5.0\", \"6.0\", \"7.0\",\n- \"8\"]\n+ \"5.0\", \"6.0\", \"7.0\", \"8\"]\n libcxx: [libstdc++, libstdc++11, libc++]\n+ cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]\n apple-clang:\n version: [\"5.0\", \"5.1\", \"6.0\", \"6.1\", \"7.0\", \"7.3\", \"8.0\", \"8.1\", \"9.0\", \"9.1\", \"10.0\"]\n libcxx: [libstdc++, libc++]\n+ cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]\n \n build_type: [None, Debug, Release, RelWithDebInfo, MinSizeRel]\n-cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]\n+cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20] # Deprecated, use compiler.cppstd\n \"\"\"\n \n default_client_conf = \"\"\"\ndiff --git a/conans/client/migrations_settings.py b/conans/client/migrations_settings.py\nindex b96bdbc186a..78c197cb077 100644\n--- a/conans/client/migrations_settings.py\n+++ b/conans/client/migrations_settings.py\n@@ -288,3 +288,75 @@\n settings_1_14_2 = settings_1_14_1\n settings_1_14_3 = settings_1_14_2\n settings_1_14_4 = settings_1_14_3\n+\n+settings_1_15_0 = \"\"\"\n+# Only for cross building, 'os_build/arch_build' is the system that runs Conan\n+os_build: [Windows, WindowsStore, Linux, Macos, FreeBSD, SunOS]\n+arch_build: [x86, x86_64, ppc32, ppc64le, ppc64, armv5el, armv5hf, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr, s390, s390x]\n+\n+# Only for building cross compilation tools, 'os_target/arch_target' is the system for\n+# which the tools generate code\n+os_target: [Windows, Linux, Macos, Android, iOS, watchOS, tvOS, FreeBSD, SunOS, Arduino]\n+arch_target: [x86, x86_64, ppc32, ppc64le, ppc64, armv5el, armv5hf, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr, s390, s390x]\n+\n+# Rest of the settings are \"host\" settings:\n+# - For native building/cross building: Where the library/program will run.\n+# - For building cross compilation tools: Where the cross compiler will run.\n+os:\n+ Windows:\n+ 
subsystem: [None, cygwin, msys, msys2, wsl]\n+ WindowsStore:\n+ version: [\"8.1\", \"10.0\"]\n+ Linux:\n+ Macos:\n+ version: [None, \"10.6\", \"10.7\", \"10.8\", \"10.9\", \"10.10\", \"10.11\", \"10.12\", \"10.13\", \"10.14\"]\n+ Android:\n+ api_level: ANY\n+ iOS:\n+ version: [\"7.0\", \"7.1\", \"8.0\", \"8.1\", \"8.2\", \"8.3\", \"9.0\", \"9.1\", \"9.2\", \"9.3\", \"10.0\", \"10.1\", \"10.2\", \"10.3\", \"11.0\", \"11.1\", \"11.2\", \"11.3\", \"11.4\", \"12.0\", \"12.1\"]\n+ watchOS:\n+ version: [\"4.0\", \"4.1\", \"4.2\", \"4.3\", \"5.0\", \"5.1\"]\n+ tvOS:\n+ version: [\"11.0\", \"11.1\", \"11.2\", \"11.3\", \"11.4\", \"12.0\", \"12.1\"]\n+ FreeBSD:\n+ SunOS:\n+ Arduino:\n+ board: ANY\n+arch: [x86, x86_64, ppc32, ppc64le, ppc64, armv5el, armv5hf, armv6, armv7, armv7hf, armv7s, armv7k, armv8, armv8_32, armv8.3, sparc, sparcv9, mips, mips64, avr, s390, s390x]\n+compiler:\n+ sun-cc:\n+ version: [\"5.10\", \"5.11\", \"5.12\", \"5.13\", \"5.14\"]\n+ threads: [None, posix]\n+ libcxx: [libCstd, libstdcxx, libstlport, libstdc++]\n+ gcc:\n+ version: [\"4.1\", \"4.4\", \"4.5\", \"4.6\", \"4.7\", \"4.8\", \"4.9\",\n+ \"5\", \"5.1\", \"5.2\", \"5.3\", \"5.4\", \"5.5\",\n+ \"6\", \"6.1\", \"6.2\", \"6.3\", \"6.4\",\n+ \"7\", \"7.1\", \"7.2\", \"7.3\",\n+ \"8\", \"8.1\", \"8.2\"]\n+ libcxx: [libstdc++, libstdc++11]\n+ threads: [None, posix, win32] # Windows MinGW\n+ exception: [None, dwarf2, sjlj, seh] # Windows MinGW\n+ cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]\n+ Visual Studio:\n+ runtime: [MD, MT, MTd, MDd]\n+ version: [\"8\", \"9\", \"10\", \"11\", \"12\", \"14\", \"15\", \"16\"]\n+ toolset: [None, v90, v100, v110, v110_xp, v120, v120_xp,\n+ v140, v140_xp, v140_clang_c2, LLVM-vs2012, LLVM-vs2012_xp,\n+ LLVM-vs2013, LLVM-vs2013_xp, LLVM-vs2014, LLVM-vs2014_xp,\n+ LLVM-vs2017, LLVM-vs2017_xp, v141, v141_xp, v141_clang_c2, v142]\n+ cppstd: [None, 14, 17, 20]\n+ clang:\n+ version: [\"3.3\", \"3.4\", \"3.5\", \"3.6\", \"3.7\", \"3.8\", \"3.9\", \"4.0\",\n+ \"5.0\", \"6.0\", \"7.0\",\n+ \"8\"]\n+ libcxx: [libstdc++, libstdc++11, libc++]\n+ cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]\n+ apple-clang:\n+ version: [\"5.0\", \"5.1\", \"6.0\", \"6.1\", \"7.0\", \"7.3\", \"8.0\", \"8.1\", \"9.0\", \"9.1\", \"10.0\"]\n+ libcxx: [libstdc++, libc++]\n+ cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]\n+\n+build_type: [None, Debug, Release, RelWithDebInfo, MinSizeRel]\n+cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20] # Deprecated, use compiler.cppstd\n+\"\"\"\n\\ No newline at end of file\ndiff --git a/conans/client/settings_preprocessor.py b/conans/client/settings_preprocessor.py\nindex 73b0dcc3c81..3b9a1815cda 100644\n--- a/conans/client/settings_preprocessor.py\n+++ b/conans/client/settings_preprocessor.py\n@@ -1,30 +1,73 @@\n+import warnings\n+\n+from conans.client.build.cppstd_flags import cppstd_default\n from conans.client.build.cppstd_flags import cppstd_flag\n from conans.errors import ConanException\n from conans.util.log import logger\n \n \n def preprocess(settings):\n- fill_runtime(settings)\n- check_cppstd(settings)\n+ _fill_runtime(settings)\n+ _fill_compiler_cppstd(settings)\n+ _check_cppstd(settings)\n+\n+\n+def _fill_compiler_cppstd(settings):\n+ compiler = settings.get_safe(\"compiler\")\n+ compiler_version = settings.get_safe(\"compiler.version\")\n+ cppstd = settings.get_safe(\"cppstd\")\n+ compiler_cppstd = settings.get_safe(\"compiler.cppstd\")\n+\n+ # Assign the explicit default value to 
compiler.cppstd (only if not given cppstd)\n+ if not cppstd and not compiler_cppstd and compiler and compiler_version:\n+ default_cppstd = cppstd_default(compiler, compiler_version)\n+ if default_cppstd:\n+ try:\n+ settings.compiler.cppstd = default_cppstd\n+ except Exception:\n+ # Settings structure does not have cppstd\n+ pass\n \n \n-def check_cppstd(settings):\n+def _check_cppstd(settings):\n compiler = settings.get_safe(\"compiler\")\n compiler_version = settings.get_safe(\"compiler.version\")\n cppstd = settings.get_safe(\"cppstd\")\n- if not cppstd or compiler not in (\"gcc\", \"clang\", \"apple-clang\", \"Visual Studio\"):\n+ compiler_cppstd = settings.get_safe(\"compiler.cppstd\")\n+\n+ if not cppstd and not compiler_cppstd:\n return\n- cpp_values = settings.cppstd.values_range\n- available = [v for v in cpp_values if cppstd_flag(compiler, compiler_version, v)]\n- if str(cppstd) not in available:\n- raise ConanException(\"The specified 'cppstd=%s' is not available \"\n- \"for '%s %s'. Possible values are %s'\" % (cppstd,\n- compiler,\n- compiler_version,\n- available))\n+\n+ # Checks: one or the other, but not both\n+ if cppstd and compiler_cppstd:\n+ raise ConanException(\"Do not use settings 'compiler.cppstd' together with 'cppstd'.\"\n+ \" Use only the former one.\")\n+\n+ if cppstd:\n+ warnings.warn(\"Setting 'cppstd' is deprecated in favor of 'compiler.cppstd'\")\n+\n+ if compiler not in (\"gcc\", \"clang\", \"apple-clang\", \"Visual Studio\"):\n+ return\n+\n+ # Check that we have a flag available for that value of the C++ Standard\n+ def check_flag_available(values_range, value, setting_id):\n+ available = [v for v in values_range if cppstd_flag(compiler, compiler_version, v)]\n+ if str(value) not in available:\n+ raise ConanException(\"The specified '%s=%s' is not available \"\n+ \"for '%s %s'. 
Possible values are %s'\" % (setting_id,\n+ value,\n+ compiler,\n+ compiler_version,\n+ available))\n+\n+ if cppstd:\n+ check_flag_available(settings.cppstd.values_range, cppstd, \"cppstd\")\n+ else:\n+ check_flag_available(settings.compiler.cppstd.values_range,\n+ compiler_cppstd, \"compiler.cppstd\")\n \n \n-def fill_runtime(settings):\n+def _fill_runtime(settings):\n try:\n if settings.compiler == \"Visual Studio\":\n if settings.get_safe(\"compiler.runtime\") is None:\ndiff --git a/conans/client/tools/system_pm.py b/conans/client/tools/system_pm.py\nindex d3599758427..7c66f0ceece 100644\n--- a/conans/client/tools/system_pm.py\n+++ b/conans/client/tools/system_pm.py\n@@ -148,8 +148,8 @@ def update(self):\n pass\n \n def install(self, package_name):\n- self._output.warn(\"Only available for linux with apt-get, yum, or pacman or OSX with brew or \"\n- \"FreeBSD with pkg or Solaris with pkgutil\")\n+ self._output.warn(\"Only available for linux with apt-get, yum, or pacman or OSX with brew or\"\n+ \" FreeBSD with pkg or Solaris with pkgutil\")\n \n def installed(self, package_name):\n return False\ndiff --git a/conans/model/conan_file.py b/conans/model/conan_file.py\nindex 848ea67b279..92c876d662c 100644\n--- a/conans/model/conan_file.py\n+++ b/conans/model/conan_file.py\n@@ -3,6 +3,7 @@\n \n from conans.client import tools\n from conans.client.output import Color, ScopedOutput\n+from conans.client.settings_preprocessor import preprocess\n from conans.client.tools.env import environment_append, no_op, pythonpath\n from conans.client.tools.oss import OSInfo\n from conans.errors import ConanException\n@@ -132,6 +133,12 @@ def initialize(self, settings, env):\n self.options = create_options(self)\n self.requires = create_requirements(self)\n self.settings = create_settings(self, settings)\n+\n+ try:\n+ preprocess(self.settings)\n+ except ConanException as e:\n+ raise ConanException(\"Package '{}': {}\".format(self.display_name, e))\n+\n try:\n if self.settings.os_build and self.settings.os:\n self.output.writeln(\"*\"*60, front=Color.BRIGHT_RED)\n@@ -145,6 +152,10 @@ def initialize(self, settings, env):\n except ConanException:\n pass\n \n+ if 'cppstd' in self.settings.fields:\n+ self.output.writeln(\"Setting 'cppstd' is deprecated in favor of 'compiler.cppstd',\"\n+ \" please update your recipe.\", front=Color.BRIGHT_RED)\n+\n # needed variables to pack the project\n self.cpp_info = None # Will be initialized at processing time\n self.deps_cpp_info = DepsCppInfo()\ndiff --git a/conans/model/info.py b/conans/model/info.py\nindex 475034bfa47..4b7ed51d2db 100644\n--- a/conans/model/info.py\n+++ b/conans/model/info.py\n@@ -231,6 +231,9 @@ def serialize(self):\n \n class ConanInfo(object):\n \n+ def __init__(self):\n+ self._adjust_settings_for_std = lambda u: u\n+\n def copy(self):\n \"\"\" Useful for build_id implementation\n \"\"\"\n@@ -271,6 +274,9 @@ def loads(text):\n result = ConanInfo()\n result.settings = Values.loads(parser.settings)\n result.full_settings = Values.loads(parser.full_settings)\n+ # TODO: Apply here the _fill_compiler_cppstd for settings and full_settings for legacy\n+ # TODO: packages, but there is no way to know the std_matching mode if ID is requested?\n+\n result.options = OptionsValues.loads(parser.options)\n result.full_options = OptionsValues.loads(parser.full_options)\n result.full_requires = _PackageReferenceList.loads(parser.full_requires)\n@@ -336,7 +342,7 @@ def package_id(self):\n options and settings\n \"\"\"\n result = []\n- 
result.append(self.settings.sha)\n+ result.append(self._settings_sha())\n # Only are valid requires for OPtions those Non-Dev who are still in requires\n self.options.filter_used(self.requires.pkg_names)\n result.append(self.options.sha)\n@@ -398,14 +404,31 @@ def default_std_matching(self):\n same as specifying None, packages are the same\n \"\"\"\n \n- if self.full_settings.cppstd and \\\n- self.full_settings.compiler and \\\n- self.full_settings.compiler.version:\n+ if self.full_settings.compiler and \\\n+ self.full_settings.compiler.version:\n default = cppstd_default(str(self.full_settings.compiler),\n str(self.full_settings.compiler.version))\n- if default == str(self.full_settings.cppstd):\n+\n+ if str(self.full_settings.cppstd) == default:\n self.settings.cppstd = None\n \n+ if str(self.full_settings.compiler.cppstd) == default:\n+ def remove_cppstd(settings):\n+ try:\n+ settings.compiler.cppstd = None # It's the default, assign None\n+ except AttributeError:\n+ # Settings can be different at the moment of executing this function\n+ pass\n+ finally:\n+ return settings\n+ self._adjust_settings_for_std = remove_cppstd\n+\n def default_std_non_matching(self):\n if self.full_settings.cppstd:\n self.settings.cppstd = self.full_settings.cppstd\n+ self._adjust_settings_for_std = lambda u: u # Do nothing\n+\n+ def _settings_sha(self):\n+ settings = self.settings.copy()\n+ settings = self._adjust_settings_for_std(settings)\n+ return settings.sha\ndiff --git a/conans/test/functional/build_helpers/cmake_flags_test.py b/conans/test/functional/build_helpers/cmake_flags_test.py\nindex 4d470d026ac..f6b466b1742 100644\n--- a/conans/test/functional/build_helpers/cmake_flags_test.py\n+++ b/conans/test/functional/build_helpers/cmake_flags_test.py\n@@ -6,9 +6,9 @@\n from nose.plugins.attrib import attr\n from parameterized.parameterized import parameterized\n \n-from conans import load\n from conans.client.build.cmake import CMake\n from conans.model.version import Version\n+from conans.test.utils.deprecation import catch_deprecation_warning\n from conans.test.utils.tools import TestClient\n \n conanfile_py = \"\"\"\n@@ -322,20 +322,26 @@ def build(self):\n \"\"\"})\n \n if platform.system() != \"Windows\":\n- client.run(\"install . --install-folder=build -s cppstd=gnu98\")\n- client.run(\"build . --build-folder=build\", assert_error=True)\n+ with catch_deprecation_warning(self, n=2):\n+ client.run(\"install . --install-folder=build -s cppstd=gnu98\")\n+ with catch_deprecation_warning(self):\n+ client.run(\"build . --build-folder=build\", assert_error=True)\n self.assertIn(\"Error in build()\", client.out)\n \n # Now specify c++14\n- client.run(\"install . --install-folder=build -s cppstd=gnu14\")\n- client.run(\"build . --build-folder=build\")\n+ with catch_deprecation_warning(self, n=2):\n+ client.run(\"install . --install-folder=build -s cppstd=gnu14\")\n+ with catch_deprecation_warning(self):\n+ client.run(\"build . --build-folder=build\")\n self.assertIn(\"CPP STANDARD: 14 WITH EXTENSIONS ON\", client.out)\n libname = \"libmylib.a\" if platform.system() != \"Windows\" else \"mylib.lib\"\n libpath = os.path.join(client.current_folder, \"build\", \"lib\", libname)\n self.assertTrue(os.path.exists(libpath))\n \n- client.run(\"install . --install-folder=build -s cppstd=14\")\n- client.run(\"build . --build-folder=build\")\n+ with catch_deprecation_warning(self, n=2):\n+ client.run(\"install . --install-folder=build -s cppstd=14\")\n+ with catch_deprecation_warning(self):\n+ client.run(\"build . 
--build-folder=build\")\n self.assertIn(\"CPP STANDARD: 14 WITH EXTENSIONS OFF\", client.out)\n self.assertNotIn(\"Conan setting CXX_FLAGS flags\", client.out)\n libname = \"libmylib.a\" if platform.system() != \"Windows\" else \"mylib.lib\"\n@@ -375,15 +381,17 @@ def conan_set_std_branch():\n cmake_version = CMake.get_version()\n return cmake_version < Version(\"3.12\")\n \n- client.run(\"create . user/channel -s cppstd=gnu20 -s compiler=gcc -s compiler.version=8 \"\n- \"-s compiler.libcxx=libstdc++11\")\n+ with catch_deprecation_warning(self, n=2):\n+ client.run(\"create . user/channel -s cppstd=gnu20 -s compiler=gcc -s compiler.version=8 \"\n+ \"-s compiler.libcxx=libstdc++11\")\n if conan_set_std_branch():\n self.assertIn(\"Conan setting CXX_FLAGS flags: -std=gnu++2a\", client.out)\n else:\n self.assertIn(\"Conan setting CPP STANDARD: 20 WITH EXTENSIONS ON\", client.out)\n \n- client.run(\"create . user/channel -s cppstd=20 -s compiler=gcc -s compiler.version=8 \"\n- \"-s compiler.libcxx=libstdc++11\")\n+ with catch_deprecation_warning(self, n=2):\n+ client.run(\"create . user/channel -s cppstd=20 -s compiler=gcc -s compiler.version=8 \"\n+ \"-s compiler.libcxx=libstdc++11\")\n if conan_set_std_branch():\n self.assertIn(\"Conan setting CXX_FLAGS flags: -std=c++2a\", client.out)\n else:\ndiff --git a/conans/test/functional/build_helpers/msbuild_test.py b/conans/test/functional/build_helpers/msbuild_test.py\nindex 8a0923770b6..97c1457efb3 100644\n--- a/conans/test/functional/build_helpers/msbuild_test.py\n+++ b/conans/test/functional/build_helpers/msbuild_test.py\n@@ -10,6 +10,7 @@\n from conans.paths import CONANFILE\n from conans.test.utils.tools import TestClient\n from conans.test.utils.visual_project_files import get_vs_project_files\n+from conans.test.utils.deprecation import catch_deprecation_warning\n \n \n class MSBuildTest(unittest.TestCase):\n@@ -41,10 +42,12 @@ def package(self):\n files[CONANFILE] = conan_build_vs\n \n client.save(files)\n- client.run('create . Hello/1.2.1@lasote/stable -s cppstd=11 -s '\n- 'compiler=\"Visual Studio\" -s compiler.version=14', assert_error=True)\n- client.run('create . Hello/1.2.1@lasote/stable -s cppstd=17 '\n- '-s compiler=\"Visual Studio\" -s compiler.version=14')\n+ with catch_deprecation_warning(self):\n+ client.run('create . Hello/1.2.1@lasote/stable -s cppstd=11 -s '\n+ 'compiler=\"Visual Studio\" -s compiler.version=14', assert_error=True)\n+ with catch_deprecation_warning(self, n=2):\n+ client.run('create . 
Hello/1.2.1@lasote/stable -s cppstd=17 '\n+ '-s compiler=\"Visual Studio\" -s compiler.version=14')\n self.assertIn(\"Packaged 1 '.exe' file: MyProject.exe\", client.out)\n \n files = get_vs_project_files()\ndiff --git a/conans/test/functional/configuration/profile_test.py b/conans/test/functional/configuration/profile_test.py\nindex 44bf0bea550..fb0681a7062 100644\n--- a/conans/test/functional/configuration/profile_test.py\n+++ b/conans/test/functional/configuration/profile_test.py\n@@ -551,6 +551,7 @@ def profile_crazy_inheritance_test(self):\n [settings]\n arch=x86_64\n compiler=Visual Studio\n+ compiler.cppstd=14\n compiler.runtime=MD\n compiler.version=15\n os=Windows\"\"\"), self.client.out)\ndiff --git a/conans/test/functional/cppstd/compiler_cppstd_test.py b/conans/test/functional/cppstd/compiler_cppstd_test.py\nnew file mode 100644\nindex 00000000000..2da8acbb502\n--- /dev/null\n+++ b/conans/test/functional/cppstd/compiler_cppstd_test.py\n@@ -0,0 +1,164 @@\n+# coding=utf-8\n+\n+import os\n+import textwrap\n+import unittest\n+\n+from parameterized import parameterized\n+from parameterized.parameterized import parameterized_class\n+\n+from conans.client.tools import environment_append, save\n+from conans.test.utils.deprecation import catch_deprecation_warning\n+from conans.test.utils.test_files import temp_folder\n+from conans.test.utils.tools import TestClient\n+\n+\n+@parameterized_class([{\"recipe_cppstd\": True}, {\"recipe_cppstd\": False}, ])\n+class SettingsCppStdScopedPackageTests(unittest.TestCase):\n+ # Validation of scoped settings is delayed until graph computation, a conanfile can\n+ # declare a different set of settings, so we should wait until then to validate it.\n+\n+ default_profile = textwrap.dedent(\"\"\"\n+ [settings]\n+ os=Linux\n+ arch=x86\n+ compiler=gcc\n+ compiler.version=7\n+ compiler.libcxx=libstdc++11\n+ \"\"\")\n+\n+ def run(self, *args, **kwargs):\n+ default_profile_path = os.path.join(temp_folder(), \"default.profile\")\n+ save(default_profile_path, self.default_profile)\n+ with environment_append({\"CONAN_DEFAULT_PROFILE_PATH\": default_profile_path}):\n+ unittest.TestCase.run(self, *args, **kwargs)\n+\n+ def setUp(self):\n+ self.tmp_folder = temp_folder()\n+ self.t = TestClient(base_folder=self.tmp_folder)\n+\n+ settings = [\"os\", \"compiler\", \"build_type\", \"arch\"]\n+ if self.recipe_cppstd:\n+ settings += [\"cppstd\"]\n+\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ \n+ class Lib(ConanFile):\n+ settings = \"{}\" \n+ \"\"\".format('\", \"'.join(settings)))\n+ self.t.save({\"conanfile.py\": conanfile})\n+\n+ def test_value_invalid(self):\n+ self.t.run(\"create . hh/0.1@user/channel -shh:compiler=apple-clang -shh:compiler.cppstd=144\",\n+ assert_error=True)\n+ self.assertIn(\"Invalid setting '144' is not a valid 'settings.compiler.cppstd' value\",\n+ self.t.out)\n+\n+ def test_value_different_with_scoped_setting(self):\n+ self.t.run(\"create . hh/0.1@user/channel\"\n+ \" -s hh:cppstd=11\"\n+ \" -s hh:compiler=gcc\"\n+ \" -s hh:compiler.cppstd=14\", assert_error=self.recipe_cppstd)\n+ if self.recipe_cppstd:\n+ self.assertIn(\"Package 'hh/0.1@user/channel': Do not use settings 'compiler.cppstd'\"\n+ \" together with 'cppstd'. 
Use only the former one.\", self.t.out)\n+ else:\n+ # TODO: Settings are being constrained before checking...\n+ pass\n+\n+ def test_value_different_with_general_setting(self):\n+ deprecation_number = 1 if self.recipe_cppstd else 0\n+ with catch_deprecation_warning(self, n=deprecation_number):\n+ self.t.run(\"create . hh/0.1@user/channel\"\n+ \" -s cppstd=17\"\n+ \" -s hh:compiler=gcc\"\n+ \" -s hh:compiler.cppstd=14\", assert_error=self.recipe_cppstd)\n+ if self.recipe_cppstd:\n+ self.assertIn(\"Package 'hh/0.1@user/channel': Do not use settings 'compiler.cppstd'\"\n+ \" together with 'cppstd'. Use only the former one.\", self.t.out)\n+\n+ def test_conanfile_without_compiler(self):\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+\n+ class Lib(ConanFile):\n+ settings = \"os\", \"arch\"\n+ \"\"\")\n+ t = TestClient(base_folder=temp_folder())\n+ t.save({'conanfile.py': conanfile})\n+\n+ with catch_deprecation_warning(self):\n+ # No mismatch, because settings for this conanfile does not include `compiler`\n+ t.run(\"create . hh/0.1@user/channel\"\n+ \" -s cppstd=17\"\n+ \" -s hh:compiler=gcc\"\n+ \" -s hh:compiler.cppstd=14\")\n+ # TODO: Settings are being constrained before checking...\n+\n+ def test_conanfile_without_compiler_but_cppstd(self):\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+\n+ class Lib(ConanFile):\n+ settings = \"os\", \"arch\", \"cppstd\"\n+ \n+ def configure(self):\n+ self.output.info(\">>> cppstd: {}\".format(self.settings.cppstd))\n+ \"\"\")\n+ t = TestClient(base_folder=temp_folder())\n+ t.save({'conanfile.py': conanfile}, clean_first=True)\n+\n+ with catch_deprecation_warning(self, n=2):\n+ # No mismatch, because settings for this conanfile does not include `compiler`\n+ t.run(\"create . hh/0.1@user/channel\"\n+ \" -s cppstd=17\"\n+ \" -s hh:compiler=gcc\"\n+ \" -s hh:compiler.cppstd=14\")\n+ self.assertIn(\"Setting 'cppstd' is deprecated in favor of 'compiler.cppstd'\", t.out)\n+ self.assertIn(\">>> cppstd: 17\", t.out)\n+ # TODO: Settings are being constrained before checking...\n+\n+\n+class UseCompilerCppStdSettingTests(unittest.TestCase):\n+\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ \n+ class Lib(ConanFile):\n+ settings = \"cppstd\", \"os\", \"compiler\", \"arch\", \"build_type\"\n+ \n+ def configure(self):\n+ self.output.info(\">>> cppstd: {}\".format(self.settings.cppstd))\n+ self.output.info(\">>> compiler.cppstd: {}\".format(self.settings.compiler.cppstd))\n+ \"\"\")\n+\n+ def setUp(self):\n+ self.t = TestClient()\n+ self.t.save({'conanfile.py': self.conanfile})\n+\n+ def test_user_notice(self):\n+ self.t.run(\"info .\")\n+ self.assertIn(\"Setting 'cppstd' is deprecated in favor of 'compiler.cppstd',\"\n+ \" please update your recipe.\", self.t.out)\n+\n+ def test_only_cppstd(self):\n+ with catch_deprecation_warning(self, n=2):\n+ self.t.run(\"info . -s cppstd=14\")\n+ self.assertNotIn(\">>> compiler.cppstd: 14\", self.t.out)\n+ self.assertIn(\">>> cppstd: 14\", self.t.out)\n+ self.assertIn(\">>> compiler.cppstd: None\", self.t.out)\n+\n+ def test_only_compiler_cppstd(self):\n+ \"\"\" settings.cppstd is available only if declared explicitly (otherwise it is deprecated) \"\"\"\n+ self.t.run(\"info . 
-s compiler.cppstd=14\")\n+ self.assertNotIn(\">>> cppstd: 14\", self.t.out)\n+ self.assertIn(\">>> cppstd: None\", self.t.out)\n+ self.assertIn(\">>> compiler.cppstd: 14\", self.t.out)\n+\n+ def test_both(self):\n+ settings_str = \"-s cppstd=14 -s compiler.cppstd=14\"\n+ self.t.run(\"info . {}\".format(settings_str), assert_error=True)\n+ self.assertIn(\"Do not use settings 'compiler.cppstd' together with 'cppstd'.\"\n+ \" Use only the former one.\", self.t.out)\n+\ndiff --git a/conans/test/functional/cppstd/default_cppstd_test.py b/conans/test/functional/cppstd/default_cppstd_test.py\nindex a1188f46f05..b9cd086da0e 100644\n--- a/conans/test/functional/cppstd/default_cppstd_test.py\n+++ b/conans/test/functional/cppstd/default_cppstd_test.py\n@@ -7,13 +7,12 @@\n \n from conans.client.build.cppstd_flags import cppstd_default\n from conans.client.tools import environment_append, save, load\n+from conans.test.utils.deprecation import catch_deprecation_warning\n from conans.test.utils.test_files import temp_folder\n from conans.test.utils.tools import TestClient\n \n \n class DefaultCppTestCase(unittest.TestCase):\n- # Validate package ID computed taking into account different cppstd scenarios\n-\n compiler = \"gcc\"\n compiler_version = \"7\"\n \n@@ -34,14 +33,15 @@ class Library(ConanFile):\n \n def configure(self):\n cppstd = self.settings.get_safe(\"cppstd\")\n+ compiler_cppstd = self.settings.get_safe(\"compiler.cppstd\")\n self.output.info(\">>>> settings: {{}}\".format(self.settings.fields))\n self.output.info(\">>>> cppstd: {{}}\".format(cppstd))\n+ self.output.info(\">>>> compiler.cppstd: {{}}\".format(compiler_cppstd))\n \"\"\")\n \n id_default = \"d17189cfe7b11efbc5d701339a32d203745f8b81\"\n \n def run(self, *args, **kwargs):\n- # Create and use a different default profile\n default_profile_path = os.path.join(temp_folder(), \"default.profile\")\n save(default_profile_path, self.default_profile)\n with environment_append({\"CONAN_DEFAULT_PROFILE_PATH\": default_profile_path}):\n@@ -54,6 +54,7 @@ def setUp(self):\n self.assertEqual(target_id, self.id_default)\n self.assertIn(\">>>> settings: ['compiler', 'os']\", output)\n self.assertIn(\">>>> cppstd: None\", output)\n+ self.assertIn(\">>>> compiler.cppstd: gnu14\", output)\n \n def _get_id(self, with_cppstd, settings_values=None):\n # Create the conanfile with corresponding settings\n@@ -75,14 +76,22 @@ def _get_id(self, with_cppstd, settings_values=None):\n data = json.loads(load(json_file))\n self.assertEqual(len(data), 1)\n \n- # Return: ID, output\n+ # Return ID, output\n return data[0][\"id\"], info_output\n \n+\n+class SettingsCppStdTests(DefaultCppTestCase):\n+ \"\"\"\n+ Validate package ID computed taking into account different scenarios for 'cppstd'. 
The ID\n+ should be the same if the setting is not provided and if it has the default value.\n+ \"\"\"\n+\n def test_no_value(self):\n # No value passed for setting 'cppstd'\n id_with, output = self._get_id(with_cppstd=True) # TODO: Should raise?\n self.assertIn(\">>>> settings: ['compiler', 'cppstd', 'os']\", output)\n self.assertIn(\">>>> cppstd: None\", output)\n+ self.assertIn(\">>>> compiler.cppstd: gnu14\", output)\n self.assertEqual(self.id_default, id_with)\n \n def test_value_none(self):\n@@ -90,20 +99,84 @@ def test_value_none(self):\n id_with, output = self._get_id(with_cppstd=True, settings_values={\"cppstd\": \"None\"})\n self.assertIn(\">>>> settings: ['compiler', 'cppstd', 'os']\", output)\n self.assertIn(\">>>> cppstd: None\", output)\n+ self.assertIn(\">>>> compiler.cppstd: gnu14\", output)\n self.assertEqual(self.id_default, id_with)\n \n def test_value_default(self):\n # Explicit value (equals to default) passed to setting 'cppstd'\n cppstd = cppstd_default(self.compiler, self.compiler_version)\n- id_with, output = self._get_id(with_cppstd=True, settings_values={\"cppstd\": cppstd})\n+ with catch_deprecation_warning(self, n=2):\n+ id_with, output = self._get_id(with_cppstd=True, settings_values={\"cppstd\": cppstd})\n self.assertIn(\">>>> settings: ['compiler', 'cppstd', 'os']\", output)\n self.assertIn(\">>>> cppstd: gnu14\", output)\n+ self.assertIn(\">>>> compiler.cppstd: None\", output)\n self.assertEqual(self.id_default, id_with)\n \n- def test_value_other(self):\n+ def test_value_non_default(self):\n # Explicit value (not the default) passed to setting 'cppstd'\n- id_with, output = self._get_id(with_cppstd=True, settings_values={\"cppstd\": \"14\"})\n+ with catch_deprecation_warning(self, n=2):\n+ id_with, output = self._get_id(with_cppstd=True, settings_values={\"cppstd\": \"14\"})\n self.assertIn(\">>>> settings: ['compiler', 'cppstd', 'os']\", output)\n self.assertIn(\">>>> cppstd: 14\", output)\n+ self.assertIn(\">>>> compiler.cppstd: None\", output)\n self.assertNotEqual(self.id_default, id_with)\n \n+\n+class SettingsCompilerCppStdTests(DefaultCppTestCase):\n+ \"\"\"\n+ Validate package ID computed taking into account different scenarios for 'compiler.cppstd'. 
The\n+ ID has to be the same if the setting is not informed and if it has the default value, also\n+ these values should be the same as the ones using the 'cppstd' approach.\n+ \"\"\"\n+\n+ def _get_id(self, with_cppstd=False, settings_values=None):\n+ assert not with_cppstd\n+ return super(SettingsCompilerCppStdTests, self)._get_id(with_cppstd=False,\n+ settings_values=settings_values)\n+\n+ def test_value_none(self):\n+ # Explicit value 'None' passed to setting 'cppstd'\n+ id_with, output = self._get_id(settings_values={\"compiler.cppstd\": \"None\"})\n+ self.assertIn(\">>>> settings: ['compiler', 'os']\", output)\n+ self.assertIn(\">>>> cppstd: None\", output)\n+ self.assertIn(\">>>> compiler.cppstd: gnu14\", output)\n+ self.assertEqual(self.id_default, id_with)\n+\n+ def test_value_default(self):\n+ # Explicit value (equals to default) passed to setting 'compiler.cppstd'\n+ cppstd = cppstd_default(self.compiler, self.compiler_version)\n+ id_with, output = self._get_id(settings_values={\"compiler.cppstd\": cppstd})\n+ self.assertIn(\">>>> settings: ['compiler', 'os']\", output)\n+ self.assertIn(\">>>> cppstd: None\", output)\n+ self.assertIn(\">>>> compiler.cppstd: gnu14\", output)\n+ self.assertEqual(self.id_default, id_with)\n+\n+ def test_value_other(self):\n+ # Explicit value (not the default) passed to setting 'cppstd'\n+ id_with, output = self._get_id(settings_values={\"compiler.cppstd\": \"14\"})\n+ self.assertIn(\">>>> settings: ['compiler', 'os']\", output)\n+ self.assertIn(\">>>> cppstd: None\", output)\n+ self.assertIn(\">>>> compiler.cppstd: 14\", output)\n+ self.assertNotEqual(self.id_default, id_with)\n+\n+\n+class SettingsCompareCppStdApproaches(DefaultCppTestCase):\n+ \"\"\"\n+ Check scenario using 'cppstd' and 'compiler.cppstd', if those are given the same value\n+ (but different from the default one) then the ID for the packages is not required to be\n+ the same.\n+ \"\"\"\n+\n+ def test_cppstd_non_defaults(self):\n+ cppstd_value = \"14\" # Not the default\n+ with catch_deprecation_warning(self, n=2):\n+ id_with_old, _ = self._get_id(with_cppstd=True, settings_values={\"cppstd\": cppstd_value})\n+ id_with_new, _ = self._get_id(with_cppstd=False,\n+ settings_values={'compiler.cppstd': cppstd_value})\n+\n+ # Those are different from the target one (ID using default value or None)\n+ self.assertNotEqual(self.id_default, id_with_old)\n+ self.assertNotEqual(self.id_default, id_with_new)\n+\n+ # They are different between them\n+ self.assertNotEqual(id_with_new, id_with_old)\ndiff --git a/conans/test/integration/cppstd_test.py b/conans/test/integration/cppstd_test.py\nindex a0f63efda2b..8f5e1a88c6e 100644\n--- a/conans/test/integration/cppstd_test.py\n+++ b/conans/test/integration/cppstd_test.py\n@@ -1,6 +1,7 @@\n import unittest\n \n from conans.paths import CONANFILE\n+from conans.test.utils.deprecation import catch_deprecation_warning\n from conans.test.utils.tools import TestClient\n \n \n@@ -18,15 +19,17 @@ class TestConan(ConanFile):\n \n \"\"\"\n client.save({CONANFILE: conanfile})\n- client.run('create . user/testing -s compiler=\"gcc\" '\n- '-s compiler.libcxx=\"libstdc++11\" '\n- '-s compiler.version=\"4.6\" -s cppstd=17', assert_error=True)\n+ with catch_deprecation_warning(self):\n+ client.run('create . 
user/testing -s compiler=\"gcc\" '\n+ '-s compiler.libcxx=\"libstdc++11\" '\n+ '-s compiler.version=\"4.6\" -s cppstd=17', assert_error=True)\n \n self.assertIn(\"The specified 'cppstd=17' is not available for 'gcc 4.6'\", client.out)\n self.assertIn(\"Possible values are ['11', '98', 'gnu11', 'gnu98']\", client.out)\n \n- client.run('create . user/testing -s compiler=\"gcc\" -s compiler.libcxx=\"libstdc++11\" '\n- '-s compiler.version=\"6.3\" -s cppstd=17')\n+ with catch_deprecation_warning(self, n=2):\n+ client.run('create . user/testing -s compiler=\"gcc\" -s compiler.libcxx=\"libstdc++11\" '\n+ '-s compiler.version=\"6.3\" -s cppstd=17')\n \n def gcc_8_std_20_test(self):\n client = TestClient()\n@@ -40,9 +43,10 @@ class TestConan(ConanFile):\n \n \"\"\"\n client.save({CONANFILE: conanfile})\n- client.run('create . user/testing -s compiler=\"gcc\" '\n- '-s compiler.libcxx=\"libstdc++11\" '\n- '-s compiler.version=\"8\" -s cppstd=20')\n+ with catch_deprecation_warning(self, n=2):\n+ client.run('create . user/testing -s compiler=\"gcc\" '\n+ '-s compiler.libcxx=\"libstdc++11\" '\n+ '-s compiler.version=\"8\" -s cppstd=20')\n \n def set_default_package_id_test(self):\n client = TestClient()\n@@ -56,7 +60,8 @@ class TestConan(ConanFile):\n def build(self):\n self.output.warn(\"BUILDING!\")\n \"\"\"\n- client.save({CONANFILE: conanfile % \"\"}) # Without the setting\n+ # Without the setting\n+ client.save({CONANFILE: conanfile % \"\"})\n client.run('create . user/testing -s compiler=\"gcc\" -s compiler.version=\"7.1\" '\n '-s compiler.libcxx=\"libstdc++\" '\n '--build missing')\n@@ -64,10 +69,12 @@ def build(self):\n \n # Add the setting but with the default value, should not build again\n client.save({CONANFILE: conanfile % '\"cppstd\"'}) # With the setting\n- client.run('create . user/testing -s compiler=\"gcc\" -s compiler.version=\"7.1\" '\n- '-s compiler.libcxx=\"libstdc++\" '\n- '-s cppstd=gnu14 '\n- '--build missing')\n+ with catch_deprecation_warning(self, n=2):\n+ client.run('create . user/testing -s compiler=\"gcc\" -s compiler.version=\"7.1\" '\n+ '-s compiler.libcxx=\"libstdc++\" '\n+ '-s cppstd=gnu14 '\n+ '--build missing')\n+\n if client.cache.config.revisions_enabled:\n self.assertIn(\"doesn't belong to the installed recipe revision, removing folder\",\n client.out)\n@@ -77,8 +84,9 @@ def build(self):\n \n # Add the setting but with a non-default value, should build again\n client.save({CONANFILE: conanfile % '\"cppstd\"'}) # With the setting\n- client.run('create . user/testing -s compiler=\"gcc\" -s compiler.version=\"7.1\" '\n- '-s compiler.libcxx=\"libstdc++\" '\n- '-s cppstd=gnu17 '\n- '--build missing')\n+ with catch_deprecation_warning(self, n=2):\n+ client.run('create . 
user/testing -s compiler=\"gcc\" -s compiler.version=\"7.1\" '\n+ '-s compiler.libcxx=\"libstdc++\" '\n+ '-s cppstd=gnu17 '\n+ '--build missing')\n self.assertIn(\"BUILDING!\", client.out)\ndiff --git a/conans/test/integration/package_id_test.py b/conans/test/integration/package_id_test.py\nindex 95e84293fb8..b6b261486ff 100644\n--- a/conans/test/integration/package_id_test.py\n+++ b/conans/test/integration/package_id_test.py\n@@ -5,6 +5,7 @@\n from conans.model.ref import ConanFileReference, PackageReference\n from conans.paths import CONANINFO\n from conans.test.utils.conanfile import TestConanFile\n+from conans.test.utils.deprecation import catch_deprecation_warning\n from conans.test.utils.tools import TestClient\n from conans.util.env_reader import get_env\n from conans.util.files import load\n@@ -37,7 +38,6 @@ def _export(self, name, version, package_id_text=None, requires=None,\n default_options=[(\"an_option\", \"%s\" % default_option_value)],\n package_id=package_id_text,\n settings=settings)\n-\n self.client.save({\"conanfile.py\": str(conanfile)}, clean_first=True)\n revisions_enabled = self.client.cache.config.revisions_enabled\n self.client.disable_revisions()\n@@ -369,15 +369,21 @@ def test_standard_version_default_matching(self):\n channel=\"user/testing\",\n settings='\"compiler\", \"cppstd\"')\n \n- self.client.run('install Hello/1.2.0@user/testing '\n- ' -s compiler=\"gcc\" -s compiler.libcxx=libstdc++11'\n- ' -s compiler.version=7.2 -s cppstd=gnu14') # Default, already built\n+ with catch_deprecation_warning(self, n=2):\n+ self.client.run('info Hello/1.2.0@user/testing -s compiler=\"gcc\" '\n+ '-s compiler.libcxx=libstdc++11 -s compiler.version=7.2 '\n+ '-s cppstd=gnu14')\n+ with catch_deprecation_warning(self, n=2):\n+ self.client.run('install Hello/1.2.0@user/testing'\n+ ' -s compiler=\"gcc\" -s compiler.libcxx=libstdc++11'\n+ ' -s compiler.version=7.2 -s cppstd=gnu14') # Default, already built\n \n # Should NOT have binary available\n- self.client.run('install Hello/1.2.0@user/testing'\n- ' -s compiler=\"gcc\" -s compiler.libcxx=libstdc++11'\n- ' -s compiler.version=7.2 -s cppstd=gnu11',\n- assert_error=True)\n+ with catch_deprecation_warning(self, n=2):\n+ self.client.run('install Hello/1.2.0@user/testing'\n+ ' -s compiler=\"gcc\" -s compiler.libcxx=libstdc++11'\n+ ' -s compiler.version=7.2 -s cppstd=gnu11',\n+ assert_error=True)\n \n self.assertIn(\"Missing prebuilt package for 'Hello/1.2.0@user/testing'\", self.client.out)\n \n@@ -394,8 +400,9 @@ def test_standard_version_default_non_matching(self):\n channel=\"user/testing\",\n settings='\"compiler\", \"cppstd\"'\n )\n- self.client.run('install Hello/1.2.0@user/testing '\n- ' -s compiler=\"gcc\" -s compiler.libcxx=libstdc++11'\n- ' -s compiler.version=7.2 -s cppstd=gnu14',\n- assert_error=True) # Default\n+ with catch_deprecation_warning(self, n=2):\n+ self.client.run('install Hello/1.2.0@user/testing'\n+ ' -s compiler=\"gcc\" -s compiler.libcxx=libstdc++11'\n+ ' -s compiler.version=7.2 -s cppstd=gnu14',\n+ assert_error=True) # Default\n self.assertIn(\"Missing prebuilt package for 'Hello/1.2.0@user/testing'\", self.client.out)\ndiff --git a/conans/test/unittests/client/generators/cmake_paths_test.py b/conans/test/unittests/client/generators/cmake_paths_test.py\nindex d8c0c7763c7..41fba6e1e6c 100644\n--- a/conans/test/unittests/client/generators/cmake_paths_test.py\n+++ b/conans/test/unittests/client/generators/cmake_paths_test.py\n@@ -8,13 +8,36 @@\n from conans.model.env_info import EnvValues\n from 
conans.test.utils.test_files import temp_folder\n from conans.test.utils.tools import TestBufferConanOutput\n+from conans.errors import ConanException\n+\n+\n+class _MockSettings(object):\n+ build_type = None\n+ os = None\n+ os_build = None\n+ fields = []\n+\n+ def __init__(self, build_type=None):\n+ self.build_type = build_type\n+\n+ @property\n+ def compiler(self):\n+ raise ConanException(\"mock: not available\")\n+\n+ def constraint(self, _):\n+ return self\n+\n+ def get_safe(self, _):\n+ return None\n+\n+ def items(self):\n+ return {}\n \n \n class CMakePathsGeneratorTest(unittest.TestCase):\n \n def cmake_vars_unit_test(self):\n- settings_mock = namedtuple(\"Settings\", \"build_type, os, os_build, constraint\")\n- settings = settings_mock(\"Release\", None, None, lambda x: x)\n+ settings = _MockSettings(\"Release\")\n conanfile = ConanFile(TestBufferConanOutput(), None)\n conanfile.initialize(settings, EnvValues())\n tmp = temp_folder()\ndiff --git a/conans/test/unittests/client/generators/cmake_test.py b/conans/test/unittests/client/generators/cmake_test.py\nindex 6490df612fd..98fc66c8cf3 100644\n--- a/conans/test/unittests/client/generators/cmake_test.py\n+++ b/conans/test/unittests/client/generators/cmake_test.py\n@@ -1,11 +1,11 @@\n import os\n import re\n import unittest\n-from collections import namedtuple\n \n from conans.client.conf import default_settings_yml\n from conans.client.generators.cmake import CMakeGenerator\n from conans.client.generators.cmake_multi import CMakeMultiGenerator\n+from conans.errors import ConanException\n from conans.model.build_info import CppInfo\n from conans.model.conan_file import ConanFile\n from conans.model.env_info import EnvValues\n@@ -16,6 +16,29 @@\n from conans.util.files import save\n \n \n+class _MockSettings(object):\n+ build_type = None\n+ os = None\n+ os_build = None\n+ fields = []\n+\n+ def __init__(self, build_type=None):\n+ self.build_type = build_type\n+\n+ @property\n+ def compiler(self):\n+ raise ConanException(\"mock: not available\")\n+\n+ def constraint(self, _):\n+ return self\n+\n+ def get_safe(self, _):\n+ return None\n+\n+ def items(self):\n+ return {}\n+\n+\n class CMakeGeneratorTest(unittest.TestCase):\n \n def _extract_macro(self, name, text):\n@@ -49,10 +72,9 @@ def variables_setup_test(self):\n self.assertIn('set(CONAN_USER_LIB2_MYVAR2 \"myvalue4\")', cmake_lines)\n \n def paths_cmake_multi_user_vars_test(self):\n- settings_mock = namedtuple(\"Settings\", \"build_type, os, os_build, constraint\")\n+ settings_mock = _MockSettings(build_type=\"Release\")\n conanfile = ConanFile(TestBufferConanOutput(), None)\n- conanfile.initialize(settings_mock(\"Release\", None, None,\n- lambda x: x), EnvValues())\n+ conanfile.initialize(settings_mock, EnvValues())\n ref = ConanFileReference.loads(\"MyPkg/0.1@lasote/stables\")\n tmp_folder = temp_folder()\n save(os.path.join(tmp_folder, \"lib\", \"mylib.lib\"), \"\")\n@@ -69,10 +91,9 @@ def paths_cmake_multi_user_vars_test(self):\n self.assertIn('set(CONAN_LIB_DIRS_MYPKG_RELEASE \"root_folder/lib\")', cmake_lines)\n \n def paths_cmake_test(self):\n- settings_mock = namedtuple(\"Settings\", \"build_type, os, os_build, constraint, items\")\n+ settings_mock = _MockSettings()\n conanfile = ConanFile(TestBufferConanOutput(), None)\n- conanfile.initialize(settings_mock(None, None, None, lambda x: x,\n- lambda: {}), EnvValues())\n+ conanfile.initialize(settings_mock, EnvValues())\n ref = ConanFileReference.loads(\"MyPkg/0.1@lasote/stables\")\n tmp_folder = temp_folder()\n 
save(os.path.join(tmp_folder, \"lib\", \"mylib.lib\"), \"\")\n@@ -89,10 +110,9 @@ def paths_cmake_test(self):\n self.assertIn('set(CONAN_LIB_DIRS_MYPKG_RELEASE \"root_folder/lib\")', cmake_lines)\n \n def variables_cmake_multi_user_vars_test(self):\n- settings_mock = namedtuple(\"Settings\", \"build_type, os, os_build, constraint\")\n+ settings_mock = _MockSettings(build_type=\"Release\")\n conanfile = ConanFile(TestBufferConanOutput(), None)\n- conanfile.initialize(settings_mock(\"Release\", None, None, lambda x: x,),\n- EnvValues())\n+ conanfile.initialize(settings_mock, EnvValues())\n conanfile.deps_user_info[\"LIB1\"].myvar = \"myvalue\"\n conanfile.deps_user_info[\"LIB1\"].myvar2 = \"myvalue2\"\n conanfile.deps_user_info[\"lib2\"].MYVAR2 = \"myvalue4\"\n@@ -104,10 +124,9 @@ def variables_cmake_multi_user_vars_test(self):\n self.assertIn('set(CONAN_USER_LIB2_MYVAR2 \"myvalue4\")', cmake_lines)\n \n def variables_cmake_multi_user_vars_escape_test(self):\n- settings_mock = namedtuple(\"Settings\", \"build_type, os, os_build, constraint\")\n+ settings_mock = _MockSettings(build_type=\"Release\")\n conanfile = ConanFile(TestBufferConanOutput(), None)\n- conanfile.initialize(settings_mock(\"Release\", None, None, lambda x: x,),\n- EnvValues())\n+ conanfile.initialize(settings_mock, EnvValues())\n conanfile.deps_user_info[\"FOO\"].myvar = 'my\"value\"'\n conanfile.deps_user_info[\"FOO\"].myvar2 = 'my${value}'\n conanfile.deps_user_info[\"FOO\"].myvar3 = 'my\\\\value'\ndiff --git a/conans/test/unittests/client/profile_loader/__init__.py b/conans/test/unittests/client/profile_loader/__init__.py\nnew file mode 100644\nindex 00000000000..e69de29bb2d\ndiff --git a/conans/test/unittests/client/profile_loader/compiler_cppstd_test.py b/conans/test/unittests/client/profile_loader/compiler_cppstd_test.py\nnew file mode 100644\nindex 00000000000..8781e23d68e\n--- /dev/null\n+++ b/conans/test/unittests/client/profile_loader/compiler_cppstd_test.py\n@@ -0,0 +1,113 @@\n+# coding=utf-8\n+\n+import os\n+import textwrap\n+import unittest\n+\n+import six\n+from jinja2 import Template\n+\n+from conans.client.cache.cache import ClientCache\n+from conans.client.profile_loader import profile_from_args\n+from conans.errors import ConanException\n+from conans.test.utils.deprecation import catch_deprecation_warning\n+from conans.test.utils.test_files import temp_folder\n+from conans.test.utils.tools import TestBufferConanOutput\n+from conans.util.files import save\n+\n+\n+class SettingsCppStdTests(unittest.TestCase):\n+\n+ def setUp(self):\n+ self.tmp_folder = temp_folder()\n+ self.cache = ClientCache(self.tmp_folder, TestBufferConanOutput())\n+\n+ def _save_profile(self, cppstd=None, compiler_cppstd=None, filename=\"default\"):\n+ fullpath = os.path.join(self.cache.profiles_path, filename)\n+\n+ t = Template(textwrap.dedent(\"\"\"\n+ [settings]\n+ os=Macos\n+ arch=x86_64\n+ compiler=apple-clang\n+ {% if compiler_cppstd %}compiler.cppstd={{ compiler_cppstd }}{% endif %}\n+ compiler.libcxx=libc++\n+ compiler.version=10.0\n+ {% if cppstd %}cppstd={{ cppstd }}{% endif %}\n+ \"\"\"))\n+\n+ save(fullpath, t.render(cppstd=cppstd, compiler_cppstd=compiler_cppstd))\n+ return filename\n+\n+ def test_no_value(self):\n+ self._save_profile()\n+\n+ r = profile_from_args([\"default\", ], [], [], [], cwd=self.tmp_folder, cache=self.cache)\n+ r.process_settings(self.cache)\n+ self.assertIn(\"compiler.cppstd\", r.settings)\n+ self.assertNotIn(\"cppstd\", r.settings)\n+\n+ def test_value_none(self):\n+ 
self._save_profile(compiler_cppstd=\"None\")\n+\n+ r = profile_from_args([\"default\", ], [], [], [], cwd=self.tmp_folder, cache=self.cache)\n+ r.process_settings(self.cache)\n+ self.assertEqual(r.settings[\"compiler.cppstd\"], \"gnu98\")\n+ self.assertNotIn(\"cppstd\", r.settings)\n+\n+ def test_value_valid(self):\n+ self._save_profile(compiler_cppstd=\"11\")\n+\n+ r = profile_from_args([\"default\", ], [], [], [], cwd=self.tmp_folder, cache=self.cache)\n+ r.process_settings(self.cache)\n+ self.assertEqual(r.settings[\"compiler.cppstd\"], \"11\")\n+ self.assertNotIn(\"cppstd\", r.settings)\n+\n+ def test_value_invalid(self):\n+ self._save_profile(compiler_cppstd=\"13\")\n+\n+ r = profile_from_args([\"default\", ], [], [], [], cwd=self.tmp_folder, cache=self.cache)\n+ with six.assertRaisesRegex(self, ConanException, \"Invalid setting '13' is not a valid \"\n+ \"'settings.compiler.cppstd' value\"):\n+ r.process_settings(self.cache)\n+ self.assertNotIn(\"cppstd\", r.settings)\n+\n+ def test_value_duplicated_None(self):\n+ self._save_profile(compiler_cppstd=\"None\", cppstd=\"None\")\n+\n+ r = profile_from_args([\"default\", ], [], [], [], cwd=self.tmp_folder, cache=self.cache)\n+ r.process_settings(self.cache)\n+ self.assertEqual(r.settings[\"compiler.cppstd\"], \"gnu98\")\n+ self.assertEqual(r.settings[\"cppstd\"], \"None\")\n+\n+ def test_value_duplicated(self):\n+ self._save_profile(compiler_cppstd=\"11\", cppstd=\"11\")\n+\n+ r = profile_from_args([\"default\", ], [], [], [], cwd=self.tmp_folder, cache=self.cache)\n+ with six.assertRaisesRegex(self, ConanException, \"Do not use settings 'compiler.cppstd'\"\n+ \" together with 'cppstd'. Use only the\"\n+ \" former one.\"):\n+ with catch_deprecation_warning(self):\n+ r.process_settings(self.cache)\n+ self.assertEqual(r.settings[\"compiler.cppstd\"], \"11\")\n+ self.assertEqual(r.settings[\"cppstd\"], \"11\")\n+\n+ def test_value_different(self):\n+ self._save_profile(cppstd=\"14\", compiler_cppstd=\"11\")\n+\n+ r = profile_from_args([\"default\", ], [], [], [], cwd=self.tmp_folder, cache=self.cache)\n+ with six.assertRaisesRegex(self, ConanException, \"Do not use settings 'compiler.cppstd'\"\n+ \" together with 'cppstd'. 
Use only the\"\n+ \" former one\"):\n+ with catch_deprecation_warning(self):\n+ r.process_settings(self.cache)\n+\n+ def test_value_from_cppstd(self):\n+ self._save_profile(cppstd=\"11\")\n+\n+ r = profile_from_args([\"default\", ], [], [], [], cwd=self.tmp_folder, cache=self.cache)\n+ with catch_deprecation_warning(self):\n+ r.process_settings(self.cache)\n+ self.assertNotIn('compiler.cppstd', r.settings)\n+ self.assertEqual(r.settings[\"cppstd\"], \"11\")\n+\ndiff --git a/conans/test/unittests/client/profile_loader_test.py b/conans/test/unittests/client/profile_loader/profile_loader_test.py\nsimilarity index 100%\nrename from conans/test/unittests/client/profile_loader_test.py\nrename to conans/test/unittests/client/profile_loader/profile_loader_test.py\ndiff --git a/conans/test/unittests/model/other_settings_test.py b/conans/test/unittests/model/other_settings_test.py\nindex a77ee1fb21e..8c220371240 100644\n--- a/conans/test/unittests/model/other_settings_test.py\n+++ b/conans/test/unittests/model/other_settings_test.py\n@@ -5,6 +5,7 @@\n from conans.model.ref import PackageReference\n from conans.model.settings import bad_value_msg, undefined_value\n from conans.paths import CONANFILE, CONANINFO\n+from conans.test.utils.deprecation import catch_deprecation_warning\n from conans.test.utils.tools import TestClient\n from conans.util.files import load, save\n \n@@ -46,7 +47,8 @@ class Pkg(ConanFile):\n settings = \"compiler\", \"cppstd\"\n \"\"\"\n client.save({\"conanfile.py\": conanfile})\n- client.run(\"create . Pkg/0.1@lasote/testing\")\n+ with catch_deprecation_warning(self, n=2):\n+ client.run(\"create . Pkg/0.1@lasote/testing\")\n self.assertIn(\"\"\"Configuration:\n [settings]\n compiler=mycomp\ndiff --git a/conans/test/utils/deprecation.py b/conans/test/utils/deprecation.py\nnew file mode 100644\nindex 00000000000..36d1b0bc6cc\n--- /dev/null\n+++ b/conans/test/utils/deprecation.py\n@@ -0,0 +1,14 @@\n+# coding=utf-8\n+\n+import warnings\n+from contextlib import contextmanager\n+\n+\n+@contextmanager\n+def catch_deprecation_warning(test_suite, n=1):\n+ with warnings.catch_warnings(record=True) as w:\n+ warnings.filterwarnings(\"always\", module=\"(.*\\.)?conans\\..*\")\n+ yield\n+ if n:\n+ test_suite.assertEqual(len(w), n)\n+ test_suite.assertTrue(issubclass(w[0].category, UserWarning))\n"
}
|
[
{
"diff_hunk": "@@ -398,14 +404,31 @@ def default_std_matching(self):\n same as specifying None, packages are the same\n \"\"\"\n \n- if self.full_settings.cppstd and \\\n- self.full_settings.compiler and \\\n- self.full_settings.compiler.version:\n+ if self.full_settings.compiler and \\\n+ self.full_settings.compiler.version:\n default = cppstd_default(str(self.full_settings.compiler),\n str(self.full_settings.compiler.version))\n- if default == str(self.full_settings.cppstd):\n+\n+ if str(self.full_settings.cppstd) == default:\n self.settings.cppstd = None\n \n+ if str(self.full_settings.compiler.cppstd) == default:\n+ def remove_cppstd(settings):\n+ try:\n+ settings.compiler.cppstd = None # It's the default, assign None\n+ except AttributeError:\n+ # Settings can be different at the moment of executing this function\n+ pass\n+ finally:\n+ return settings\n+ self._adjust_settings_for_std = remove_cppstd\n+\n def default_std_non_matching(self):\n if self.full_settings.cppstd:\n self.settings.cppstd = self.full_settings.cppstd\n+ self._adjust_settings_for_std = lambda u: u # Do nothing\n+\n+ def _settings_sha(self):\n+ settings = self.settings.copy()",
"line": null,
"original_line": 432,
"original_start_line": null,
"path": "conans/model/info.py",
"start_line": null,
"text": "@user1:\nI suggest:\r\n- Having a default adjust = None\r\n- If adjust = None, no need to copy, no need to call it.\r\n- The adjust would do the copy internally, and return the sha. In this way it is not possible to forget to copy, which would destroy the ``info`` original values.\n\n@author:\nI agree with the `adjust=None`, but I prefer to leave the `copy` and `sha` outside the _adjust_ function, maybe we can add here more _adjustments_ only for computing the ID and this way we can create the pipeline (I'm not sure if it will make sense right now for the `build_settings` and `toolset` ones):\r\n\r\n```python\r\n def _settings_sha(self):\r\n if self._adjust_settings_for_std:\r\n settings = self.settings.copy()\r\n settings = self._adjust_settings_for_std(settings)\r\n settings = self._adjust_other_thing(settings)\r\n settings = self._adjust_...(settings)\r\n settings = ....\r\n return settings.sha\r\n return self.settings.sha\r\n```\r\n\r\nJust tell me if you really think it is better to make the copy inside the function and return the sha."
}
] |
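As a side note on the ``_settings_sha`` exchange quoted above: a purely hypothetical illustration of the discussed approach (default adjustment of ``None``, copying only when an adjustment is registered) could look like the snippet below; the merged patch that follows ultimately drops this mechanism and assigns ``settings.compiler.cppstd`` directly.
```python
# Hypothetical sketch only; names other than `_adjust_settings_for_std` are invented,
# and `settings` is assumed to expose copy() and sha as in the quoted discussion.
class InfoSketch(object):
    def __init__(self, settings):
        self.settings = settings
        self._adjust_settings_for_std = None  # default: no adjustment registered

    def _settings_sha(self):
        if self._adjust_settings_for_std is None:
            return self.settings.sha           # nothing to adjust, no copy needed
        settings = self.settings.copy()        # never mutate the original info values
        settings = self._adjust_settings_for_std(settings)
        return settings.sha
```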
b5f230002a55173708da1432b232defe71e9271f
|
diff --git a/conans/client/build/autotools_environment.py b/conans/client/build/autotools_environment.py
index 737e551481c..f05f4a6320d 100644
--- a/conans/client/build/autotools_environment.py
+++ b/conans/client/build/autotools_environment.py
@@ -8,7 +8,7 @@
format_include_paths, format_libraries,
format_library_paths, libcxx_define, libcxx_flag,
pic_flag, rpath_flags, sysroot_flag)
-from conans.client.build.cppstd_flags import cppstd_flag
+from conans.client.build.cppstd_flags import cppstd_flag, cppstd_from_settings
from conans.client.tools.env import environment_append
from conans.client.tools.oss import OSInfo, args_to_string, cpu_count, cross_building, \
detected_architecture, detected_os, get_gnu_triplet
@@ -41,7 +41,7 @@ def __init__(self, conanfile, win_bash=False, include_rpath_flags=False):
self._compiler = conanfile.settings.get_safe("compiler")
self._compiler_version = conanfile.settings.get_safe("compiler.version")
self._libcxx = conanfile.settings.get_safe("compiler.libcxx")
- self._cppstd = conanfile.settings.get_safe("cppstd")
+ self._cppstd = cppstd_from_settings(conanfile.settings)
# Set the generic objects before mapping to env vars to let the user
# alter some value
diff --git a/conans/client/build/cmake_flags.py b/conans/client/build/cmake_flags.py
index 38c41f40abd..3ab52944bd5 100644
--- a/conans/client/build/cmake_flags.py
+++ b/conans/client/build/cmake_flags.py
@@ -3,7 +3,7 @@
from conans.client import tools
from conans.client.build.compiler_flags import architecture_flag, parallel_compiler_cl_flag
-from conans.client.build.cppstd_flags import cppstd_flag
+from conans.client.build.cppstd_flags import cppstd_flag, cppstd_from_settings
from conans.client.tools import cross_building
from conans.client.tools.oss import get_cross_building_settings
from conans.errors import ConanException
@@ -130,7 +130,7 @@ def _ss(self, setname):
return self._conanfile.settings.get_safe(setname)
def _get_cpp_standard_vars(self):
- cppstd = self._ss("cppstd")
+ cppstd = cppstd_from_settings(self._conanfile.settings)
compiler = self._ss("compiler")
compiler_version = self._ss("compiler.version")
diff --git a/conans/client/build/cppstd_flags.py b/conans/client/build/cppstd_flags.py
index 44e5bde77cd..3087d85ffb2 100644
--- a/conans/client/build/cppstd_flags.py
+++ b/conans/client/build/cppstd_flags.py
@@ -1,6 +1,27 @@
+import warnings
+
+from conans.errors import ConanException
from conans.model.version import Version
+def cppstd_from_settings(settings):
+ cppstd = settings.get_safe("cppstd")
+ compiler_cppstd = settings.get_safe("compiler.cppstd")
+
+ if not cppstd and not compiler_cppstd:
+ return None
+
+ if cppstd and compiler_cppstd:
+ # Both should never arrive with a value to build_helpers
+ warnings.warn("Both settings, 'cppstd' and 'compiler.cppstd', should never arrive"
+ " with values to build_helpers")
+ if cppstd != compiler_cppstd:
+ raise ConanException("Can't decide value for C++ standard, settings mismatch: "
+ "'cppstd={}', 'compiler.cppstd='".format(cppstd, compiler_cppstd))
+
+ return compiler_cppstd or cppstd
+
+
def cppstd_flag(compiler, compiler_version, cppstd):
if not compiler or not compiler_version or not cppstd:
return ""
diff --git a/conans/client/build/meson.py b/conans/client/build/meson.py
index 2f3ab0b2ca3..9183cd64d79 100644
--- a/conans/client/build/meson.py
+++ b/conans/client/build/meson.py
@@ -2,9 +2,10 @@
import subprocess
from conans.client import defs_to_string, join_arguments, tools
+from conans.client.build.cppstd_flags import cppstd_from_settings
from conans.client.tools.oss import args_to_string
from conans.errors import ConanException
-from conans.model.build_info import DEFAULT_BIN, DEFAULT_INCLUDE, DEFAULT_LIB, DEFAULT_SHARE
+from conans.model.build_info import DEFAULT_BIN, DEFAULT_INCLUDE, DEFAULT_LIB
from conans.model.version import Version
from conans.util.files import decode_text, get_abs_path, mkdir
@@ -38,7 +39,7 @@ def __init__(self, conanfile, backend=None, build_type=None):
self.options['includedir'] = DEFAULT_INCLUDE
# C++ standard
- cppstd = self._ss("cppstd")
+ cppstd = cppstd_from_settings(self._conanfile.settings)
cppstd_conan2meson = {
None: 'none',
'98': 'c++03', 'gnu98': 'gnu++03',
diff --git a/conans/client/build/visual_environment.py b/conans/client/build/visual_environment.py
index 8f715f22793..d50eb08598d 100644
--- a/conans/client/build/visual_environment.py
+++ b/conans/client/build/visual_environment.py
@@ -3,7 +3,7 @@
from conans.client.build.compiler_flags import build_type_define, build_type_flags, format_defines, \
include_path_option, parallel_compiler_cl_flag, visual_runtime
-from conans.client.build.cppstd_flags import cppstd_flag
+from conans.client.build.cppstd_flags import cppstd_flag, cppstd_from_settings
class VisualStudioBuildEnvironment(object):
@@ -141,11 +141,11 @@ def vs_build_type_flags(settings, with_flags=True):
def vs_std_cpp(settings):
- if settings.get_safe("compiler") == "Visual Studio" and \
- settings.get_safe("cppstd"):
+ cppstd = cppstd_from_settings(settings)
+ if settings.get_safe("compiler") == "Visual Studio" and cppstd:
flag = cppstd_flag(settings.get_safe("compiler"),
settings.get_safe("compiler.version"),
- settings.get_safe("cppstd"))
+ cppstd)
return flag
return None
diff --git a/conans/client/conan_api.py b/conans/client/conan_api.py
index cfe27092a59..af422b1ce13 100644
--- a/conans/client/conan_api.py
+++ b/conans/client/conan_api.py
@@ -1175,7 +1175,7 @@ def get_graph_info(profile_names, settings, options, env, cwd, install_folder, c
% install_folder)
graph_info = None
- if profile_names or settings or options or profile_names or env or not graph_info:
+ if profile_names or settings or options or env or not graph_info:
if graph_info:
# FIXME: Convert to Exception in Conan 2.0
output.warn("Settings, options, env or profile specified. "
diff --git a/conans/client/conf/__init__.py b/conans/client/conf/__init__.py
index 52eaf33d3ea..fc8e3fa2ef2 100644
--- a/conans/client/conf/__init__.py
+++ b/conans/client/conf/__init__.py
@@ -63,6 +63,7 @@
libcxx: [libstdc++, libstdc++11]
threads: [None, posix, win32] # Windows MinGW
exception: [None, dwarf2, sjlj, seh] # Windows MinGW
+ cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]
Visual Studio:
runtime: [MD, MT, MTd, MDd]
version: ["8", "9", "10", "11", "12", "14", "15", "16"]
@@ -70,17 +71,20 @@
v140, v140_xp, v140_clang_c2, LLVM-vs2012, LLVM-vs2012_xp,
LLVM-vs2013, LLVM-vs2013_xp, LLVM-vs2014, LLVM-vs2014_xp,
LLVM-vs2017, LLVM-vs2017_xp, v141, v141_xp, v141_clang_c2, v142]
+ cppstd: [None, 14, 17, 20]
clang:
version: ["3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "3.9", "4.0",
"5.0", "6.0", "7.0",
"8"]
libcxx: [libstdc++, libstdc++11, libc++]
+ cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]
apple-clang:
version: ["5.0", "5.1", "6.0", "6.1", "7.0", "7.3", "8.0", "8.1", "9.0", "9.1", "10.0"]
libcxx: [libstdc++, libc++]
+ cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]
build_type: [None, Debug, Release, RelWithDebInfo, MinSizeRel]
-cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]
+cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20] # Deprecated, use compiler.cppstd
"""
default_client_conf = """
diff --git a/conans/client/generators/compiler_args.py b/conans/client/generators/compiler_args.py
index 0bfa96cfc91..a4f78796617 100644
--- a/conans/client/generators/compiler_args.py
+++ b/conans/client/generators/compiler_args.py
@@ -4,7 +4,7 @@
format_library_paths, libcxx_define, libcxx_flag,
rpath_flags, sysroot_flag,
visual_linker_option_separator, visual_runtime)
-from conans.client.build.cppstd_flags import cppstd_flag
+from conans.client.build.cppstd_flags import cppstd_flag, cppstd_from_settings
from conans.model import Generator
from conans.paths import BUILD_INFO_COMPILER_ARGS
@@ -63,9 +63,10 @@ def content(self):
flags.extend(self._deps_build_info.sharedlinkflags)
flags.extend(self._deps_build_info.exelinkflags)
flags.extend(self._libcxx_flags())
+ cppstd = cppstd_from_settings(self.conanfile.settings)
flags.append(cppstd_flag(self.conanfile.settings.get_safe("compiler"),
self.conanfile.settings.get_safe("compiler.version"),
- self.conanfile.settings.get_safe("cppstd")))
+ cppstd))
sysrf = sysroot_flag(self._deps_build_info.sysroot, compiler=self.compiler)
if sysrf:
flags.append(sysrf)
diff --git a/conans/client/settings_preprocessor.py b/conans/client/settings_preprocessor.py
index 73b0dcc3c81..f5c36fd1658 100644
--- a/conans/client/settings_preprocessor.py
+++ b/conans/client/settings_preprocessor.py
@@ -1,30 +1,54 @@
+import warnings
+
from conans.client.build.cppstd_flags import cppstd_flag
from conans.errors import ConanException
from conans.util.log import logger
def preprocess(settings):
- fill_runtime(settings)
- check_cppstd(settings)
+ _fill_runtime(settings)
+ _check_cppstd(settings)
-def check_cppstd(settings):
+def _check_cppstd(settings):
compiler = settings.get_safe("compiler")
compiler_version = settings.get_safe("compiler.version")
cppstd = settings.get_safe("cppstd")
- if not cppstd or compiler not in ("gcc", "clang", "apple-clang", "Visual Studio"):
+ compiler_cppstd = settings.get_safe("compiler.cppstd")
+
+ if not cppstd and not compiler_cppstd:
+ return
+
+ # Checks: one or the other, but not both
+ if cppstd and compiler_cppstd:
+ raise ConanException("Do not use settings 'compiler.cppstd' together with 'cppstd'."
+ " Use only the former one.")
+
+ if cppstd:
+ warnings.warn("Setting 'cppstd' is deprecated in favor of 'compiler.cppstd'")
+
+ if compiler not in ("gcc", "clang", "apple-clang", "Visual Studio"):
return
- cpp_values = settings.cppstd.values_range
- available = [v for v in cpp_values if cppstd_flag(compiler, compiler_version, v)]
- if str(cppstd) not in available:
- raise ConanException("The specified 'cppstd=%s' is not available "
- "for '%s %s'. Possible values are %s'" % (cppstd,
- compiler,
- compiler_version,
- available))
+
+ # Check that we have a flag available for that value of the C++ Standard
+ def check_flag_available(values_range, value, setting_id):
+ available = [v for v in values_range if cppstd_flag(compiler, compiler_version, v)]
+ if str(value) not in available:
+ raise ConanException("The specified '%s=%s' is not available "
+ "for '%s %s'. Possible values are %s'" % (setting_id,
+ value,
+ compiler,
+ compiler_version,
+ available))
+
+ if cppstd:
+ check_flag_available(settings.cppstd.values_range, cppstd, "cppstd")
+ else:
+ check_flag_available(settings.compiler.cppstd.values_range,
+ compiler_cppstd, "compiler.cppstd")
-def fill_runtime(settings):
+def _fill_runtime(settings):
try:
if settings.compiler == "Visual Studio":
if settings.get_safe("compiler.runtime") is None:
diff --git a/conans/client/tools/system_pm.py b/conans/client/tools/system_pm.py
index d3599758427..7c66f0ceece 100644
--- a/conans/client/tools/system_pm.py
+++ b/conans/client/tools/system_pm.py
@@ -148,8 +148,8 @@ def update(self):
pass
def install(self, package_name):
- self._output.warn("Only available for linux with apt-get, yum, or pacman or OSX with brew or "
- "FreeBSD with pkg or Solaris with pkgutil")
+ self._output.warn("Only available for linux with apt-get, yum, or pacman or OSX with brew or"
+ " FreeBSD with pkg or Solaris with pkgutil")
def installed(self, package_name):
return False
diff --git a/conans/model/conan_file.py b/conans/model/conan_file.py
index 848ea67b279..fbb259c4994 100644
--- a/conans/model/conan_file.py
+++ b/conans/model/conan_file.py
@@ -132,6 +132,7 @@ def initialize(self, settings, env):
self.options = create_options(self)
self.requires = create_requirements(self)
self.settings = create_settings(self, settings)
+
try:
if self.settings.os_build and self.settings.os:
self.output.writeln("*"*60, front=Color.BRIGHT_RED)
@@ -145,6 +146,10 @@ def initialize(self, settings, env):
except ConanException:
pass
+ if 'cppstd' in self.settings.fields:
+ self.output.warn("Setting 'cppstd' is deprecated in favor of 'compiler.cppstd',"
+ " please update your recipe.")
+
# needed variables to pack the project
self.cpp_info = None # Will be initialized at processing time
self.deps_cpp_info = DepsCppInfo()
diff --git a/conans/model/info.py b/conans/model/info.py
index 475034bfa47..def41ffa824 100644
--- a/conans/model/info.py
+++ b/conans/model/info.py
@@ -398,14 +398,20 @@ def default_std_matching(self):
same as specifying None, packages are the same
"""
- if self.full_settings.cppstd and \
- self.full_settings.compiler and \
- self.full_settings.compiler.version:
+ if self.full_settings.compiler and \
+ self.full_settings.compiler.version:
default = cppstd_default(str(self.full_settings.compiler),
str(self.full_settings.compiler.version))
- if default == str(self.full_settings.cppstd):
+
+ if str(self.full_settings.cppstd) == default:
self.settings.cppstd = None
+ if str(self.full_settings.compiler.cppstd) == default:
+ self.settings.compiler.cppstd = None
+
def default_std_non_matching(self):
if self.full_settings.cppstd:
self.settings.cppstd = self.full_settings.cppstd
+
+ if self.full_settings.compiler.cppstd:
+ self.settings.compiler.cppstd = self.full_settings.compiler.cppstd
diff --git a/conans/model/profile.py b/conans/model/profile.py
index 50e06475e59..f293949175a 100644
--- a/conans/model/profile.py
+++ b/conans/model/profile.py
@@ -1,10 +1,11 @@
import copy
from collections import OrderedDict, defaultdict
+from conans.client import settings_preprocessor
+from conans.errors import ConanException
from conans.model.env_info import EnvValues
from conans.model.options import OptionsValues
from conans.model.values import Values
-from conans.client import settings_preprocessor
class Profile(object):
@@ -29,6 +30,19 @@ def process_settings(self, cache, preprocess=True):
# FIXME: Simplify the values.as_list()
self.settings = OrderedDict(self.processed_settings.values.as_list())
+ # Preprocess also scoped settings
+ for pkg, pkg_settings in self.package_settings.items():
+ pkg_profile = Profile()
+ pkg_profile.settings = self.settings
+ pkg_profile.update_settings(pkg_settings)
+ try:
+ pkg_profile.process_settings(cache=cache, preprocess=True)
+ except Exception as e:
+ pkg_profile = ["{}={}".format(k, v) for k, v in pkg_profile.settings.items()]
+ raise ConanException("Error in resulting settings for package"
+ " '{}': {}\n{}".format(pkg, e, '\n'.join(pkg_profile)))
+ # TODO: Assign the _validated_ settings and do not compute again
+
@property
def package_settings_values(self):
result = {}
diff --git a/conans/test/functional/build_helpers/cmake_flags_test.py b/conans/test/functional/build_helpers/cmake_flags_test.py
index 4d470d026ac..ac88862f9f1 100644
--- a/conans/test/functional/build_helpers/cmake_flags_test.py
+++ b/conans/test/functional/build_helpers/cmake_flags_test.py
@@ -6,9 +6,9 @@
from nose.plugins.attrib import attr
from parameterized.parameterized import parameterized
-from conans import load
from conans.client.build.cmake import CMake
from conans.model.version import Version
+from conans.test.utils.deprecation import catch_deprecation_warning
from conans.test.utils.tools import TestClient
conanfile_py = """
@@ -322,19 +322,22 @@ def build(self):
"""})
if platform.system() != "Windows":
- client.run("install . --install-folder=build -s cppstd=gnu98")
+ with catch_deprecation_warning(self):
+ client.run("install . --install-folder=build -s cppstd=gnu98")
client.run("build . --build-folder=build", assert_error=True)
self.assertIn("Error in build()", client.out)
# Now specify c++14
- client.run("install . --install-folder=build -s cppstd=gnu14")
+ with catch_deprecation_warning(self):
+ client.run("install . --install-folder=build -s cppstd=gnu14")
client.run("build . --build-folder=build")
self.assertIn("CPP STANDARD: 14 WITH EXTENSIONS ON", client.out)
libname = "libmylib.a" if platform.system() != "Windows" else "mylib.lib"
libpath = os.path.join(client.current_folder, "build", "lib", libname)
self.assertTrue(os.path.exists(libpath))
- client.run("install . --install-folder=build -s cppstd=14")
+ with catch_deprecation_warning(self):
+ client.run("install . --install-folder=build -s cppstd=14")
client.run("build . --build-folder=build")
self.assertIn("CPP STANDARD: 14 WITH EXTENSIONS OFF", client.out)
self.assertNotIn("Conan setting CXX_FLAGS flags", client.out)
@@ -375,15 +378,17 @@ def conan_set_std_branch():
cmake_version = CMake.get_version()
return cmake_version < Version("3.12")
- client.run("create . user/channel -s cppstd=gnu20 -s compiler=gcc -s compiler.version=8 "
- "-s compiler.libcxx=libstdc++11")
+ with catch_deprecation_warning(self):
+ client.run("create . user/channel -s cppstd=gnu20 -s compiler=gcc -s compiler.version=8 "
+ "-s compiler.libcxx=libstdc++11")
if conan_set_std_branch():
self.assertIn("Conan setting CXX_FLAGS flags: -std=gnu++2a", client.out)
else:
self.assertIn("Conan setting CPP STANDARD: 20 WITH EXTENSIONS ON", client.out)
- client.run("create . user/channel -s cppstd=20 -s compiler=gcc -s compiler.version=8 "
- "-s compiler.libcxx=libstdc++11")
+ with catch_deprecation_warning(self):
+ client.run("create . user/channel -s cppstd=20 -s compiler=gcc -s compiler.version=8 "
+ "-s compiler.libcxx=libstdc++11")
if conan_set_std_branch():
self.assertIn("Conan setting CXX_FLAGS flags: -std=c++2a", client.out)
else:
diff --git a/conans/test/functional/build_helpers/msbuild_test.py b/conans/test/functional/build_helpers/msbuild_test.py
index 62a062f8735..086152d443f 100644
--- a/conans/test/functional/build_helpers/msbuild_test.py
+++ b/conans/test/functional/build_helpers/msbuild_test.py
@@ -10,6 +10,7 @@
from conans.paths import CONANFILE
from conans.test.utils.tools import TestClient
from conans.test.utils.visual_project_files import get_vs_project_files
+from conans.test.utils.deprecation import catch_deprecation_warning
class MSBuildTest(unittest.TestCase):
@@ -41,10 +42,12 @@ def package(self):
files[CONANFILE] = conan_build_vs
client.save(files)
- client.run('create . Hello/1.2.1@lasote/stable -s cppstd=11 -s '
- 'compiler="Visual Studio" -s compiler.version=14', assert_error=True)
- client.run('create . Hello/1.2.1@lasote/stable -s cppstd=17 '
- '-s compiler="Visual Studio" -s compiler.version=14')
+ with catch_deprecation_warning(self):
+ client.run('create . Hello/1.2.1@lasote/stable -s cppstd=11 -s '
+ 'compiler="Visual Studio" -s compiler.version=14', assert_error=True)
+ with catch_deprecation_warning(self):
+ client.run('create . Hello/1.2.1@lasote/stable -s cppstd=17 '
+ '-s compiler="Visual Studio" -s compiler.version=14')
self.assertIn("Packaged 1 '.exe' file: MyProject.exe", client.out)
files = get_vs_project_files()
diff --git a/conans/test/functional/cppstd/compiler_cppstd_test.py b/conans/test/functional/cppstd/compiler_cppstd_test.py
new file mode 100644
index 00000000000..5994274ccef
--- /dev/null
+++ b/conans/test/functional/cppstd/compiler_cppstd_test.py
@@ -0,0 +1,158 @@
+# coding=utf-8
+
+import os
+import textwrap
+import unittest
+
+from parameterized.parameterized import parameterized_class
+
+from conans.client.tools import environment_append, save
+from conans.test.utils.deprecation import catch_deprecation_warning
+from conans.test.utils.test_files import temp_folder
+from conans.test.utils.tools import TestClient
+
+
+@parameterized_class([{"recipe_cppstd": True}, {"recipe_cppstd": False}, ])
+class SettingsCppStdScopedPackageTests(unittest.TestCase):
+ # Validation of scoped settings is delayed until graph computation, a conanfile can
+ # declare a different set of settings, so we should wait until then to validate it.
+
+ default_profile = textwrap.dedent("""
+ [settings]
+ os=Linux
+ arch=x86
+ compiler=gcc
+ compiler.version=7
+ compiler.libcxx=libstdc++11
+ """)
+
+ def run(self, *args, **kwargs):
+ default_profile_path = os.path.join(temp_folder(), "default.profile")
+ save(default_profile_path, self.default_profile)
+ with environment_append({"CONAN_DEFAULT_PROFILE_PATH": default_profile_path}):
+ unittest.TestCase.run(self, *args, **kwargs)
+
+ def setUp(self):
+ self.tmp_folder = temp_folder()
+ self.t = TestClient(base_folder=self.tmp_folder)
+
+ settings = ["os", "compiler", "build_type", "arch"]
+ if self.recipe_cppstd:
+ settings += ["cppstd"]
+
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+
+ class Lib(ConanFile):
+ settings = "{}"
+ """.format('", "'.join(settings)))
+ self.t.save({"conanfile.py": conanfile})
+
+ def test_value_invalid(self):
+ self.t.run("create . hh/0.1@user/channel -shh:compiler=apple-clang -shh:compiler.cppstd=144",
+ assert_error=True)
+ self.assertIn("Invalid setting '144' is not a valid 'settings.compiler.cppstd' value",
+ self.t.out)
+
+ def test_value_different_with_scoped_setting(self):
+ self.t.run("create . hh/0.1@user/channel"
+ " -s hh:cppstd=11"
+ " -s hh:compiler=gcc"
+ " -s hh:compiler.cppstd=14", assert_error=True)
+ self.assertIn("ERROR: Error in resulting settings for package 'hh': Do not use settings"
+ " 'compiler.cppstd' together with 'cppstd'", self.t.out)
+
+ def test_value_different_with_general_setting(self):
+ deprecation_number = 1 if self.recipe_cppstd else 0
+ with catch_deprecation_warning(self, n=deprecation_number):
+ self.t.run("create . hh/0.1@user/channel"
+ " -s cppstd=17"
+ " -s hh:compiler=gcc"
+ " -s hh:compiler.cppstd=14", assert_error=True)
+ self.assertIn("ERROR: Error in resulting settings for package 'hh': Do not use settings"
+ " 'compiler.cppstd' together with 'cppstd'", self.t.out)
+
+ def test_conanfile_without_compiler(self):
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+
+ class Lib(ConanFile):
+ settings = "os", "arch"
+ """)
+ t = TestClient(base_folder=temp_folder())
+ t.save({'conanfile.py': conanfile})
+
+ with catch_deprecation_warning(self):
+ # No mismatch, because settings for this conanfile does not include `compiler`
+ t.run("create . hh/0.1@user/channel"
+ " -s cppstd=17"
+ " -s hh:compiler=gcc"
+ " -s hh:compiler.cppstd=14", assert_error=True)
+ self.assertIn("ERROR: Error in resulting settings for package 'hh': Do not use settings"
+ " 'compiler.cppstd' together with 'cppstd'", t.out)
+
+ def test_conanfile_without_compiler_but_cppstd(self):
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+
+ class Lib(ConanFile):
+ settings = "os", "arch", "cppstd"
+
+ def configure(self):
+ self.output.info(">>> cppstd: {}".format(self.settings.cppstd))
+ """)
+ t = TestClient(base_folder=temp_folder())
+ t.save({'conanfile.py': conanfile}, clean_first=True)
+
+ with catch_deprecation_warning(self):
+ # No mismatch, because settings for this conanfile does not include `compiler`
+ t.run("create . hh/0.1@user/channel"
+ " -s cppstd=17"
+ " -s hh:compiler=gcc"
+ " -s hh:compiler.cppstd=14", assert_error=True)
+ self.assertIn("ERROR: Error in resulting settings for package 'hh': Do not use settings"
+ " 'compiler.cppstd' together with 'cppstd'", t.out)
+
+
+class UseCompilerCppStdSettingTests(unittest.TestCase):
+
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+
+ class Lib(ConanFile):
+ settings = "cppstd", "os", "compiler", "arch", "build_type"
+
+ def configure(self):
+ self.output.info(">>> cppstd: {}".format(self.settings.cppstd))
+ self.output.info(">>> compiler.cppstd: {}".format(self.settings.compiler.cppstd))
+ """)
+
+ def setUp(self):
+ self.t = TestClient()
+ self.t.save({'conanfile.py': self.conanfile})
+
+ def test_user_notice(self):
+ self.t.run("info .")
+ self.assertIn("WARN: Setting 'cppstd' is deprecated in favor of 'compiler.cppstd',"
+ " please update your recipe.", self.t.out)
+
+ def test_only_cppstd(self):
+ with catch_deprecation_warning(self):
+ self.t.run("info . -s cppstd=14")
+ self.assertNotIn(">>> compiler.cppstd: 14", self.t.out)
+ self.assertIn(">>> cppstd: 14", self.t.out)
+ self.assertIn(">>> compiler.cppstd: None", self.t.out)
+
+ def test_only_compiler_cppstd(self):
+ """ settings.cppstd is available only if declared explicitly (otherwise it is deprecated) """
+ self.t.run("info . -s compiler.cppstd=14")
+ self.assertNotIn(">>> cppstd: 14", self.t.out)
+ self.assertIn(">>> cppstd: None", self.t.out)
+ self.assertIn(">>> compiler.cppstd: 14", self.t.out)
+
+ def test_both(self):
+ settings_str = "-s cppstd=14 -s compiler.cppstd=14"
+ self.t.run("info . {}".format(settings_str), assert_error=True)
+ self.assertIn("Do not use settings 'compiler.cppstd' together with 'cppstd'."
+ " Use only the former one.", self.t.out)
+
diff --git a/conans/test/functional/cppstd/default_cppstd_test.py b/conans/test/functional/cppstd/default_cppstd_test.py
index a1188f46f05..c3abaa50be7 100644
--- a/conans/test/functional/cppstd/default_cppstd_test.py
+++ b/conans/test/functional/cppstd/default_cppstd_test.py
@@ -7,13 +7,12 @@
from conans.client.build.cppstd_flags import cppstd_default
from conans.client.tools import environment_append, save, load
+from conans.test.utils.deprecation import catch_deprecation_warning
from conans.test.utils.test_files import temp_folder
from conans.test.utils.tools import TestClient
class DefaultCppTestCase(unittest.TestCase):
- # Validate package ID computed taking into account different cppstd scenarios
-
compiler = "gcc"
compiler_version = "7"
@@ -34,14 +33,15 @@ class Library(ConanFile):
def configure(self):
cppstd = self.settings.get_safe("cppstd")
+ compiler_cppstd = self.settings.get_safe("compiler.cppstd")
self.output.info(">>>> settings: {{}}".format(self.settings.fields))
self.output.info(">>>> cppstd: {{}}".format(cppstd))
+ self.output.info(">>>> compiler.cppstd: {{}}".format(compiler_cppstd))
""")
id_default = "d17189cfe7b11efbc5d701339a32d203745f8b81"
def run(self, *args, **kwargs):
- # Create and use a different default profile
default_profile_path = os.path.join(temp_folder(), "default.profile")
save(default_profile_path, self.default_profile)
with environment_append({"CONAN_DEFAULT_PROFILE_PATH": default_profile_path}):
@@ -54,6 +54,7 @@ def setUp(self):
self.assertEqual(target_id, self.id_default)
self.assertIn(">>>> settings: ['compiler', 'os']", output)
self.assertIn(">>>> cppstd: None", output)
+ self.assertIn(">>>> compiler.cppstd: None", output)
def _get_id(self, with_cppstd, settings_values=None):
# Create the conanfile with corresponding settings
@@ -75,14 +76,22 @@ def _get_id(self, with_cppstd, settings_values=None):
data = json.loads(load(json_file))
self.assertEqual(len(data), 1)
- # Return: ID, output
+ # Return ID, output
return data[0]["id"], info_output
+
+class SettingsCppStdTests(DefaultCppTestCase):
+ """
+ Validate package ID computed taking into account different scenarios for 'cppstd'. The ID
+ should be the same if the setting is not provided and if it has the default value.
+ """
+
def test_no_value(self):
# No value passed for setting 'cppstd'
id_with, output = self._get_id(with_cppstd=True) # TODO: Should raise?
self.assertIn(">>>> settings: ['compiler', 'cppstd', 'os']", output)
self.assertIn(">>>> cppstd: None", output)
+ self.assertIn(">>>> compiler.cppstd: None", output)
self.assertEqual(self.id_default, id_with)
def test_value_none(self):
@@ -90,20 +99,84 @@ def test_value_none(self):
id_with, output = self._get_id(with_cppstd=True, settings_values={"cppstd": "None"})
self.assertIn(">>>> settings: ['compiler', 'cppstd', 'os']", output)
self.assertIn(">>>> cppstd: None", output)
+ self.assertIn(">>>> compiler.cppstd: None", output)
self.assertEqual(self.id_default, id_with)
def test_value_default(self):
# Explicit value (equals to default) passed to setting 'cppstd'
cppstd = cppstd_default(self.compiler, self.compiler_version)
- id_with, output = self._get_id(with_cppstd=True, settings_values={"cppstd": cppstd})
+ with catch_deprecation_warning(self):
+ id_with, output = self._get_id(with_cppstd=True, settings_values={"cppstd": cppstd})
self.assertIn(">>>> settings: ['compiler', 'cppstd', 'os']", output)
self.assertIn(">>>> cppstd: gnu14", output)
+ self.assertIn(">>>> compiler.cppstd: None", output)
self.assertEqual(self.id_default, id_with)
- def test_value_other(self):
+ def test_value_non_default(self):
# Explicit value (not the default) passed to setting 'cppstd'
- id_with, output = self._get_id(with_cppstd=True, settings_values={"cppstd": "14"})
+ with catch_deprecation_warning(self):
+ id_with, output = self._get_id(with_cppstd=True, settings_values={"cppstd": "14"})
self.assertIn(">>>> settings: ['compiler', 'cppstd', 'os']", output)
self.assertIn(">>>> cppstd: 14", output)
+ self.assertIn(">>>> compiler.cppstd: None", output)
self.assertNotEqual(self.id_default, id_with)
+
+class SettingsCompilerCppStdTests(DefaultCppTestCase):
+ """
+ Validate package ID computed taking into account different scenarios for 'compiler.cppstd'. The
+ ID has to be the same if the setting is not informed and if it has the default value, also
+ these values should be the same as the ones using the 'cppstd' approach.
+ """
+
+ def _get_id(self, with_cppstd=False, settings_values=None):
+ assert not with_cppstd
+ return super(SettingsCompilerCppStdTests, self)._get_id(with_cppstd=False,
+ settings_values=settings_values)
+
+ def test_value_none(self):
+ # Explicit value 'None' passed to setting 'cppstd'
+ id_with, output = self._get_id(settings_values={"compiler.cppstd": "None"})
+ self.assertIn(">>>> settings: ['compiler', 'os']", output)
+ self.assertIn(">>>> cppstd: None", output)
+ self.assertIn(">>>> compiler.cppstd: None", output)
+ self.assertEqual(self.id_default, id_with)
+
+ def test_value_default(self):
+ # Explicit value (equals to default) passed to setting 'compiler.cppstd'
+ cppstd = cppstd_default(self.compiler, self.compiler_version)
+ id_with, output = self._get_id(settings_values={"compiler.cppstd": cppstd})
+ self.assertIn(">>>> settings: ['compiler', 'os']", output)
+ self.assertIn(">>>> cppstd: None", output)
+ self.assertIn(">>>> compiler.cppstd: gnu14", output)
+ self.assertEqual(self.id_default, id_with)
+
+ def test_value_other(self):
+ # Explicit value (not the default) passed to setting 'cppstd'
+ id_with, output = self._get_id(settings_values={"compiler.cppstd": "14"})
+ self.assertIn(">>>> settings: ['compiler', 'os']", output)
+ self.assertIn(">>>> cppstd: None", output)
+ self.assertIn(">>>> compiler.cppstd: 14", output)
+ self.assertNotEqual(self.id_default, id_with)
+
+
+class SettingsCompareCppStdApproaches(DefaultCppTestCase):
+ """
+ Check scenario using 'cppstd' and 'compiler.cppstd', if those are given the same value
+ (but different from the default one) then the ID for the packages is not required to be
+ the same.
+ """
+
+ def test_cppstd_non_defaults(self):
+ cppstd_value = "14" # Not the default
+ with catch_deprecation_warning(self):
+ id_with_old, _ = self._get_id(with_cppstd=True, settings_values={"cppstd": cppstd_value})
+ id_with_new, _ = self._get_id(with_cppstd=False,
+ settings_values={'compiler.cppstd': cppstd_value})
+
+ # Those are different from the target one (ID using default value or None)
+ self.assertNotEqual(self.id_default, id_with_old)
+ self.assertNotEqual(self.id_default, id_with_new)
+
+ # They are different between them
+ self.assertNotEqual(id_with_new, id_with_old)
diff --git a/conans/test/integration/basic_build_test.py b/conans/test/integration/basic_build_test.py
index 6878df08688..67826ce96b1 100644
--- a/conans/test/integration/basic_build_test.py
+++ b/conans/test/integration/basic_build_test.py
@@ -21,7 +21,7 @@ def build_cmake_test(self):
build(self, cmd, static, pure_c, use_cmake=True, lang=lang)
def build_default_test(self):
- "build default (gcc in nix, VS in win)"
+ """ build default (gcc in nix, VS in win) """
if platform.system() == "SunOS":
return # If is using sun-cc the gcc generator doesn't work
diff --git a/conans/test/integration/cppstd_test.py b/conans/test/integration/cppstd_test.py
index a0f63efda2b..fab06b81aa7 100644
--- a/conans/test/integration/cppstd_test.py
+++ b/conans/test/integration/cppstd_test.py
@@ -1,6 +1,7 @@
import unittest
from conans.paths import CONANFILE
+from conans.test.utils.deprecation import catch_deprecation_warning
from conans.test.utils.tools import TestClient
@@ -18,15 +19,17 @@ class TestConan(ConanFile):
"""
client.save({CONANFILE: conanfile})
- client.run('create . user/testing -s compiler="gcc" '
- '-s compiler.libcxx="libstdc++11" '
- '-s compiler.version="4.6" -s cppstd=17', assert_error=True)
+ with catch_deprecation_warning(self):
+ client.run('create . user/testing -s compiler="gcc" '
+ '-s compiler.libcxx="libstdc++11" '
+ '-s compiler.version="4.6" -s cppstd=17', assert_error=True)
self.assertIn("The specified 'cppstd=17' is not available for 'gcc 4.6'", client.out)
self.assertIn("Possible values are ['11', '98', 'gnu11', 'gnu98']", client.out)
- client.run('create . user/testing -s compiler="gcc" -s compiler.libcxx="libstdc++11" '
- '-s compiler.version="6.3" -s cppstd=17')
+ with catch_deprecation_warning(self):
+ client.run('create . user/testing -s compiler="gcc" -s compiler.libcxx="libstdc++11" '
+ '-s compiler.version="6.3" -s cppstd=17')
def gcc_8_std_20_test(self):
client = TestClient()
@@ -40,9 +43,10 @@ class TestConan(ConanFile):
"""
client.save({CONANFILE: conanfile})
- client.run('create . user/testing -s compiler="gcc" '
- '-s compiler.libcxx="libstdc++11" '
- '-s compiler.version="8" -s cppstd=20')
+ with catch_deprecation_warning(self):
+ client.run('create . user/testing -s compiler="gcc" '
+ '-s compiler.libcxx="libstdc++11" '
+ '-s compiler.version="8" -s cppstd=20')
def set_default_package_id_test(self):
client = TestClient()
@@ -56,7 +60,8 @@ class TestConan(ConanFile):
def build(self):
self.output.warn("BUILDING!")
"""
- client.save({CONANFILE: conanfile % ""}) # Without the setting
+ # Without the setting
+ client.save({CONANFILE: conanfile % ""})
client.run('create . user/testing -s compiler="gcc" -s compiler.version="7.1" '
'-s compiler.libcxx="libstdc++" '
'--build missing')
@@ -64,10 +69,12 @@ def build(self):
# Add the setting but with the default value, should not build again
client.save({CONANFILE: conanfile % '"cppstd"'}) # With the setting
- client.run('create . user/testing -s compiler="gcc" -s compiler.version="7.1" '
- '-s compiler.libcxx="libstdc++" '
- '-s cppstd=gnu14 '
- '--build missing')
+ with catch_deprecation_warning(self):
+ client.run('create . user/testing -s compiler="gcc" -s compiler.version="7.1" '
+ '-s compiler.libcxx="libstdc++" '
+ '-s cppstd=gnu14 '
+ '--build missing')
+
if client.cache.config.revisions_enabled:
self.assertIn("doesn't belong to the installed recipe revision, removing folder",
client.out)
@@ -77,8 +84,9 @@ def build(self):
# Add the setting but with a non-default value, should build again
client.save({CONANFILE: conanfile % '"cppstd"'}) # With the setting
- client.run('create . user/testing -s compiler="gcc" -s compiler.version="7.1" '
- '-s compiler.libcxx="libstdc++" '
- '-s cppstd=gnu17 '
- '--build missing')
+ with catch_deprecation_warning(self):
+ client.run('create . user/testing -s compiler="gcc" -s compiler.version="7.1" '
+ '-s compiler.libcxx="libstdc++" '
+ '-s cppstd=gnu17 '
+ '--build missing')
self.assertIn("BUILDING!", client.out)
diff --git a/conans/test/integration/package_id_test.py b/conans/test/integration/package_id_test.py
index 817480f0e6c..43a43e161ba 100644
--- a/conans/test/integration/package_id_test.py
+++ b/conans/test/integration/package_id_test.py
@@ -5,6 +5,7 @@
from conans.model.ref import ConanFileReference, PackageReference
from conans.paths import CONANINFO
from conans.test.utils.conanfile import TestConanFile
+from conans.test.utils.deprecation import catch_deprecation_warning
from conans.test.utils.tools import TestClient
from conans.util.env_reader import get_env
from conans.util.files import load
@@ -37,7 +38,6 @@ def _export(self, name, version, package_id_text=None, requires=None,
default_options=[("an_option", "%s" % default_option_value)],
package_id=package_id_text,
settings=settings)
-
self.client.save({"conanfile.py": str(conanfile)}, clean_first=True)
revisions_enabled = self.client.cache.config.revisions_enabled
self.client.disable_revisions()
@@ -369,19 +369,41 @@ def test_standard_version_default_matching(self):
channel="user/testing",
settings='"compiler", "cppstd"')
- self.client.run('install Hello/1.2.0@user/testing '
- ' -s compiler="gcc" -s compiler.libcxx=libstdc++11'
- ' -s compiler.version=7.2 -s cppstd=gnu14') # Default, already built
+ with catch_deprecation_warning(self):
+ self.client.run('info Hello/1.2.0@user/testing -s compiler="gcc" '
+ '-s compiler.libcxx=libstdc++11 -s compiler.version=7.2 '
+ '-s cppstd=gnu14')
+ with catch_deprecation_warning(self):
+ self.client.run('install Hello/1.2.0@user/testing'
+ ' -s compiler="gcc" -s compiler.libcxx=libstdc++11'
+ ' -s compiler.version=7.2 -s cppstd=gnu14') # Default, already built
# Should NOT have binary available
+ with catch_deprecation_warning(self):
+ self.client.run('install Hello/1.2.0@user/testing'
+ ' -s compiler="gcc" -s compiler.libcxx=libstdc++11'
+ ' -s compiler.version=7.2 -s cppstd=gnu11',
+ assert_error=True)
+
+ self.assertIn("Missing prebuilt package for 'Hello/1.2.0@user/testing'", self.client.out)
+
+ def test_std_non_matching_with_cppstd(self):
+ self._export("Hello", "1.2.0", package_id_text="self.info.default_std_non_matching()",
+ channel="user/testing",
+ settings='"compiler", "cppstd"'
+ )
self.client.run('install Hello/1.2.0@user/testing'
' -s compiler="gcc" -s compiler.libcxx=libstdc++11'
- ' -s compiler.version=7.2 -s cppstd=gnu11',
- assert_error=True)
+ ' -s compiler.version=7.2 --build')
+ with catch_deprecation_warning(self, n=1):
+ self.client.run('install Hello/1.2.0@user/testing'
+ ' -s compiler="gcc" -s compiler.libcxx=libstdc++11'
+ ' -s compiler.version=7.2 -s cppstd=gnu14',
+ assert_error=True) # Default
self.assertIn("Missing prebuilt package for 'Hello/1.2.0@user/testing'", self.client.out)
- def test_standard_version_default_non_matching(self):
+ def test_std_non_matching_with_compiler_cppstd(self):
self._export("Hello", "1.2.0", package_id_text="self.info.default_std_non_matching()",
channel="user/testing",
settings='"compiler"'
@@ -390,12 +412,21 @@ def test_standard_version_default_non_matching(self):
' -s compiler="gcc" -s compiler.libcxx=libstdc++11'
' -s compiler.version=7.2 --build')
- self._export("Hello", "1.2.0", package_id_text="self.info.default_std_non_matching()",
+ self.client.run('install Hello/1.2.0@user/testing '
+ ' -s compiler="gcc" -s compiler.libcxx=libstdc++11'
+ ' -s compiler.version=7.2 -s compiler.cppstd=gnu14', assert_error=True)
+ self.assertIn("Missing prebuilt package for 'Hello/1.2.0@user/testing'", self.client.out)
+
+ def test_std_matching_with_compiler_cppstd(self):
+ self._export("Hello", "1.2.0", package_id_text="self.info.default_std_matching()",
channel="user/testing",
- settings='"compiler", "cppstd"'
+ settings='"compiler"'
)
self.client.run('install Hello/1.2.0@user/testing '
- ' -s compiler="gcc" -s compiler.libcxx=libstdc++11'
- ' -s compiler.version=7.2 -s cppstd=gnu14',
- assert_error=True) # Default
- self.assertIn("Missing prebuilt package for 'Hello/1.2.0@user/testing'", self.client.out)
+ ' -s compiler="gcc" -s compiler.libcxx=libstdc++11'
+ ' -s compiler.version=7.2 --build')
+
+ self.client.run('install Hello/1.2.0@user/testing '
+ ' -s compiler="gcc" -s compiler.libcxx=libstdc++11'
+ ' -s compiler.version=7.2 -s compiler.cppstd=gnu14')
+ self.assertIn("Hello/1.2.0@user/testing: Already installed!", self.client.out)
diff --git a/conans/test/unittests/client/generators/cmake_paths_test.py b/conans/test/unittests/client/generators/cmake_paths_test.py
index d8c0c7763c7..41fba6e1e6c 100644
--- a/conans/test/unittests/client/generators/cmake_paths_test.py
+++ b/conans/test/unittests/client/generators/cmake_paths_test.py
@@ -8,13 +8,36 @@
from conans.model.env_info import EnvValues
from conans.test.utils.test_files import temp_folder
from conans.test.utils.tools import TestBufferConanOutput
+from conans.errors import ConanException
+
+
+class _MockSettings(object):
+ build_type = None
+ os = None
+ os_build = None
+ fields = []
+
+ def __init__(self, build_type=None):
+ self.build_type = build_type
+
+ @property
+ def compiler(self):
+ raise ConanException("mock: not available")
+
+ def constraint(self, _):
+ return self
+
+ def get_safe(self, _):
+ return None
+
+ def items(self):
+ return {}
class CMakePathsGeneratorTest(unittest.TestCase):
def cmake_vars_unit_test(self):
- settings_mock = namedtuple("Settings", "build_type, os, os_build, constraint")
- settings = settings_mock("Release", None, None, lambda x: x)
+ settings = _MockSettings("Release")
conanfile = ConanFile(TestBufferConanOutput(), None)
conanfile.initialize(settings, EnvValues())
tmp = temp_folder()
diff --git a/conans/test/unittests/client/generators/cmake_test.py b/conans/test/unittests/client/generators/cmake_test.py
index f8506239404..11b34049f60 100644
--- a/conans/test/unittests/client/generators/cmake_test.py
+++ b/conans/test/unittests/client/generators/cmake_test.py
@@ -1,12 +1,12 @@
import os
import re
import unittest
-from collections import namedtuple
+from conans.client.build.cmake_flags import CMakeDefinitionsBuilder
from conans.client.conf import default_settings_yml
from conans.client.generators.cmake import CMakeGenerator
from conans.client.generators.cmake_multi import CMakeMultiGenerator
-from conans.client.build.cmake_flags import CMakeDefinitionsBuilder
+from conans.errors import ConanException
from conans.model.build_info import CppInfo
from conans.model.conan_file import ConanFile
from conans.model.env_info import EnvValues
@@ -17,6 +17,29 @@
from conans.util.files import save
+class _MockSettings(object):
+ build_type = None
+ os = None
+ os_build = None
+ fields = []
+
+ def __init__(self, build_type=None):
+ self.build_type = build_type
+
+ @property
+ def compiler(self):
+ raise ConanException("mock: not available")
+
+ def constraint(self, _):
+ return self
+
+ def get_safe(self, _):
+ return None
+
+ def items(self):
+ return {}
+
+
class CMakeGeneratorTest(unittest.TestCase):
def _extract_macro(self, name, text):
@@ -50,10 +73,9 @@ def variables_setup_test(self):
self.assertIn('set(CONAN_USER_LIB2_MYVAR2 "myvalue4")', cmake_lines)
def paths_cmake_multi_user_vars_test(self):
- settings_mock = namedtuple("Settings", "build_type, os, os_build, constraint")
+ settings_mock = _MockSettings(build_type="Release")
conanfile = ConanFile(TestBufferConanOutput(), None)
- conanfile.initialize(settings_mock("Release", None, None,
- lambda x: x), EnvValues())
+ conanfile.initialize(settings_mock, EnvValues())
ref = ConanFileReference.loads("MyPkg/0.1@lasote/stables")
tmp_folder = temp_folder()
save(os.path.join(tmp_folder, "lib", "mylib.lib"), "")
@@ -70,10 +92,9 @@ def paths_cmake_multi_user_vars_test(self):
self.assertIn('set(CONAN_LIB_DIRS_MYPKG_RELEASE "root_folder/lib")', cmake_lines)
def paths_cmake_test(self):
- settings_mock = namedtuple("Settings", "build_type, os, os_build, constraint, items")
+ settings_mock = _MockSettings()
conanfile = ConanFile(TestBufferConanOutput(), None)
- conanfile.initialize(settings_mock(None, None, None, lambda x: x,
- lambda: {}), EnvValues())
+ conanfile.initialize(settings_mock, EnvValues())
ref = ConanFileReference.loads("MyPkg/0.1@lasote/stables")
tmp_folder = temp_folder()
save(os.path.join(tmp_folder, "lib", "mylib.lib"), "")
@@ -90,10 +111,9 @@ def paths_cmake_test(self):
self.assertIn('set(CONAN_LIB_DIRS_MYPKG_RELEASE "root_folder/lib")', cmake_lines)
def variables_cmake_multi_user_vars_test(self):
- settings_mock = namedtuple("Settings", "build_type, os, os_build, constraint")
+ settings_mock = _MockSettings(build_type="Release")
conanfile = ConanFile(TestBufferConanOutput(), None)
- conanfile.initialize(settings_mock("Release", None, None, lambda x: x,),
- EnvValues())
+ conanfile.initialize(settings_mock, EnvValues())
conanfile.deps_user_info["LIB1"].myvar = "myvalue"
conanfile.deps_user_info["LIB1"].myvar2 = "myvalue2"
conanfile.deps_user_info["lib2"].MYVAR2 = "myvalue4"
@@ -105,10 +125,9 @@ def variables_cmake_multi_user_vars_test(self):
self.assertIn('set(CONAN_USER_LIB2_MYVAR2 "myvalue4")', cmake_lines)
def variables_cmake_multi_user_vars_escape_test(self):
- settings_mock = namedtuple("Settings", "build_type, os, os_build, constraint")
+ settings_mock = _MockSettings(build_type="Release")
conanfile = ConanFile(TestBufferConanOutput(), None)
- conanfile.initialize(settings_mock("Release", None, None, lambda x: x,),
- EnvValues())
+ conanfile.initialize(settings_mock, EnvValues())
conanfile.deps_user_info["FOO"].myvar = 'my"value"'
conanfile.deps_user_info["FOO"].myvar2 = 'my${value}'
conanfile.deps_user_info["FOO"].myvar3 = 'my\\value'
@@ -278,10 +297,9 @@ def settings_are_generated_tests(self):
def cmake_find_package_multi_definitions_test(self):
""" CMAKE_PREFIX_PATH and CMAKE_MODULE_PATH must be present in cmake_find_package_multi definitions
"""
- settings_mock = namedtuple("Settings", "build_type, os_build, constraint, get_safe")
+ settings_mock = _MockSettings(build_type="Release")
conanfile = ConanFile(TestBufferConanOutput(), None)
- conanfile.initialize(settings_mock("Release", None, lambda x: x, lambda x: "Release"),
- EnvValues())
+ conanfile.initialize(settings_mock, EnvValues())
install_folder = "/c/foo/testing"
setattr(conanfile, "install_folder", install_folder)
conanfile.generators = ["cmake_find_package_multi"]
diff --git a/conans/test/unittests/client/profile_loader/__init__.py b/conans/test/unittests/client/profile_loader/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/conans/test/unittests/client/profile_loader/compiler_cppstd_test.py b/conans/test/unittests/client/profile_loader/compiler_cppstd_test.py
new file mode 100644
index 00000000000..a069740dcd6
--- /dev/null
+++ b/conans/test/unittests/client/profile_loader/compiler_cppstd_test.py
@@ -0,0 +1,113 @@
+# coding=utf-8
+
+import os
+import textwrap
+import unittest
+
+import six
+from jinja2 import Template
+
+from conans.client.cache.cache import ClientCache
+from conans.client.profile_loader import profile_from_args
+from conans.errors import ConanException
+from conans.test.utils.deprecation import catch_deprecation_warning
+from conans.test.utils.test_files import temp_folder
+from conans.test.utils.tools import TestBufferConanOutput
+from conans.util.files import save
+
+
+class SettingsCppStdTests(unittest.TestCase):
+
+ def setUp(self):
+ self.tmp_folder = temp_folder()
+ self.cache = ClientCache(self.tmp_folder, TestBufferConanOutput())
+
+ def _save_profile(self, cppstd=None, compiler_cppstd=None, filename="default"):
+ fullpath = os.path.join(self.cache.profiles_path, filename)
+
+ t = Template(textwrap.dedent("""
+ [settings]
+ os=Macos
+ arch=x86_64
+ compiler=apple-clang
+ {% if compiler_cppstd %}compiler.cppstd={{ compiler_cppstd }}{% endif %}
+ compiler.libcxx=libc++
+ compiler.version=10.0
+ {% if cppstd %}cppstd={{ cppstd }}{% endif %}
+ """))
+
+ save(fullpath, t.render(cppstd=cppstd, compiler_cppstd=compiler_cppstd))
+ return filename
+
+ def test_no_value(self):
+ self._save_profile()
+
+ r = profile_from_args(["default", ], [], [], [], cwd=self.tmp_folder, cache=self.cache)
+ r.process_settings(self.cache)
+ self.assertNotIn("compiler.cppstd", r.settings)
+ self.assertNotIn("cppstd", r.settings)
+
+ def test_value_none(self):
+ self._save_profile(compiler_cppstd="None")
+
+ r = profile_from_args(["default", ], [], [], [], cwd=self.tmp_folder, cache=self.cache)
+ r.process_settings(self.cache)
+ self.assertEqual(r.settings["compiler.cppstd"], "None")
+ self.assertNotIn("cppstd", r.settings)
+
+ def test_value_valid(self):
+ self._save_profile(compiler_cppstd="11")
+
+ r = profile_from_args(["default", ], [], [], [], cwd=self.tmp_folder, cache=self.cache)
+ r.process_settings(self.cache)
+ self.assertEqual(r.settings["compiler.cppstd"], "11")
+ self.assertNotIn("cppstd", r.settings)
+
+ def test_value_invalid(self):
+ self._save_profile(compiler_cppstd="13")
+
+ r = profile_from_args(["default", ], [], [], [], cwd=self.tmp_folder, cache=self.cache)
+ with six.assertRaisesRegex(self, ConanException, "Invalid setting '13' is not a valid "
+ "'settings.compiler.cppstd' value"):
+ r.process_settings(self.cache)
+ self.assertNotIn("cppstd", r.settings)
+
+ def test_value_duplicated_None(self):
+ self._save_profile(compiler_cppstd="None", cppstd="None")
+
+ r = profile_from_args(["default", ], [], [], [], cwd=self.tmp_folder, cache=self.cache)
+ r.process_settings(self.cache)
+ self.assertEqual(r.settings["compiler.cppstd"], "None")
+ self.assertEqual(r.settings["cppstd"], "None")
+
+ def test_value_duplicated(self):
+ self._save_profile(compiler_cppstd="11", cppstd="11")
+
+ r = profile_from_args(["default", ], [], [], [], cwd=self.tmp_folder, cache=self.cache)
+ with six.assertRaisesRegex(self, ConanException, "Do not use settings 'compiler.cppstd'"
+ " together with 'cppstd'. Use only the"
+ " former one."):
+ with catch_deprecation_warning(self):
+ r.process_settings(self.cache)
+ self.assertEqual(r.settings["compiler.cppstd"], "11")
+ self.assertEqual(r.settings["cppstd"], "11")
+
+ def test_value_different(self):
+ self._save_profile(cppstd="14", compiler_cppstd="11")
+
+ r = profile_from_args(["default", ], [], [], [], cwd=self.tmp_folder, cache=self.cache)
+ with six.assertRaisesRegex(self, ConanException, "Do not use settings 'compiler.cppstd'"
+ " together with 'cppstd'. Use only the"
+ " former one"):
+ with catch_deprecation_warning(self):
+ r.process_settings(self.cache)
+
+ def test_value_from_cppstd(self):
+ self._save_profile(cppstd="11")
+
+ r = profile_from_args(["default", ], [], [], [], cwd=self.tmp_folder, cache=self.cache)
+ with catch_deprecation_warning(self):
+ r.process_settings(self.cache)
+ self.assertNotIn('compiler.cppstd', r.settings)
+ self.assertEqual(r.settings["cppstd"], "11")
+
diff --git a/conans/test/unittests/client/profile_loader_test.py b/conans/test/unittests/client/profile_loader/profile_loader_test.py
similarity index 100%
rename from conans/test/unittests/client/profile_loader_test.py
rename to conans/test/unittests/client/profile_loader/profile_loader_test.py
diff --git a/conans/test/unittests/model/other_settings_test.py b/conans/test/unittests/model/other_settings_test.py
index a77ee1fb21e..cccaadce323 100644
--- a/conans/test/unittests/model/other_settings_test.py
+++ b/conans/test/unittests/model/other_settings_test.py
@@ -5,6 +5,7 @@
from conans.model.ref import PackageReference
from conans.model.settings import bad_value_msg, undefined_value
from conans.paths import CONANFILE, CONANINFO
+from conans.test.utils.deprecation import catch_deprecation_warning
from conans.test.utils.tools import TestClient
from conans.util.files import load, save
@@ -46,7 +47,8 @@ class Pkg(ConanFile):
settings = "compiler", "cppstd"
"""
client.save({"conanfile.py": conanfile})
- client.run("create . Pkg/0.1@lasote/testing")
+ with catch_deprecation_warning(self):
+ client.run("create . Pkg/0.1@lasote/testing")
self.assertIn("""Configuration:
[settings]
compiler=mycomp
diff --git a/conans/test/utils/deprecation.py b/conans/test/utils/deprecation.py
new file mode 100644
index 00000000000..36d1b0bc6cc
--- /dev/null
+++ b/conans/test/utils/deprecation.py
@@ -0,0 +1,14 @@
+# coding=utf-8
+
+import warnings
+from contextlib import contextmanager
+
+
+@contextmanager
+def catch_deprecation_warning(test_suite, n=1):
+ with warnings.catch_warnings(record=True) as w:
+ warnings.filterwarnings("always", module="(.*\.)?conans\..*")
+ yield
+ if n:
+ test_suite.assertEqual(len(w), n)
+ test_suite.assertTrue(issubclass(w[0].category, UserWarning))
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
}
|
conan-io__conan-4595@184dd5c
|
conan-io/conan
|
Python
| 4,595
|
Raise if using 'conan search ... --revisions' with the feature disabled
|
Changelog: omit
Docs: omit
closes #4576
@REVISIONS: 1
|
2019-02-22T12:52:57Z
|
[revisions] Error on search recipe revisions in remote
When revisions are NOT enabled in the Conan client and a remote search for recipe revisions is done, Conan errors with:
```
$ conan search wiringpi/2.46@conan/stable --revisions -r conan-virtual
ERROR: The remote doesn't support revisions. [Remote: conan-virtual]
```
However, the remote DOES support revisions; the fault is in the client not having them activated and thus not being able to talk via V2 with the remote.
Conan should raise early in the code, saying that revisions are not enabled in the client; then, if the user enables revisions and the remote really DOES NOT support them, the message ``ERROR: The remote doesn't support revisions. [Remote: conan-virtual]`` would be valid.
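A minimal sketch of such an early client-side check (illustrative only; the merged patch below implements it inside the ``conan search`` command handler, reading ``self._cache.config.revisions_enabled``):
```python
# Standalone sketch; `revisions_enabled` stands in for the value Conan reads from
# CONAN_REVISIONS_ENABLED or the 'general.revisions_enabled' entry in conan.conf.
from conans.errors import ConanException

def check_revisions_flag(revisions_requested, revisions_enabled):
    # Fail fast when --revisions is passed but the local feature is disabled,
    # instead of reaching the remote and blaming it for lacking revision support.
    if revisions_requested and not revisions_enabled:
        raise ConanException("The client doesn't have the revisions feature enabled."
                             " Enable it via CONAN_REVISIONS_ENABLED=1 or the"
                             " 'general.revisions_enabled' value in conan.conf")
```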
----------------------------------------
There could also be the option of disabling the ``--revisions`` flag even for local searches when revisions are not enabled.
|
[
{
"body": "When revisions are NOT enabled in the Conan Client and a remote search or recipe revisions is done, Conan errors with:\r\n\r\n```\r\n$ conan search wiringpi/2.46@conan/stable --revisions -r conan-virtual\r\nERROR: The remote doesn't support revisions. [Remote: conan-virtual]\r\n```\r\nH\r\nowever, the remote DOES support revisions and the fault is in the client not having them activated and thus not being able to talk via V2 with the remote.\r\n\r\nThere should be raise early in the code saying that revisions are not enabled, then, if the user enables revisions and finally the remote DOES NOT support revisions, the message ``ERROR: The remote doesn't support revisions. [Remote: conan-virtual]`` would be valid\r\n\r\n----------------------------------------\r\n\r\nThere could be also the option of disabling the ``--revisions`` flag even for local search when revisions are not enabled",
"number": 4576,
"title": "[revisions] Error on search recipe revisions in remote"
}
] |
ab0870336550b7521da71595c6babf42d5690f7b
|
{
"head_commit": "184dd5ca367b001e291149d0b3b017d8053813c0",
"head_commit_message": "raise if using --revisions in search command with the feature disabled",
"patch_to_review": "diff --git a/conans/client/command.py b/conans/client/command.py\nindex 7cd41e5c406..68a57f696a9 100644\n--- a/conans/client/command.py\n+++ b/conans/client/command.py\n@@ -1017,8 +1017,15 @@ def search(self, *args):\n parser.add_argument(\"-rev\", \"--revisions\", default=False, action='store_true',\n help='Get a list of revisions for a reference or a '\n 'package reference.')\n+\n args = parser.parse_args(*args)\n \n+ if args.revisions and not self._cache.config.revisions_enabled:\n+ raise ConanException(\"This client doesn't support revisions. Enable this feature setting\"\n+ \" to 'True' the environment variable 'CONAN_REVISIONS_ENABLED'\"\n+ \" or the config value 'general.revisions_enabled' in your\"\n+ \" conan.conf file\")\n+\n if args.table and args.json:\n raise ConanException(\"'--table' argument cannot be used together with '--json'\")\n \ndiff --git a/conans/test/functional/command/search_test.py b/conans/test/functional/command/search_test.py\nindex ab4de8dc421..4f0175dd4d9 100644\n--- a/conans/test/functional/command/search_test.py\n+++ b/conans/test/functional/command/search_test.py\n@@ -1359,6 +1359,12 @@ def test_invalid_references_test(self):\n assert_error=True)\n self.assertIn(\"Cannot list the revisions of a specific package revision\", client.out)\n \n+ def test_exception_client_without_revs(self):\n+ client = TestClient()\n+ client.run(\"search whatever --revisions\", assert_error=True)\n+ self.assertIn(\"ERROR: This client doesn't support revisions\", client.out)\n+\n+\n class SearchRemoteAllTestCase(unittest.TestCase):\n def setUp(self):\n \"\"\" Create a remote called 'all' with some recipe in it \"\"\"\n"
}
|
[
{
"diff_hunk": "@@ -1017,8 +1017,15 @@ def search(self, *args):\n parser.add_argument(\"-rev\", \"--revisions\", default=False, action='store_true',\n help='Get a list of revisions for a reference or a '\n 'package reference.')\n+\n args = parser.parse_args(*args)\n \n+ if args.revisions and not self._cache.config.revisions_enabled:\n+ raise ConanException(\"This client doesn't support revisions. Enable this feature setting\"",
"line": null,
"original_line": 1024,
"original_start_line": null,
"path": "conans/client/command.py",
"start_line": null,
"text": "@user1:\nThe client supports revisions, but they are not activated. I would change the message to: \r\n```\r\nThe client doesn't have the revisions feature enabled. Enable this feature ...\r\n```\r\nI also prefer the value to `1` instead of `True`\n\n@user1:\nI think this PR could also contain the fix for https://github.com/conan-io/conan/issues/4607\r\nThe message probably should be shared. Could make sense to create an exception `RevisionsDisabledException` and catch the error printing the message?\n\n@author:\nI'm not sure if the custom exception would work out of the box, it will inherit from `ConanException`, and in #4607 it will be raised from the `RestApiClient::_get_api` function, I think there is a high chance for that exception to be captured before reaching the `conan_api` world.\r\n\r\nSo, although it is a good idea (👍 ), I will eventually add it in the other issue"
}
] |
ce94712a93d574d1ee10b416544d7cde9e50ac60
|
diff --git a/conans/client/command.py b/conans/client/command.py
index 7cd41e5c406..3de12a8f4e2 100644
--- a/conans/client/command.py
+++ b/conans/client/command.py
@@ -1017,8 +1017,15 @@ def search(self, *args):
parser.add_argument("-rev", "--revisions", default=False, action='store_true',
help='Get a list of revisions for a reference or a '
'package reference.')
+
args = parser.parse_args(*args)
+ if args.revisions and not self._cache.config.revisions_enabled:
+ raise ConanException("The client doesn't have the revisions feature enabled."
+ " Enable this feature setting to '1' the environment variable"
+ " 'CONAN_REVISIONS_ENABLED' or the config value"
+ " 'general.revisions_enabled' in your conan.conf file")
+
if args.table and args.json:
raise ConanException("'--table' argument cannot be used together with '--json'")
diff --git a/conans/test/functional/command/search_test.py b/conans/test/functional/command/search_test.py
index ab4de8dc421..3b18f52bfde 100644
--- a/conans/test/functional/command/search_test.py
+++ b/conans/test/functional/command/search_test.py
@@ -1079,20 +1079,6 @@ def initial_search_without_registry_test(self):
self.assertIn("WARN: Remotes registry file missing, creating default one", client.out)
self.assertIn("There are no packages matching the 'my_pkg' pattern", client.out)
- def test_usage_of_list_revisions(self):
- client = TestClient()
- conanfile = dedent("""
- from conans import ConanFile
- class Test(ConanFile):
- pass
- """)
- client.save({"conanfile.py": conanfile})
- client.run("create . lib/1.0@conan/stable")
- client.run("search lib/1.0@conan/stable --revisions")
- self.assertIn("Revisions for 'lib/1.0@conan/stable':", client.out)
- # FIXME: Should be "0" when no revisions are enabled?
- self.assertIn("bd761686d5c57b31f4cd85fd0329751f", client.out)
-
@unittest.skipIf(get_env("TESTING_REVISIONS_ENABLED", False), "No sense with revs")
class SearchOutdatedTest(unittest.TestCase):
@@ -1121,6 +1107,11 @@ class Test(ConanFile):
self.assertIn("os: Windows", client.user_io.out)
self.assertNotIn("os: Linux", client.user_io.out)
+ def test_exception_client_without_revs(self):
+ client = TestClient()
+ client.run("search whatever --revisions", assert_error=True)
+ self.assertIn("ERROR: The client doesn't have the revisions feature enabled", client.out)
+
@unittest.skipUnless(get_env("TESTING_REVISIONS_ENABLED", False),
"set TESTING_REVISIONS_ENABLED=1")
@@ -1359,6 +1350,7 @@ def test_invalid_references_test(self):
assert_error=True)
self.assertIn("Cannot list the revisions of a specific package revision", client.out)
+
class SearchRemoteAllTestCase(unittest.TestCase):
def setUp(self):
""" Create a remote called 'all' with some recipe in it """
|
{
"difficulty": "low",
"estimated_review_effort": 2,
"problem_domain": "Bug Fixes"
}
|
|
conan-io__conan-4596@1aa5016
|
conan-io/conan
|
Python
| 4,596
|
Use Jinja2 to parse layout files (editable packages)
|
Changelog: Feature: Apply Jinja2 to layout files before parsing them
Docs: https://github.com/conan-io/docs/pull/1093
This PR executes Jinja2 over layout files (the context contains `reference`, `settings` and `options`) before parsing those files, so the user will be able to apply some logic to the constructed path to match the underlying filesystem. As this logic depends on `settings` and `options`, we are no longer able to parse these files before knowing the node/settings/options they are going to be applied to.
closes #4424
Running on _more_ python versions as we are introducing a new dependency:
@PYVERS: Macos@py27, Windows@py36, Linux@py27, py34
|
2019-02-22T16:00:47Z
|
Use jinja2 templates for editable layouts
The syntax in editable layouts currently uses placeholders like ``{settings.build_type}`` to apply some conditional logic to folder paths.
We knew it had some limitations, especially with placeholders like ``{options.shared}``, which are replaced with a ``True``/``False`` value instead of something useful.
We evaluated the following:
- Inline evaluation of python code
- Evaluation of field headers like ``[includedirs#settings.build_type==True]``
- Python syntax in *layout.py* implementing a ``package_info()``
- Jinja2 templating
We decided that Jinja2 is a fairly common, orthogonal templating language used in a lot of tools; its syntax is well documented and covers many conditional scenarios. It should also be quite easy to implement.
The information available in these Jinja2 templates should be ``options`` and ``settings``.
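For illustration, a minimal sketch of how such a templated layout could be rendered, assuming plain ``SimpleNamespace`` objects stand in for Conan's real ``settings`` and ``options`` objects (in Conan itself the rendering happens before the layout file is parsed as an INI file, via the `render_layout_file` helper added by this PR):

```python
from types import SimpleNamespace

from jinja2 import Template

# Stand-ins for Conan's real settings/options objects, for illustration only
settings = SimpleNamespace(build_type="Debug")
options = SimpleNamespace(shared=True)

layout_template = """\
[includedirs]
{% if options.shared %}
src/include/{{ settings.build_type }}/shared
{% else %}
src/include/{{ settings.build_type }}/static
{% endif %}
"""

# Render the template first; the resulting text is then a regular layout file
rendered = Template(layout_template).render(settings=settings, options=options)
print(rendered)  # the [includedirs] entry now depends on build_type and shared
```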
|
Changed complexity; I think using the Jinja template engine for this should be relatively easy.
|
[
{
"body": "Syntax in editable layouts are currently using placeholders like ``{settings.build_type}`` to make some conditional logic regarding folder paths.\r\n\r\nWe knew it has some limitations specially with placeholder like ``{options.shared}``, that will replace the value with a ``True``/``False`` instead of a useful value.\r\n\r\nWe evaluated the following:\r\n- Inline evaluation of python code\r\n- Evaluation of field headers like ``[includedirs#settings.build_type==True]``\r\n- Python syntax in *layout.py* implementing a ``package_info()``\r\n- Jinja2 templating\r\n\r\nAnd decided that jinja2 is a fairly common orthogonal language used in a lot of tools, syntax is well documented and contemplates many conditional scenarios. Also this should be quite easy to implement.\r\n\r\nThe information available in this jinja2 templates should be ``options`` and ``settings``.",
"number": 4424,
"title": "Use jinja2 templates for editable layouts"
}
] |
83053d0215fdc3137be527dd5a829b7c8ec9fb67
|
{
"head_commit": "1aa5016b8876c9d8c01fd73372754aa1b4495ba0",
"head_commit_message": "remove duplicated test",
"patch_to_review": "diff --git a/conans/client/tools/oss.py b/conans/client/tools/oss.py\nindex 7521320b979..e61f7cc2457 100644\n--- a/conans/client/tools/oss.py\n+++ b/conans/client/tools/oss.py\n@@ -306,7 +306,7 @@ def detect_windows_subsystem():\n # https://github.com/Microsoft/WSL/issues/423#issuecomment-221627364\n with open(\"/proc/sys/kernel/osrelease\") as f:\n return WSL if f.read().endswith(\"Microsoft\") else None\n- except FileNotFoundError:\n+ except IOError:\n return None\n try:\n output = OSInfo.uname()\ndiff --git a/conans/model/editable_cpp_info.py b/conans/model/editable_cpp_info.py\nindex b2ab609199d..3164c43f5ef 100644\n--- a/conans/model/editable_cpp_info.py\n+++ b/conans/model/editable_cpp_info.py\n@@ -2,10 +2,13 @@\n import os\n from collections import OrderedDict\n \n+from six import StringIO\n from six.moves import configparser\n \n from conans.errors import ConanException\n from conans.model.ref import ConanFileReference\n+from conans.util.files import load\n+from conans.util.templates import render_layout_file\n \n DEFAULT_LAYOUT_FILE = \"default\"\n LAYOUTS_FOLDER = 'layouts'\n@@ -14,19 +17,17 @@\n def get_editable_abs_path(path, cwd, cache_folder):\n # Check the layout file exists, is correct, and get its abs-path\n if path:\n- layout_abs_path = path if os.path.isabs(path) else os.path.normpath(os.path.join(cwd, path))\n+ layout_abs_path = os.path.normpath(os.path.join(cwd, path))\n if not os.path.isfile(layout_abs_path):\n layout_abs_path = os.path.join(cache_folder, LAYOUTS_FOLDER, path)\n if not os.path.isfile(layout_abs_path):\n raise ConanException(\"Couldn't find layout file: %s\" % path)\n- EditableLayout.load(layout_abs_path) # Try if it loads ok\n return layout_abs_path\n \n # Default only in cache\n- layout_abs_path = os.path.join(cache_folder, LAYOUTS_FOLDER, DEFAULT_LAYOUT_FILE)\n- if os.path.isfile(layout_abs_path):\n- EditableLayout.load(layout_abs_path)\n- return layout_abs_path\n+ layout_default_path = os.path.join(cache_folder, LAYOUTS_FOLDER, DEFAULT_LAYOUT_FILE)\n+ if os.path.isfile(layout_default_path):\n+ return layout_default_path\n \n \n class EditableLayout(object):\n@@ -35,68 +36,76 @@ class EditableLayout(object):\n cpp_info_dirs = ['includedirs', 'libdirs', 'resdirs', 'bindirs', 'builddirs', 'srcdirs']\n folders = [BUILD_FOLDER, SOURCE_FOLDER]\n \n- def __init__(self, data, folders):\n- self._data = data\n- self._folders = folders\n+ def __init__(self, filepath):\n+ self._filepath = filepath\n \n def folder(self, ref, name, settings, options):\n+ _, folders = self._load_data(ref, settings=settings, options=options)\n try:\n- path = self._folders.get(str(ref)) or self._folders.get(None) or {}\n- path = path[name]\n+ path = folders.get(str(ref)) or folders.get(None) or {}\n+ return path[name]\n except KeyError:\n return None\n- try:\n- return self._work_on_item(path, settings, options)\n- except Exception as e:\n- raise ConanException(\"Error getting fHolder '%s' from layout: %s\" % (str(name), str(e)))\n \n @staticmethod\n- def load(filepath):\n- parser = configparser.ConfigParser(allow_no_value=True)\n- parser.optionxform = str\n+ def _work_on_item(value):\n+ value = value.replace('\\\\', '/')\n+ return value\n+\n+ def _parse_layout_file(self, ref, settings, options):\n+ content = load(self._filepath)\n try:\n- parser.read(filepath)\n- except configparser.Error as e:\n- raise ConanException(\"Error parsing layout file: %s\\n%s\" % (filepath, str(e)))\n+ content = render_layout_file(content, ref=ref, settings=settings, 
options=options)\n+\n+ parser = configparser.ConfigParser(allow_no_value=True)\n+ parser.optionxform = str\n+ parser.readfp(StringIO(content))\n+ except (configparser.Error, ConanException) as e:\n+ raise ConanException(\"Error parsing layout file '%s' (for reference '%s')\\n%s\" %\n+ (self._filepath, str(ref), str(e)))\n+\n+ return parser\n+\n+ def _load_data(self, ref, settings, options):\n+ parser = self._parse_layout_file(ref, settings, options)\n+\n+ # Build a convenient data structure\n data = OrderedDict()\n folders = {}\n for section in parser.sections():\n- ref, section_name = section.rsplit(\":\", 1) if ':' in section else (None, section)\n+ reference, section_name = section.rsplit(\":\", 1) if ':' in section else (None, section)\n+\n if section_name in EditableLayout.folders:\n items = [k for k, _ in parser.items(section)] or [\"\"]\n if len(items) > 1:\n raise ConanException(\"'%s' with more than one value in layout file: %s\"\n- % (section_name, filepath))\n- folders.setdefault(ref, {})[section_name] = items[0]\n+ % (section_name, self._filepath))\n+ folders.setdefault(reference, {})[section_name] = self._work_on_item(items[0])\n continue\n+\n if section_name not in EditableLayout.cpp_info_dirs:\n raise ConanException(\"Wrong cpp_info field '%s' in layout file: %s\"\n- % (section_name, filepath))\n- if ref:\n+ % (section_name, self._filepath))\n+ if reference:\n try:\n- r = ConanFileReference.loads(ref, validate=True)\n+ r = ConanFileReference.loads(reference, validate=True)\n if r.revision:\n raise ConanException(\"Don't provide revision in Editable layouts\")\n except ConanException:\n raise ConanException(\"Wrong package reference '%s' in layout file: %s\"\n- % (ref, filepath))\n- data.setdefault(ref, {})[section_name] = [k for k, _ in parser.items(section)]\n-\n- return EditableLayout(data, folders)\n-\n- @staticmethod\n- def _work_on_item(value, settings, options):\n- value = value.format(settings=settings, options=options)\n- value = value.replace('\\\\', '/')\n- return value\n+ % (reference, self._filepath))\n+ data.setdefault(reference, {})[section_name] =\\\n+ [self._work_on_item(k) for k, _ in parser.items(section)]\n+ return data, folders\n \n def apply_to(self, ref, cpp_info, settings=None, options=None):\n- d = self._data\n- data = d.get(str(ref)) or d.get(None) or {}\n+ data, _ = self._load_data(ref, settings=settings, options=options)\n+\n+ # Apply the data to the cpp_info\n+ data = data.get(str(ref)) or data.get(None) or {}\n \n try:\n for key, items in data.items():\n- setattr(cpp_info, key, [self._work_on_item(item, settings, options)\n- for item in items])\n+ setattr(cpp_info, key, items)\n except Exception as e:\n raise ConanException(\"Error applying layout in '%s': %s\" % (str(ref), str(e)))\ndiff --git a/conans/model/workspace.py b/conans/model/workspace.py\nindex 761a4ddfc1d..39236c1b7da 100644\n--- a/conans/model/workspace.py\n+++ b/conans/model/workspace.py\n@@ -19,8 +19,7 @@ def __init__(self, base_folder, data, cache, ws_layout, ws_generators, ref):\n self._conanfile_folder = data.pop(\"path\", None) # The folder with the conanfile\n layout = data.pop(\"layout\", None)\n if layout:\n- self.layout = get_editable_abs_path(layout, self._base_folder,\n- cache.conan_folder)\n+ self.layout = get_editable_abs_path(layout, self._base_folder, cache.conan_folder)\n else:\n self.layout = ws_layout\n \n@@ -52,6 +51,7 @@ def generate(self, cwd, graph, output):\n ws_pkg = self._workspace_packages[ref]\n layout = self._cache.package_layout(ref)\n editable = 
layout.editable_cpp_info()\n+\n conanfile = node.conanfile\n build = editable.folder(ref, EditableLayout.BUILD_FOLDER, conanfile.settings,\n conanfile.options)\n@@ -73,6 +73,7 @@ def generate(self, cwd, graph, output):\n % (ref.name, ref.name))\n else:\n output.warn(\"CMake workspace: cannot 'add_subdirectory()'\")\n+\n if add_subdirs:\n cmake += \"macro(conan_workspace_subdirectories)\\n\"\n cmake += add_subdirs\ndiff --git a/conans/paths/package_layouts/package_editable_layout.py b/conans/paths/package_layouts/package_editable_layout.py\nindex ae4681605e4..cb48c71b66a 100644\n--- a/conans/paths/package_layouts/package_editable_layout.py\n+++ b/conans/paths/package_layouts/package_editable_layout.py\n@@ -30,7 +30,7 @@ def conanfile(self):\n def editable_cpp_info(self):\n if self._layout_file:\n if os.path.isfile(self._layout_file):\n- return EditableLayout.load(self._layout_file)\n+ return EditableLayout(self._layout_file)\n else:\n raise ConanException(\"Layout file not found: %s\" % self._layout_file)\n \ndiff --git a/conans/requirements.txt b/conans/requirements.txt\nindex d787cd05cee..b5c3e9e02ea 100644\n--- a/conans/requirements.txt\n+++ b/conans/requirements.txt\n@@ -13,3 +13,4 @@ pygments>=2.0, <3.0\n astroid>=1.6.5\n deprecation>=2.0, <2.1\n tqdm>=4.28.1, <5\n+Jinja2==2.10\ndiff --git a/conans/test/functional/editable/consume_header_only_test.py b/conans/test/functional/editable/consume_header_only_test.py\nindex 1879f1d655e..913b2df40e0 100644\n--- a/conans/test/functional/editable/consume_header_only_test.py\n+++ b/conans/test/functional/editable/consume_header_only_test.py\n@@ -2,16 +2,15 @@\n \n import os\n import tempfile\n-import unittest\n import textwrap\n+import unittest\n \n from parameterized import parameterized\n \n-\n+from conans.model.editable_cpp_info import DEFAULT_LAYOUT_FILE, LAYOUTS_FOLDER\n from conans.test import CONAN_TEST_FOLDER\n from conans.test.utils.tools import TestClient\n from conans.util.files import save\n-from conans.model.editable_cpp_info import DEFAULT_LAYOUT_FILE, LAYOUTS_FOLDER\n \n \n class HeaderOnlyLibTestClient(TestClient):\ndiff --git a/conans/test/functional/editable/consume_settings_and_options_test.py b/conans/test/functional/editable/consume_settings_and_options_test.py\nindex df7e1d6bf8c..0d24be61c10 100644\n--- a/conans/test/functional/editable/consume_settings_and_options_test.py\n+++ b/conans/test/functional/editable/consume_settings_and_options_test.py\n@@ -4,12 +4,13 @@\n import os\n import tempfile\n import unittest\n+\n from parameterized import parameterized\n \n+from conans.model.editable_cpp_info import DEFAULT_LAYOUT_FILE, LAYOUTS_FOLDER\n from conans.test import CONAN_TEST_FOLDER\n from conans.test.utils.tools import TestClient\n from conans.util.files import save\n-from conans.model.editable_cpp_info import DEFAULT_LAYOUT_FILE, LAYOUTS_FOLDER\n \n \n class HeaderOnlyLibTestClient(TestClient):\n@@ -48,7 +49,7 @@ def package_info(self):\n \"\"\"\n \n conan_package_layout = \"\"\"\n-[{namespace}includedirs]\n+[%sincludedirs]\n src/include/{{settings.build_type}}/{{options.shared}}\n \"\"\"\n \n@@ -67,11 +68,11 @@ def __init__(self, use_repo_file, *args, **kwargs):\n }\n \n if use_repo_file:\n- files[\"mylayout\"] = self.conan_package_layout.format(namespace=\"\")\n+ files[\"mylayout\"] = self.conan_package_layout % \"\"\n else:\n file_path = os.path.join(self.cache.conan_folder, LAYOUTS_FOLDER, DEFAULT_LAYOUT_FILE)\n save(file_path,\n- self.conan_package_layout.format(namespace=\"MyLib/0.1@user/editable:\"))\n+ 
self.conan_package_layout % \"MyLib/0.1@user/editable:\")\n \n self.save(files)\n \ndiff --git a/conans/test/functional/editable/layouts_test.py b/conans/test/functional/editable/layouts_test.py\nindex c8b21f75787..e9b5814ed0a 100644\n--- a/conans/test/functional/editable/layouts_test.py\n+++ b/conans/test/functional/editable/layouts_test.py\n@@ -5,10 +5,10 @@\n import textwrap\n import unittest\n \n-from conans.test.utils.tools import TestClient\n-from conans.util.files import load, save_files, save\n from conans.model.editable_cpp_info import LAYOUTS_FOLDER\n from conans.test.utils.test_files import temp_folder\n+from conans.test.utils.tools import TestClient\n+from conans.util.files import load, save_files, save\n \n \n class LayoutTest(unittest.TestCase):\n@@ -187,7 +187,7 @@ class Pkg(ConanFile):\n \"\"\")\n layout_repo = textwrap.dedent(\"\"\"\n [includedirs]\n- include_{settings.build_type}\n+ include_{{settings.build_type}}\n \"\"\")\n \n client.save({\"conanfile.py\": conanfile,\n@@ -200,8 +200,9 @@ class Pkg(ConanFile):\n \"\"\")\n client2.save({\"conanfile.txt\": consumer})\n client2.run(\"install . -g cmake -s build_type=Debug\", assert_error=True)\n- self.assertIn(\"ERROR: Error applying layout in 'mytool/0.1@user/testing': \"\n- \"'settings.build_type' doesn't exist\", client2.out)\n+ self.assertIn(\"ERROR: Error parsing layout file '{}' (for reference \"\n+ \"'mytool/0.1@user/testing')\\n'settings.build_type' doesn't exist\".format(\n+ os.path.join(client.current_folder, 'layout')), client2.out)\n \n # Now add settings to conanfile\n client.save({\"conanfile.py\": conanfile.replace(\"pass\", 'settings = \"build_type\"')})\ndiff --git a/conans/test/integration/workspace_test.py b/conans/test/integration/workspace_test.py\nindex 52a5d5fdd3a..6e5031be0db 100644\n--- a/conans/test/integration/workspace_test.py\n+++ b/conans/test/integration/workspace_test.py\n@@ -222,13 +222,13 @@ def files(name, depend=None):\n \"\"\")\n layout = dedent(\"\"\"\n [build_folder]\n- build/{settings.build_type}\n+ build/{{settings.build_type}}\n \n [includedirs]\n src\n \n [libdirs]\n- build/{settings.build_type}/lib\n+ build/{{settings.build_type}}/lib\n \"\"\")\n client.save({\"conanws.yml\": project,\n \"layout\": layout})\n@@ -295,7 +295,7 @@ def files(name, depend=None):\n \"\"\")\n layout = dedent(\"\"\"\n [build_folder]\n- build/{settings.build_type}\n+ build/{{settings.build_type}}\n \n [source_folder]\n src\n@@ -304,7 +304,7 @@ def files(name, depend=None):\n src\n \n [libdirs]\n- build/{settings.build_type}/lib\n+ build/{{settings.build_type}}/lib\n \"\"\")\n \n metacmake = dedent(\"\"\"\n@@ -422,7 +422,7 @@ def files(name, depend=None):\n src\n \n [libdirs]\n- build/{settings.build_type}\n+ build/{{settings.build_type}}\n \"\"\")\n metacmake = dedent(\"\"\"\n cmake_minimum_required(VERSION 3.3)\ndiff --git a/conans/test/unittests/model/editable_cpp_info/apply_test.py b/conans/test/unittests/model/editable_cpp_info/apply_test.py\nindex 011c2932b0c..12f6775326e 100644\n--- a/conans/test/unittests/model/editable_cpp_info/apply_test.py\n+++ b/conans/test/unittests/model/editable_cpp_info/apply_test.py\n@@ -6,12 +6,11 @@\n import textwrap\n import unittest\n \n-from conans.model.editable_cpp_info import EditableLayout\n from conans.model.build_info import CppInfo\n+from conans.model.editable_cpp_info import EditableLayout\n+from conans.model.ref import ConanFileReference\n from conans.test.utils.test_files import temp_folder\n from conans.util.files import save\n-from conans.model.ref import 
ConanFileReference\n-\n \n base_content = textwrap.dedent(\"\"\"\\\n [{namespace}includedirs]\n@@ -35,6 +34,7 @@ def setUp(self):\n self.test_folder = temp_folder()\n self.layout_filepath = os.path.join(self.test_folder, \"layout\")\n self.ref = ConanFileReference.loads(\"libA/0.1@user/channel\")\n+ self.editable_cpp_info = EditableLayout(self.layout_filepath)\n \n def tearDown(self):\n shutil.rmtree(self.test_folder)\n@@ -42,9 +42,8 @@ def tearDown(self):\n def test_require_no_namespace(self):\n content = base_content.format(namespace=\"\", path_prefix=\"\")\n save(self.layout_filepath, content)\n- editable_cpp_info = EditableLayout.load(self.layout_filepath)\n cpp_info = CppInfo(None)\n- editable_cpp_info.apply_to(self.ref, cpp_info, settings=None, options=None)\n+ self.editable_cpp_info.apply_to(self.ref, cpp_info, settings=None, options=None)\n self.assertListEqual(cpp_info.includedirs, ['dirs/includedirs'])\n self.assertListEqual(cpp_info.libdirs, ['dirs/libdirs'])\n self.assertListEqual(cpp_info.resdirs, ['dirs/resdirs'])\n@@ -57,9 +56,8 @@ def test_require_namespace(self):\n base_content.format(namespace=\"libA/0.1@user/channel:\", path_prefix=\"libA/\")\n ])\n save(self.layout_filepath, content)\n- editable_cpp_info = EditableLayout.load(self.layout_filepath)\n cpp_info = CppInfo(None)\n- editable_cpp_info.apply_to(self.ref, cpp_info, settings=None, options=None)\n+ self.editable_cpp_info.apply_to(self.ref, cpp_info, settings=None, options=None)\n self.assertListEqual(cpp_info.includedirs, ['libA/dirs/includedirs'])\n self.assertListEqual(cpp_info.libdirs, ['libA/dirs/libdirs'])\n self.assertListEqual(cpp_info.resdirs, ['libA/dirs/resdirs'])\n@@ -68,7 +66,7 @@ def test_require_namespace(self):\n \n cpp_info = CppInfo(None)\n other = ConanFileReference.loads(\"other/0.1@user/channel\")\n- editable_cpp_info.apply_to(other, cpp_info, settings=None, options=None)\n+ self.editable_cpp_info.apply_to(other, cpp_info, settings=None, options=None)\n self.assertListEqual(cpp_info.includedirs, ['dirs/includedirs'])\n self.assertListEqual(cpp_info.libdirs, ['dirs/libdirs'])\n self.assertListEqual(cpp_info.resdirs, ['dirs/resdirs'])\ndiff --git a/conans/test/unittests/model/editable_cpp_info/load_data_test.py b/conans/test/unittests/model/editable_cpp_info/load_data_test.py\nnew file mode 100644\nindex 00000000000..55e1daa3db4\n--- /dev/null\n+++ b/conans/test/unittests/model/editable_cpp_info/load_data_test.py\n@@ -0,0 +1,54 @@\n+# coding=utf-8\n+\n+import os\n+import shutil\n+import textwrap\n+import unittest\n+\n+from conans.errors import ConanException\n+from conans.model.editable_cpp_info import EditableLayout\n+from conans.test.utils.test_files import temp_folder\n+from conans.util.files import save\n+\n+\n+class ParseTest(unittest.TestCase):\n+ def setUp(self):\n+ self.test_folder = temp_folder()\n+ self.layout_filepath = os.path.join(self.test_folder, \"layout\")\n+ self.editable_cpp_info = EditableLayout(self.layout_filepath)\n+\n+ def tearDown(self):\n+ shutil.rmtree(self.test_folder)\n+\n+ def test_field_error(self):\n+ content = textwrap.dedent(\"\"\"\n+ [includedrs]\n+ something\n+ \"\"\")\n+ save(self.layout_filepath, content)\n+ with self.assertRaisesRegexp(ConanException, \"Wrong cpp_info field 'includedrs' in layout\"):\n+ _ = self.editable_cpp_info._load_data(ref=None, settings=None, options=None)\n+ content = textwrap.dedent(\"\"\"\n+ [*:includedrs]\n+ something\n+ \"\"\")\n+ save(self.layout_filepath, content)\n+ with self.assertRaisesRegexp(ConanException, \"Wrong cpp_info 
field 'includedrs' in layout\"):\n+ _ = self.editable_cpp_info._load_data(ref=None, settings=None, options=None)\n+\n+ content = textwrap.dedent(\"\"\"\n+ [*:includedirs]\n+ something\n+ \"\"\")\n+ save(self.layout_filepath, content)\n+ with self.assertRaisesRegexp(ConanException, \"Wrong package reference '\\*' in layout file\"):\n+ _ = self.editable_cpp_info._load_data(ref=None, settings=None, options=None)\n+\n+ content = textwrap.dedent(\"\"\"\n+ [pkg/version@user/channel:revision:includedirs]\n+ something\n+ \"\"\")\n+ save(self.layout_filepath, content)\n+ with self.assertRaisesRegexp(ConanException, \"Wrong package reference \"\n+ \"'pkg/version@user/channel:revision' in layout file\"):\n+ _ = self.editable_cpp_info._load_data(ref=None, settings=None, options=None)\ndiff --git a/conans/test/unittests/model/editable_cpp_info/parse_test.py b/conans/test/unittests/model/editable_cpp_info/parse_test.py\nindex 1c3ae67a261..babafb36b94 100644\n--- a/conans/test/unittests/model/editable_cpp_info/parse_test.py\n+++ b/conans/test/unittests/model/editable_cpp_info/parse_test.py\n@@ -1,12 +1,14 @@\n # coding=utf-8\n \n-import textwrap\n import os\n import shutil\n+import textwrap\n import unittest\n \n-from conans.errors import ConanException\n+from conans.client.conf import default_settings_yml\n from conans.model.editable_cpp_info import EditableLayout\n+from conans.model.options import Options, PackageOptions\n+from conans.model.settings import Settings\n from conans.test.utils.test_files import temp_folder\n from conans.util.files import save\n \n@@ -15,39 +17,45 @@ class ParseTest(unittest.TestCase):\n def setUp(self):\n self.test_folder = temp_folder()\n self.layout_filepath = os.path.join(self.test_folder, \"layout\")\n+ self.editable_cpp_info = EditableLayout(self.layout_filepath)\n+\n+ self.settings = Settings.loads(default_settings_yml)\n+ self.options = Options(PackageOptions({\"shared\": [True, False]}))\n \n def tearDown(self):\n shutil.rmtree(self.test_folder)\n \n- def field_error_test(self):\n- content = textwrap.dedent(\"\"\"\n- [includedrs]\n- something\n- \"\"\")\n- save(self.layout_filepath, content)\n- with self.assertRaisesRegexp(ConanException, \"Wrong cpp_info field 'includedrs' in layout\"):\n- _ = EditableLayout.load(self.layout_filepath)\n- content = textwrap.dedent(\"\"\"\n- [*:includedrs]\n- something\n- \"\"\")\n- save(self.layout_filepath, content)\n- with self.assertRaisesRegexp(ConanException, \"Wrong cpp_info field 'includedrs' in layout\"):\n- _ = EditableLayout.load(self.layout_filepath)\n+ def test_render_basic(self):\n+ self.options.shared = True\n+ self.settings.build_type = \"Debug\"\n \n content = textwrap.dedent(\"\"\"\n- [*:includedirs]\n- something\n- \"\"\")\n+ [includedirs]\n+ {% if options.shared %}\n+ path/to/shared/{{ settings.build_type }}\n+ {% else %}\n+ not/expected\n+ {% endif %}\n+ \"\"\")\n save(self.layout_filepath, content)\n- with self.assertRaisesRegexp(ConanException, \"Wrong package reference '\\*' in layout file\"):\n- _ = EditableLayout.load(self.layout_filepath)\n+\n+ data, folders = self.editable_cpp_info._load_data(ref=None, settings=self.settings,\n+ options=self.options)\n+ self.assertEqual(data[None], {'includedirs': [\"path/to/shared/Debug\"]})\n+\n+ def test_render_loop(self):\n+ self.settings.build_type = \"Debug\"\n \n content = textwrap.dedent(\"\"\"\n- [pkg/version@user/channel:revision:includedirs]\n- something\n- \"\"\")\n+ [includedirs]\n+ {% for item in [\"cmp1\", \"cmp2\", \"cmp3\"] %}\n+ components\\{{ item 
}}\\include\\{% if item != \"cmp3\" %}{{ settings.build_type }}{% endif %}\n+ {% endfor %}\n+ \"\"\")\n save(self.layout_filepath, content)\n- with self.assertRaisesRegexp(ConanException, \"Wrong package reference \"\n- \"'pkg/version@user/channel:revision' in layout file\"):\n- _ = EditableLayout.load(self.layout_filepath)\n+\n+ data, folders = self.editable_cpp_info._load_data(ref=None, settings=self.settings,\n+ options=self.options)\n+ self.assertEqual(data[None], {'includedirs': [\"components/cmp1/include/Debug\",\n+ \"components/cmp2/include/Debug\",\n+ \"components/cmp3/include/\"]})\ndiff --git a/conans/test/unittests/model/editable_cpp_info/work_on_items_test.py b/conans/test/unittests/model/editable_cpp_info/work_on_items_test.py\ndeleted file mode 100644\nindex 39d09e3ef85..00000000000\n--- a/conans/test/unittests/model/editable_cpp_info/work_on_items_test.py\n+++ /dev/null\n@@ -1,36 +0,0 @@\n-# coding=utf-8\n-\n-import unittest\n-\n-from conans.client.conf import default_settings_yml\n-from conans.model.editable_cpp_info import EditableLayout\n-from conans.model.settings import Settings\n-\n-\n-class WorkOnItemsTest(unittest.TestCase):\n-\n- def test_empty(self):\n- self.assertEqual(\"\", EditableLayout._work_on_item(\"\", None, None))\n-\n- def test_placeholders(self):\n- settings = Settings.loads(default_settings_yml)\n- settings.compiler = 'Visual Studio'\n- settings.compiler.version = '14'\n- settings.build_type = 'Debug'\n-\n- self.assertEqual('src/Visual Studio14/Debug/include',\n- EditableLayout._work_on_item(\"src/{settings.compiler}{settings.compiler.version}/{settings.build_type}/include\",\n- settings=settings,\n- options=None))\n- self.assertEqual('C:/Visual Studio/include/',\n- EditableLayout._work_on_item(\"C:\\\\{settings.compiler}\\\\include\\\\\",\n- settings=settings,\n- options=None))\n- self.assertEqual('C:/Visual Studio/include/',\n- EditableLayout._work_on_item(\"C:\\{settings.compiler}\\include\\\\\",\n- settings=settings,\n- options=None))\n- self.assertEqual('/usr/path with spaces/Visual Studio/dir',\n- EditableLayout._work_on_item(\"/usr/path with spaces/{settings.compiler}/dir\",\n- settings=settings,\n- options=None))\ndiff --git a/conans/test/utils/tools.py b/conans/test/utils/tools.py\nindex 8fcc489c2bb..6e6a957107c 100644\n--- a/conans/test/utils/tools.py\n+++ b/conans/test/utils/tools.py\n@@ -3,24 +3,24 @@\n import os\n import random\n import shlex\n+import shutil\n import stat\n import subprocess\n import sys\n import tempfile\n+import threading\n+import time\n import unittest\n+import uuid\n from collections import Counter, OrderedDict\n from contextlib import contextmanager\n-from io import StringIO\n \n import bottle\n import nose\n import requests\n-import shutil\n import six\n-import threading\n-import time\n-import uuid\n from mock import Mock\n+from six import StringIO\n from six.moves.urllib.parse import quote, urlsplit, urlunsplit\n from webtest.app import TestApp\n \ndiff --git a/conans/util/templates.py b/conans/util/templates.py\nnew file mode 100644\nindex 00000000000..ec8b3875a15\n--- /dev/null\n+++ b/conans/util/templates.py\n@@ -0,0 +1,8 @@\n+# coding=utf-8\n+\n+from jinja2 import Template\n+\n+\n+def render_layout_file(content, ref=None, settings=None, options=None):\n+ t = Template(content)\n+ return t.render(reference=str(ref), settings=settings, options=options)\n"
}
|
[
{
"diff_hunk": "@@ -13,3 +13,4 @@ pygments>=2.0, <3.0\n astroid>=1.6.5\n deprecation>=2.0, <2.1\n tqdm>=4.28.1, <5\n+Jinja2==2.10",
"line": null,
"original_line": 16,
"original_start_line": null,
"path": "conans/requirements.txt",
"start_line": null,
"text": "@user1:\nMaybe this requirement could be more relaxed? So it minimizes collisions with other possible Jinja2 previously installed.\n\n@author:\nUpps, yes, didn't realize about it after adding it to the requirements.\r\n\r\nQuick testing: `sphinx` requires `>=2.3` and our tests are working with it... and Jinja2.3 is from Feb 10, 2010. Maybe go for `Jinja2>=2.3, <3`?"
}
] |
80615c2318647d827ecb00552fefc4ece9ce66b3
|
diff --git a/conans/client/tools/oss.py b/conans/client/tools/oss.py
index 7521320b979..e61f7cc2457 100644
--- a/conans/client/tools/oss.py
+++ b/conans/client/tools/oss.py
@@ -306,7 +306,7 @@ def detect_windows_subsystem():
# https://github.com/Microsoft/WSL/issues/423#issuecomment-221627364
with open("/proc/sys/kernel/osrelease") as f:
return WSL if f.read().endswith("Microsoft") else None
- except FileNotFoundError:
+ except IOError:
return None
try:
output = OSInfo.uname()
diff --git a/conans/model/editable_cpp_info.py b/conans/model/editable_cpp_info.py
index b2ab609199d..3164c43f5ef 100644
--- a/conans/model/editable_cpp_info.py
+++ b/conans/model/editable_cpp_info.py
@@ -2,10 +2,13 @@
import os
from collections import OrderedDict
+from six import StringIO
from six.moves import configparser
from conans.errors import ConanException
from conans.model.ref import ConanFileReference
+from conans.util.files import load
+from conans.util.templates import render_layout_file
DEFAULT_LAYOUT_FILE = "default"
LAYOUTS_FOLDER = 'layouts'
@@ -14,19 +17,17 @@
def get_editable_abs_path(path, cwd, cache_folder):
# Check the layout file exists, is correct, and get its abs-path
if path:
- layout_abs_path = path if os.path.isabs(path) else os.path.normpath(os.path.join(cwd, path))
+ layout_abs_path = os.path.normpath(os.path.join(cwd, path))
if not os.path.isfile(layout_abs_path):
layout_abs_path = os.path.join(cache_folder, LAYOUTS_FOLDER, path)
if not os.path.isfile(layout_abs_path):
raise ConanException("Couldn't find layout file: %s" % path)
- EditableLayout.load(layout_abs_path) # Try if it loads ok
return layout_abs_path
# Default only in cache
- layout_abs_path = os.path.join(cache_folder, LAYOUTS_FOLDER, DEFAULT_LAYOUT_FILE)
- if os.path.isfile(layout_abs_path):
- EditableLayout.load(layout_abs_path)
- return layout_abs_path
+ layout_default_path = os.path.join(cache_folder, LAYOUTS_FOLDER, DEFAULT_LAYOUT_FILE)
+ if os.path.isfile(layout_default_path):
+ return layout_default_path
class EditableLayout(object):
@@ -35,68 +36,76 @@ class EditableLayout(object):
cpp_info_dirs = ['includedirs', 'libdirs', 'resdirs', 'bindirs', 'builddirs', 'srcdirs']
folders = [BUILD_FOLDER, SOURCE_FOLDER]
- def __init__(self, data, folders):
- self._data = data
- self._folders = folders
+ def __init__(self, filepath):
+ self._filepath = filepath
def folder(self, ref, name, settings, options):
+ _, folders = self._load_data(ref, settings=settings, options=options)
try:
- path = self._folders.get(str(ref)) or self._folders.get(None) or {}
- path = path[name]
+ path = folders.get(str(ref)) or folders.get(None) or {}
+ return path[name]
except KeyError:
return None
- try:
- return self._work_on_item(path, settings, options)
- except Exception as e:
- raise ConanException("Error getting fHolder '%s' from layout: %s" % (str(name), str(e)))
@staticmethod
- def load(filepath):
- parser = configparser.ConfigParser(allow_no_value=True)
- parser.optionxform = str
+ def _work_on_item(value):
+ value = value.replace('\\', '/')
+ return value
+
+ def _parse_layout_file(self, ref, settings, options):
+ content = load(self._filepath)
try:
- parser.read(filepath)
- except configparser.Error as e:
- raise ConanException("Error parsing layout file: %s\n%s" % (filepath, str(e)))
+ content = render_layout_file(content, ref=ref, settings=settings, options=options)
+
+ parser = configparser.ConfigParser(allow_no_value=True)
+ parser.optionxform = str
+ parser.readfp(StringIO(content))
+ except (configparser.Error, ConanException) as e:
+ raise ConanException("Error parsing layout file '%s' (for reference '%s')\n%s" %
+ (self._filepath, str(ref), str(e)))
+
+ return parser
+
+ def _load_data(self, ref, settings, options):
+ parser = self._parse_layout_file(ref, settings, options)
+
+ # Build a convenient data structure
data = OrderedDict()
folders = {}
for section in parser.sections():
- ref, section_name = section.rsplit(":", 1) if ':' in section else (None, section)
+ reference, section_name = section.rsplit(":", 1) if ':' in section else (None, section)
+
if section_name in EditableLayout.folders:
items = [k for k, _ in parser.items(section)] or [""]
if len(items) > 1:
raise ConanException("'%s' with more than one value in layout file: %s"
- % (section_name, filepath))
- folders.setdefault(ref, {})[section_name] = items[0]
+ % (section_name, self._filepath))
+ folders.setdefault(reference, {})[section_name] = self._work_on_item(items[0])
continue
+
if section_name not in EditableLayout.cpp_info_dirs:
raise ConanException("Wrong cpp_info field '%s' in layout file: %s"
- % (section_name, filepath))
- if ref:
+ % (section_name, self._filepath))
+ if reference:
try:
- r = ConanFileReference.loads(ref, validate=True)
+ r = ConanFileReference.loads(reference, validate=True)
if r.revision:
raise ConanException("Don't provide revision in Editable layouts")
except ConanException:
raise ConanException("Wrong package reference '%s' in layout file: %s"
- % (ref, filepath))
- data.setdefault(ref, {})[section_name] = [k for k, _ in parser.items(section)]
-
- return EditableLayout(data, folders)
-
- @staticmethod
- def _work_on_item(value, settings, options):
- value = value.format(settings=settings, options=options)
- value = value.replace('\\', '/')
- return value
+ % (reference, self._filepath))
+ data.setdefault(reference, {})[section_name] =\
+ [self._work_on_item(k) for k, _ in parser.items(section)]
+ return data, folders
def apply_to(self, ref, cpp_info, settings=None, options=None):
- d = self._data
- data = d.get(str(ref)) or d.get(None) or {}
+ data, _ = self._load_data(ref, settings=settings, options=options)
+
+ # Apply the data to the cpp_info
+ data = data.get(str(ref)) or data.get(None) or {}
try:
for key, items in data.items():
- setattr(cpp_info, key, [self._work_on_item(item, settings, options)
- for item in items])
+ setattr(cpp_info, key, items)
except Exception as e:
raise ConanException("Error applying layout in '%s': %s" % (str(ref), str(e)))
diff --git a/conans/model/workspace.py b/conans/model/workspace.py
index 761a4ddfc1d..39236c1b7da 100644
--- a/conans/model/workspace.py
+++ b/conans/model/workspace.py
@@ -19,8 +19,7 @@ def __init__(self, base_folder, data, cache, ws_layout, ws_generators, ref):
self._conanfile_folder = data.pop("path", None) # The folder with the conanfile
layout = data.pop("layout", None)
if layout:
- self.layout = get_editable_abs_path(layout, self._base_folder,
- cache.conan_folder)
+ self.layout = get_editable_abs_path(layout, self._base_folder, cache.conan_folder)
else:
self.layout = ws_layout
@@ -52,6 +51,7 @@ def generate(self, cwd, graph, output):
ws_pkg = self._workspace_packages[ref]
layout = self._cache.package_layout(ref)
editable = layout.editable_cpp_info()
+
conanfile = node.conanfile
build = editable.folder(ref, EditableLayout.BUILD_FOLDER, conanfile.settings,
conanfile.options)
@@ -73,6 +73,7 @@ def generate(self, cwd, graph, output):
% (ref.name, ref.name))
else:
output.warn("CMake workspace: cannot 'add_subdirectory()'")
+
if add_subdirs:
cmake += "macro(conan_workspace_subdirectories)\n"
cmake += add_subdirs
diff --git a/conans/paths/package_layouts/package_editable_layout.py b/conans/paths/package_layouts/package_editable_layout.py
index ae4681605e4..cb48c71b66a 100644
--- a/conans/paths/package_layouts/package_editable_layout.py
+++ b/conans/paths/package_layouts/package_editable_layout.py
@@ -30,7 +30,7 @@ def conanfile(self):
def editable_cpp_info(self):
if self._layout_file:
if os.path.isfile(self._layout_file):
- return EditableLayout.load(self._layout_file)
+ return EditableLayout(self._layout_file)
else:
raise ConanException("Layout file not found: %s" % self._layout_file)
diff --git a/conans/requirements.txt b/conans/requirements.txt
index d787cd05cee..0ecd00db009 100644
--- a/conans/requirements.txt
+++ b/conans/requirements.txt
@@ -13,3 +13,4 @@ pygments>=2.0, <3.0
astroid>=1.6.5
deprecation>=2.0, <2.1
tqdm>=4.28.1, <5
+Jinja2>=2.3, <3
diff --git a/conans/test/functional/editable/consume_header_only_test.py b/conans/test/functional/editable/consume_header_only_test.py
index 1879f1d655e..913b2df40e0 100644
--- a/conans/test/functional/editable/consume_header_only_test.py
+++ b/conans/test/functional/editable/consume_header_only_test.py
@@ -2,16 +2,15 @@
import os
import tempfile
-import unittest
import textwrap
+import unittest
from parameterized import parameterized
-
+from conans.model.editable_cpp_info import DEFAULT_LAYOUT_FILE, LAYOUTS_FOLDER
from conans.test import CONAN_TEST_FOLDER
from conans.test.utils.tools import TestClient
from conans.util.files import save
-from conans.model.editable_cpp_info import DEFAULT_LAYOUT_FILE, LAYOUTS_FOLDER
class HeaderOnlyLibTestClient(TestClient):
diff --git a/conans/test/functional/editable/consume_settings_and_options_test.py b/conans/test/functional/editable/consume_settings_and_options_test.py
index df7e1d6bf8c..0d24be61c10 100644
--- a/conans/test/functional/editable/consume_settings_and_options_test.py
+++ b/conans/test/functional/editable/consume_settings_and_options_test.py
@@ -4,12 +4,13 @@
import os
import tempfile
import unittest
+
from parameterized import parameterized
+from conans.model.editable_cpp_info import DEFAULT_LAYOUT_FILE, LAYOUTS_FOLDER
from conans.test import CONAN_TEST_FOLDER
from conans.test.utils.tools import TestClient
from conans.util.files import save
-from conans.model.editable_cpp_info import DEFAULT_LAYOUT_FILE, LAYOUTS_FOLDER
class HeaderOnlyLibTestClient(TestClient):
@@ -48,7 +49,7 @@ def package_info(self):
"""
conan_package_layout = """
-[{namespace}includedirs]
+[%sincludedirs]
src/include/{{settings.build_type}}/{{options.shared}}
"""
@@ -67,11 +68,11 @@ def __init__(self, use_repo_file, *args, **kwargs):
}
if use_repo_file:
- files["mylayout"] = self.conan_package_layout.format(namespace="")
+ files["mylayout"] = self.conan_package_layout % ""
else:
file_path = os.path.join(self.cache.conan_folder, LAYOUTS_FOLDER, DEFAULT_LAYOUT_FILE)
save(file_path,
- self.conan_package_layout.format(namespace="MyLib/0.1@user/editable:"))
+ self.conan_package_layout % "MyLib/0.1@user/editable:")
self.save(files)
diff --git a/conans/test/functional/editable/layouts_test.py b/conans/test/functional/editable/layouts_test.py
index c8b21f75787..e9b5814ed0a 100644
--- a/conans/test/functional/editable/layouts_test.py
+++ b/conans/test/functional/editable/layouts_test.py
@@ -5,10 +5,10 @@
import textwrap
import unittest
-from conans.test.utils.tools import TestClient
-from conans.util.files import load, save_files, save
from conans.model.editable_cpp_info import LAYOUTS_FOLDER
from conans.test.utils.test_files import temp_folder
+from conans.test.utils.tools import TestClient
+from conans.util.files import load, save_files, save
class LayoutTest(unittest.TestCase):
@@ -187,7 +187,7 @@ class Pkg(ConanFile):
""")
layout_repo = textwrap.dedent("""
[includedirs]
- include_{settings.build_type}
+ include_{{settings.build_type}}
""")
client.save({"conanfile.py": conanfile,
@@ -200,8 +200,9 @@ class Pkg(ConanFile):
""")
client2.save({"conanfile.txt": consumer})
client2.run("install . -g cmake -s build_type=Debug", assert_error=True)
- self.assertIn("ERROR: Error applying layout in 'mytool/0.1@user/testing': "
- "'settings.build_type' doesn't exist", client2.out)
+ self.assertIn("ERROR: Error parsing layout file '{}' (for reference "
+ "'mytool/0.1@user/testing')\n'settings.build_type' doesn't exist".format(
+ os.path.join(client.current_folder, 'layout')), client2.out)
# Now add settings to conanfile
client.save({"conanfile.py": conanfile.replace("pass", 'settings = "build_type"')})
diff --git a/conans/test/integration/workspace_test.py b/conans/test/integration/workspace_test.py
index 52a5d5fdd3a..6e5031be0db 100644
--- a/conans/test/integration/workspace_test.py
+++ b/conans/test/integration/workspace_test.py
@@ -222,13 +222,13 @@ def files(name, depend=None):
""")
layout = dedent("""
[build_folder]
- build/{settings.build_type}
+ build/{{settings.build_type}}
[includedirs]
src
[libdirs]
- build/{settings.build_type}/lib
+ build/{{settings.build_type}}/lib
""")
client.save({"conanws.yml": project,
"layout": layout})
@@ -295,7 +295,7 @@ def files(name, depend=None):
""")
layout = dedent("""
[build_folder]
- build/{settings.build_type}
+ build/{{settings.build_type}}
[source_folder]
src
@@ -304,7 +304,7 @@ def files(name, depend=None):
src
[libdirs]
- build/{settings.build_type}/lib
+ build/{{settings.build_type}}/lib
""")
metacmake = dedent("""
@@ -422,7 +422,7 @@ def files(name, depend=None):
src
[libdirs]
- build/{settings.build_type}
+ build/{{settings.build_type}}
""")
metacmake = dedent("""
cmake_minimum_required(VERSION 3.3)
diff --git a/conans/test/unittests/model/editable_cpp_info/apply_test.py b/conans/test/unittests/model/editable_cpp_info/apply_test.py
index 011c2932b0c..12f6775326e 100644
--- a/conans/test/unittests/model/editable_cpp_info/apply_test.py
+++ b/conans/test/unittests/model/editable_cpp_info/apply_test.py
@@ -6,12 +6,11 @@
import textwrap
import unittest
-from conans.model.editable_cpp_info import EditableLayout
from conans.model.build_info import CppInfo
+from conans.model.editable_cpp_info import EditableLayout
+from conans.model.ref import ConanFileReference
from conans.test.utils.test_files import temp_folder
from conans.util.files import save
-from conans.model.ref import ConanFileReference
-
base_content = textwrap.dedent("""\
[{namespace}includedirs]
@@ -35,6 +34,7 @@ def setUp(self):
self.test_folder = temp_folder()
self.layout_filepath = os.path.join(self.test_folder, "layout")
self.ref = ConanFileReference.loads("libA/0.1@user/channel")
+ self.editable_cpp_info = EditableLayout(self.layout_filepath)
def tearDown(self):
shutil.rmtree(self.test_folder)
@@ -42,9 +42,8 @@ def tearDown(self):
def test_require_no_namespace(self):
content = base_content.format(namespace="", path_prefix="")
save(self.layout_filepath, content)
- editable_cpp_info = EditableLayout.load(self.layout_filepath)
cpp_info = CppInfo(None)
- editable_cpp_info.apply_to(self.ref, cpp_info, settings=None, options=None)
+ self.editable_cpp_info.apply_to(self.ref, cpp_info, settings=None, options=None)
self.assertListEqual(cpp_info.includedirs, ['dirs/includedirs'])
self.assertListEqual(cpp_info.libdirs, ['dirs/libdirs'])
self.assertListEqual(cpp_info.resdirs, ['dirs/resdirs'])
@@ -57,9 +56,8 @@ def test_require_namespace(self):
base_content.format(namespace="libA/0.1@user/channel:", path_prefix="libA/")
])
save(self.layout_filepath, content)
- editable_cpp_info = EditableLayout.load(self.layout_filepath)
cpp_info = CppInfo(None)
- editable_cpp_info.apply_to(self.ref, cpp_info, settings=None, options=None)
+ self.editable_cpp_info.apply_to(self.ref, cpp_info, settings=None, options=None)
self.assertListEqual(cpp_info.includedirs, ['libA/dirs/includedirs'])
self.assertListEqual(cpp_info.libdirs, ['libA/dirs/libdirs'])
self.assertListEqual(cpp_info.resdirs, ['libA/dirs/resdirs'])
@@ -68,7 +66,7 @@ def test_require_namespace(self):
cpp_info = CppInfo(None)
other = ConanFileReference.loads("other/0.1@user/channel")
- editable_cpp_info.apply_to(other, cpp_info, settings=None, options=None)
+ self.editable_cpp_info.apply_to(other, cpp_info, settings=None, options=None)
self.assertListEqual(cpp_info.includedirs, ['dirs/includedirs'])
self.assertListEqual(cpp_info.libdirs, ['dirs/libdirs'])
self.assertListEqual(cpp_info.resdirs, ['dirs/resdirs'])
diff --git a/conans/test/unittests/model/editable_cpp_info/load_data_test.py b/conans/test/unittests/model/editable_cpp_info/load_data_test.py
new file mode 100644
index 00000000000..55e1daa3db4
--- /dev/null
+++ b/conans/test/unittests/model/editable_cpp_info/load_data_test.py
@@ -0,0 +1,54 @@
+# coding=utf-8
+
+import os
+import shutil
+import textwrap
+import unittest
+
+from conans.errors import ConanException
+from conans.model.editable_cpp_info import EditableLayout
+from conans.test.utils.test_files import temp_folder
+from conans.util.files import save
+
+
+class ParseTest(unittest.TestCase):
+ def setUp(self):
+ self.test_folder = temp_folder()
+ self.layout_filepath = os.path.join(self.test_folder, "layout")
+ self.editable_cpp_info = EditableLayout(self.layout_filepath)
+
+ def tearDown(self):
+ shutil.rmtree(self.test_folder)
+
+ def test_field_error(self):
+ content = textwrap.dedent("""
+ [includedrs]
+ something
+ """)
+ save(self.layout_filepath, content)
+ with self.assertRaisesRegexp(ConanException, "Wrong cpp_info field 'includedrs' in layout"):
+ _ = self.editable_cpp_info._load_data(ref=None, settings=None, options=None)
+ content = textwrap.dedent("""
+ [*:includedrs]
+ something
+ """)
+ save(self.layout_filepath, content)
+ with self.assertRaisesRegexp(ConanException, "Wrong cpp_info field 'includedrs' in layout"):
+ _ = self.editable_cpp_info._load_data(ref=None, settings=None, options=None)
+
+ content = textwrap.dedent("""
+ [*:includedirs]
+ something
+ """)
+ save(self.layout_filepath, content)
+ with self.assertRaisesRegexp(ConanException, "Wrong package reference '\*' in layout file"):
+ _ = self.editable_cpp_info._load_data(ref=None, settings=None, options=None)
+
+ content = textwrap.dedent("""
+ [pkg/version@user/channel:revision:includedirs]
+ something
+ """)
+ save(self.layout_filepath, content)
+ with self.assertRaisesRegexp(ConanException, "Wrong package reference "
+ "'pkg/version@user/channel:revision' in layout file"):
+ _ = self.editable_cpp_info._load_data(ref=None, settings=None, options=None)
diff --git a/conans/test/unittests/model/editable_cpp_info/parse_test.py b/conans/test/unittests/model/editable_cpp_info/parse_test.py
index 1c3ae67a261..babafb36b94 100644
--- a/conans/test/unittests/model/editable_cpp_info/parse_test.py
+++ b/conans/test/unittests/model/editable_cpp_info/parse_test.py
@@ -1,12 +1,14 @@
# coding=utf-8
-import textwrap
import os
import shutil
+import textwrap
import unittest
-from conans.errors import ConanException
+from conans.client.conf import default_settings_yml
from conans.model.editable_cpp_info import EditableLayout
+from conans.model.options import Options, PackageOptions
+from conans.model.settings import Settings
from conans.test.utils.test_files import temp_folder
from conans.util.files import save
@@ -15,39 +17,45 @@ class ParseTest(unittest.TestCase):
def setUp(self):
self.test_folder = temp_folder()
self.layout_filepath = os.path.join(self.test_folder, "layout")
+ self.editable_cpp_info = EditableLayout(self.layout_filepath)
+
+ self.settings = Settings.loads(default_settings_yml)
+ self.options = Options(PackageOptions({"shared": [True, False]}))
def tearDown(self):
shutil.rmtree(self.test_folder)
- def field_error_test(self):
- content = textwrap.dedent("""
- [includedrs]
- something
- """)
- save(self.layout_filepath, content)
- with self.assertRaisesRegexp(ConanException, "Wrong cpp_info field 'includedrs' in layout"):
- _ = EditableLayout.load(self.layout_filepath)
- content = textwrap.dedent("""
- [*:includedrs]
- something
- """)
- save(self.layout_filepath, content)
- with self.assertRaisesRegexp(ConanException, "Wrong cpp_info field 'includedrs' in layout"):
- _ = EditableLayout.load(self.layout_filepath)
+ def test_render_basic(self):
+ self.options.shared = True
+ self.settings.build_type = "Debug"
content = textwrap.dedent("""
- [*:includedirs]
- something
- """)
+ [includedirs]
+ {% if options.shared %}
+ path/to/shared/{{ settings.build_type }}
+ {% else %}
+ not/expected
+ {% endif %}
+ """)
save(self.layout_filepath, content)
- with self.assertRaisesRegexp(ConanException, "Wrong package reference '\*' in layout file"):
- _ = EditableLayout.load(self.layout_filepath)
+
+ data, folders = self.editable_cpp_info._load_data(ref=None, settings=self.settings,
+ options=self.options)
+ self.assertEqual(data[None], {'includedirs': ["path/to/shared/Debug"]})
+
+ def test_render_loop(self):
+ self.settings.build_type = "Debug"
content = textwrap.dedent("""
- [pkg/version@user/channel:revision:includedirs]
- something
- """)
+ [includedirs]
+ {% for item in ["cmp1", "cmp2", "cmp3"] %}
+ components\{{ item }}\include\{% if item != "cmp3" %}{{ settings.build_type }}{% endif %}
+ {% endfor %}
+ """)
save(self.layout_filepath, content)
- with self.assertRaisesRegexp(ConanException, "Wrong package reference "
- "'pkg/version@user/channel:revision' in layout file"):
- _ = EditableLayout.load(self.layout_filepath)
+
+ data, folders = self.editable_cpp_info._load_data(ref=None, settings=self.settings,
+ options=self.options)
+ self.assertEqual(data[None], {'includedirs': ["components/cmp1/include/Debug",
+ "components/cmp2/include/Debug",
+ "components/cmp3/include/"]})
diff --git a/conans/test/unittests/model/editable_cpp_info/work_on_items_test.py b/conans/test/unittests/model/editable_cpp_info/work_on_items_test.py
deleted file mode 100644
index 39d09e3ef85..00000000000
--- a/conans/test/unittests/model/editable_cpp_info/work_on_items_test.py
+++ /dev/null
@@ -1,36 +0,0 @@
-# coding=utf-8
-
-import unittest
-
-from conans.client.conf import default_settings_yml
-from conans.model.editable_cpp_info import EditableLayout
-from conans.model.settings import Settings
-
-
-class WorkOnItemsTest(unittest.TestCase):
-
- def test_empty(self):
- self.assertEqual("", EditableLayout._work_on_item("", None, None))
-
- def test_placeholders(self):
- settings = Settings.loads(default_settings_yml)
- settings.compiler = 'Visual Studio'
- settings.compiler.version = '14'
- settings.build_type = 'Debug'
-
- self.assertEqual('src/Visual Studio14/Debug/include',
- EditableLayout._work_on_item("src/{settings.compiler}{settings.compiler.version}/{settings.build_type}/include",
- settings=settings,
- options=None))
- self.assertEqual('C:/Visual Studio/include/',
- EditableLayout._work_on_item("C:\\{settings.compiler}\\include\\",
- settings=settings,
- options=None))
- self.assertEqual('C:/Visual Studio/include/',
- EditableLayout._work_on_item("C:\{settings.compiler}\include\\",
- settings=settings,
- options=None))
- self.assertEqual('/usr/path with spaces/Visual Studio/dir',
- EditableLayout._work_on_item("/usr/path with spaces/{settings.compiler}/dir",
- settings=settings,
- options=None))
diff --git a/conans/test/utils/tools.py b/conans/test/utils/tools.py
index 8fcc489c2bb..6e6a957107c 100644
--- a/conans/test/utils/tools.py
+++ b/conans/test/utils/tools.py
@@ -3,24 +3,24 @@
import os
import random
import shlex
+import shutil
import stat
import subprocess
import sys
import tempfile
+import threading
+import time
import unittest
+import uuid
from collections import Counter, OrderedDict
from contextlib import contextmanager
-from io import StringIO
import bottle
import nose
import requests
-import shutil
import six
-import threading
-import time
-import uuid
from mock import Mock
+from six import StringIO
from six.moves.urllib.parse import quote, urlsplit, urlunsplit
from webtest.app import TestApp
diff --git a/conans/util/templates.py b/conans/util/templates.py
new file mode 100644
index 00000000000..ec8b3875a15
--- /dev/null
+++ b/conans/util/templates.py
@@ -0,0 +1,8 @@
+# coding=utf-8
+
+from jinja2 import Template
+
+
+def render_layout_file(content, ref=None, settings=None, options=None):
+ t = Template(content)
+ return t.render(reference=str(ref), settings=settings, options=options)
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
}
|
conan-io__conan-4495@096ca0a
|
conan-io/conan
|
Python
| 4,495
|
Do not allow an alias to override an existing package
|
Changelog: Fix: Do not allow an alias to override an existing package
Docs: omit
closes #4423
✅ Do not pass the `cache` object to the `export_alias` function.
@REVISIONS: 1
|
2019-02-08T19:07:44Z
|
Prevent conan alias from overwriting existing packages
When creating an alias package with `conan alias` it is easy to mix up the order of parameters:
* Correct: `conan alias boost/ALIAS@conan/stable boost/1.69.0@conan/stable`
* Wrong: `conan alias boost/1.69.0@conan/stable boost/ALIAS@conan/stable`
If you accidentally call it with the wrong order, you overwrite the package you initially wanted to reference. If this happens, you render your original package useless and need to remove it; then you can start all over again.
It would be perfect to have a built-in check that tells the user that the package `boost/1.69.0@conan/stable` already exists and would be overwritten by an alias package, and that asks whether the user is really sure this is what they want to do.
Happens to me with Conan 1.11.0.
|
I just found that a similar feature has been discussed within #3805 - the issue is just named in a way that it's hard to see on first glance what's going on there.
I have added it as "ux", because it is mostly a check. There are some possibilities:
- Throw an error if the package exists, and require ``--force``. Good, but will be breaking.
- Interactive: "are you sure you want to overwrite?". This might also break users using alias in CI (which is typical).
- Opt-in: define a ``--check`` argument (or something similar) that forces the alias to do this check and throws an error.
It is a simple feature, but I see no way to do it both effectively and in a non-breaking way.
Agree, `--check` is not the most intuitive. This kind of issue happens to people who are using `conan alias` for the first few times - typically before you know about optional parameters. When a user knows about the possible parameters he probably also knows in which order to use `conan alias`.
I'd say it's better to have it than to miss it.
There's another option that is non-breaking and more intuitive: Add a new command like `conan alias-with-check` (I'm sure there's a better name for it) that'll do the same as `conan alias --check`. This would be non-breaking, but newer users would stumble upon it intuitively. Though this would add to the complexity of conan itself (new command, possible issues within future versions, users mixing up alias and alias-with-check...). Just wanted to throw it out there.
Another idea which might get consensus as non-breaking: throw an error if the package already exists and it is NOT an alias already. Redefining an existing alias was the case that would have been breaking, but a check that you are not overwriting an existing (non-alias) package can be totally OK.
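A minimal, hypothetical sketch of that non-breaking guard (illustrative only, not Conan's actual implementation; the file path and helper name are made up):

```python
import os
import re


def existing_recipe_is_alias(exported_conanfile_path):
    """Illustrative guard: overwriting is only allowed when nothing is
    exported yet, or when the already exported recipe is itself an alias
    (so redefining an existing alias keeps working)."""
    if not os.path.exists(exported_conanfile_path):
        return True  # nothing would be overwritten
    with open(exported_conanfile_path) as f:
        content = f.read()
    # Alias recipes declare an ``alias = "<target>"`` class attribute
    return re.search(r'^\s*alias\s*=', content, re.MULTILINE) is not None


# Usage sketch: refuse the export when a real (non-alias) package already
# exists at the target reference (the path below is hypothetical).
exported = "/tmp/.conan/data/boost/1.69.0/conan/stable/export/conanfile.py"
if not existing_recipe_is_alias(exported):
    raise RuntimeError("Reference is already a package, refusing to overwrite it with an alias")
```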
|
[
{
"body": "When creating an alias package with `conan alias` it is easy to mix up the order of parameters:\r\n\r\n* Correct: `conan alias boost/ALIAS@conan/stable boost/1.69.0@conan/stable`\r\n* Wrong: `conan alias boost/1.69.0@conan/stable boost/ALIAS@conan/stable`\r\n\r\nIn case you're accidentally calling the wrong order you're overwriting the package you initially wanted to reference. If this happens you render your original package useless and need to remove it. Then you can start all over again.\r\nIt would be perfect to have a built-in check that tells the user that the existing package `boost/1.69.0@conan/stable` already exists and will be overwritten by an alias package - and if the user is really sure this is what he wants to do.\r\n\r\nHappens to me with Conan 1.11.0.",
"number": 4423,
"title": "Prevent conan alias from overwriting existing packages"
}
] |
d51bb7237a00a64b367663b1b3fc22cb175c0305
|
{
"head_commit": "096ca0ab70ec6022ec914d7d394d0a115d7350b6",
"head_commit_message": "Merge remote-tracking branch 'conan/develop' into fix/alias-existing",
"patch_to_review": "diff --git a/conans/client/cmd/export.py b/conans/client/cmd/export.py\nindex a85796c7206..f4c26225676 100644\n--- a/conans/client/cmd/export.py\n+++ b/conans/client/cmd/export.py\n@@ -15,29 +15,25 @@\n from conans.model.scm import detect_repo_type\n from conans.paths import CONANFILE\n from conans.search.search import search_recipes, search_packages\n-from conans.util.files import is_dirty, load, mkdir, rmdir, save, set_dirty, remove\n+from conans.util.files import is_dirty, load, rmdir, save, set_dirty, remove\n from conans.util.log import logger\n \n \n-def export_alias(reference, target_reference, cache, output):\n- if reference.name != target_reference.name:\n- raise ConanException(\"An alias can only be defined to a package with the same name\")\n+def export_alias(ref_layout, target_reference, output, revisions_enabled):\n conanfile = \"\"\"\n from conans import ConanFile\n \n class AliasConanfile(ConanFile):\n alias = \"%s\"\n-\"\"\" % target_reference.full_repr()\n+\"\"\" % target_reference\n \n- export_path = cache.export(reference)\n- mkdir(export_path)\n- save(os.path.join(export_path, CONANFILE), conanfile)\n- mkdir(cache.export_sources(reference))\n- digest = FileTreeManifest.create(export_path)\n- digest.save(export_path)\n+ save(ref_layout.conanfile(), conanfile)\n+ digest = FileTreeManifest.create(ref_layout.export())\n+ digest.save(folder=ref_layout.export())\n \n # Create the metadata for the alias\n- _update_revision_in_metadata(cache, output, None, reference, digest)\n+ _update_revision_in_metadata(ref_layout=ref_layout, revisions_enabled=revisions_enabled,\n+ output=output, path=None, digest=digest)\n \n \n def cmd_export(conanfile_path, conanfile, ref, keep_source, output, cache, hook_manager):\n@@ -98,7 +94,11 @@ def cmd_export(conanfile_path, conanfile, ref, keep_source, output, cache, hook_\n digest.save(package_layout.export())\n \n # Compute the revision for the recipe\n- _update_revision_in_metadata(cache, output, os.path.dirname(conanfile_path), ref, digest)\n+ _update_revision_in_metadata(ref_layout=package_layout,\n+ revisions_enabled=cache.config.revisions_enabled,\n+ output=output,\n+ path=os.path.dirname(conanfile_path),\n+ digest=digest)\n \n # FIXME: Conan 2.0 Clear the registry entry if the recipe has changed\n source_folder = package_layout.source()\n@@ -229,18 +229,18 @@ def _detect_scm_revision(path):\n return repo_obj.get_revision(), repo_type\n \n \n-def _update_revision_in_metadata(cache, output, path, ref, digest):\n+def _update_revision_in_metadata(ref_layout, revisions_enabled, output, path, digest):\n \n scm_revision_detected, repo_type = _detect_scm_revision(path)\n revision = scm_revision_detected or digest.summary_hash\n- if cache.config.revisions_enabled:\n+ if revisions_enabled:\n if scm_revision_detected:\n output.info(\"Using {} commit as the recipe\"\n \" revision: {} \".format(repo_type, revision))\n else:\n output.info(\"Using the exported files summary hash as the recipe\"\n \" revision: {} \".format(revision))\n- with cache.package_layout(ref).update_metadata() as metadata:\n+ with ref_layout.update_metadata() as metadata:\n metadata.recipe.revision = revision\n metadata.recipe.time = None\n \ndiff --git a/conans/client/conan_api.py b/conans/client/conan_api.py\nindex 0f079fa4827..8fa80feba7e 100644\n--- a/conans/client/conan_api.py\n+++ b/conans/client/conan_api.py\n@@ -45,6 +45,7 @@\n from conans.client.userio import UserIO\n from conans.errors import ConanException, NotFoundException\n from 
conans.model.conan_file import get_env_context_manager\n+from conans.model.editable_cpp_info import get_editable_abs_path\n from conans.model.graph_info import GraphInfo, GRAPH_INFO_FILE\n from conans.model.ref import ConanFileReference, PackageReference, check_valid_ref\n from conans.model.version import Version\n@@ -56,7 +57,6 @@\n from conans.util.files import exception_message_safe, mkdir, save_files\n from conans.util.log import configure_logger\n from conans.util.tracer import log_command, log_exception\n-from conans.model.editable_cpp_info import get_editable_abs_path\n \n default_manifest_folder = '.conan_manifests'\n \n@@ -934,7 +934,23 @@ def get_path(self, reference, package_id=None, path=None, remote_name=None):\n def export_alias(self, reference, target_reference):\n ref = ConanFileReference.loads(reference)\n target_ref = ConanFileReference.loads(target_reference)\n- return export_alias(ref, target_ref, self._cache, self._user_io.out)\n+\n+ if ref.name != target_ref.name:\n+ raise ConanException(\"An alias can only be defined to a package with the same name\")\n+\n+ # Do not allow to override an existing package\n+ alias_conanfile_path = self._cache.package_layout(ref).conanfile()\n+ if os.path.exists(alias_conanfile_path):\n+ conanfile_class = self._loader.load_class(alias_conanfile_path)\n+ conanfile = conanfile_class(self._user_io.out, None, str(ref))\n+ if not getattr(conanfile, 'alias', None):\n+ raise ConanException(\"Reference '{}' is already a package, remove it before creating\"\n+ \" and alias with the same name\".format(ref))\n+\n+ ref_layout = self._cache.package_layout(ref)\n+ return export_alias(ref_layout, str(target_ref),\n+ revisions_enabled=self._cache.config.revisions_enabled,\n+ output=self._user_io.out)\n \n @api_method\n def get_default_remote(self):\ndiff --git a/conans/test/functional/command/alias_test.py b/conans/test/functional/command/alias_test.py\nindex bfb2d1ce47a..2cc7611b9e0 100644\n--- a/conans/test/functional/command/alias_test.py\n+++ b/conans/test/functional/command/alias_test.py\n@@ -1,4 +1,5 @@\n import os\n+import textwrap\n import unittest\n \n from parameterized.parameterized import parameterized\n@@ -429,23 +430,24 @@ def test_basic_test(self):\n servers = {\"default\": test_server}\n client = TestClient(servers=servers, users={\"default\": [(\"lasote\", \"mypass\")]})\n for i in (1, 2):\n- conanfile = \"\"\"from conans import ConanFile\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n \n-class TestConan(ConanFile):\n- name = \"Hello\"\n- version = \"0.%s\"\n- \"\"\" % i\n+ class TestConan(ConanFile):\n+ name = \"Hello\"\n+ version = \"0.%s\"\n+ \"\"\" % i)\n client.save({\"conanfile.py\": conanfile})\n client.run(\"export . lasote/channel\")\n \n client.run(\"alias Hello/0.X@lasote/channel Hello/0.1@lasote/channel\")\n- conanfile_chat = \"\"\"from conans import ConanFile\n-\n-class TestConan(ConanFile):\n- name = \"Chat\"\n- version = \"1.0\"\n- requires = \"Hello/0.X@lasote/channel\"\n- \"\"\"\n+ conanfile_chat = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ class TestConan(ConanFile):\n+ name = \"Chat\"\n+ version = \"1.0\"\n+ requires = \"Hello/0.X@lasote/channel\"\n+ \"\"\")\n client.save({\"conanfile.py\": conanfile_chat}, clean_first=True)\n client.run(\"export . lasote/channel\")\n client.save({\"conanfile.txt\": \"[requires]\\nChat/1.0@lasote/channel\"}, clean_first=True)\n@@ -469,3 +471,43 @@ class TestConan(ConanFile):\n client.run(\"install . 
--build=missing\")\n self.assertIn(\"Hello/0.2\", client.user_io.out)\n self.assertNotIn(\"Hello/0.1\", client.user_io.out)\n+\n+ def test_not_override_package(self):\n+ \"\"\" Do not override a package with an alias\n+\n+ If we create an alias with the same name as an existing package, it will\n+ override the package without any warning.\n+ \"\"\"\n+ t = TestClient()\n+ conanfile = textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ class Pkg(ConanFile):\n+ description = \"{}\"\n+ \"\"\")\n+\n+ # Create two packages\n+ reference1 = \"PkgA/0.1@user/testing\"\n+ t.save({\"conanfile.py\": conanfile.format(reference1)})\n+ t.run(\"export . {}\".format(reference1))\n+\n+ reference2 = \"PkgA/0.2@user/testing\"\n+ t.save({\"conanfile.py\": conanfile.format(reference2)})\n+ t.run(\"export . {}\".format(reference2))\n+\n+ # Now create an alias overriding one of them\n+ alias = reference2\n+ t.run(\"alias {alias} {reference}\".format(alias=alias, reference=reference1),\n+ assert_error=True)\n+ self.assertIn(\"ERROR: Reference '{}' is already a package\".format(alias), t.out)\n+\n+ # Check that the package is not damaged\n+ t.run(\"inspect {} -a description\".format(reference2))\n+ self.assertIn(\"description: {}\".format(reference2), t.out)\n+\n+ # Remove it, and create the alias again (twice, override an alias is allowed)\n+ t.run(\"remove {} -f\".format(reference2))\n+ t.run(\"alias {alias} {reference}\".format(alias=alias, reference=reference1))\n+ t.run(\"alias {alias} {reference}\".format(alias=alias, reference=reference1))\n+\n+ t.run(\"inspect {} -a description\".format(reference2))\n+ self.assertIn(\"description: None\", t.out) # The alias conanfile doesn't have description\n"
}
|
[
{
"diff_hunk": "@@ -15,29 +15,25 @@\n from conans.model.scm import detect_repo_type\n from conans.paths import CONANFILE\n from conans.search.search import search_recipes, search_packages\n-from conans.util.files import is_dirty, load, mkdir, rmdir, save, set_dirty, remove\n+from conans.util.files import is_dirty, load, rmdir, save, set_dirty, remove\n from conans.util.log import logger\n \n \n-def export_alias(reference, target_reference, cache, output):\n- if reference.name != target_reference.name:\n- raise ConanException(\"An alias can only be defined to a package with the same name\")\n+def export_alias(ref_layout, target_reference, output, revisions_enabled):\n conanfile = \"\"\"\n from conans import ConanFile\n \n class AliasConanfile(ConanFile):\n alias = \"%s\"\n-\"\"\" % target_reference.full_repr()\n+\"\"\" % target_reference",
"line": null,
"original_line": 28,
"original_start_line": null,
"path": "conans/client/cmd/export.py",
"start_line": null,
"text": "@user2:\nThis is changing behavior, not allowing alias to revisions, why? Discuss with @user1 what behavior we want for alias.\n\n@user1:\nI relaunched the CI for this with revisions enabled, because yes, it should break the alias test with revisions."
},
{
"diff_hunk": "@@ -15,29 +15,25 @@\n from conans.model.scm import detect_repo_type\n from conans.paths import CONANFILE\n from conans.search.search import search_recipes, search_packages\n-from conans.util.files import is_dirty, load, mkdir, rmdir, save, set_dirty, remove\n+from conans.util.files import is_dirty, load, rmdir, save, set_dirty, remove\n from conans.util.log import logger\n \n \n-def export_alias(reference, target_reference, cache, output):\n- if reference.name != target_reference.name:\n- raise ConanException(\"An alias can only be defined to a package with the same name\")\n+def export_alias(ref_layout, target_reference, output, revisions_enabled):\n conanfile = \"\"\"\n from conans import ConanFile\n \n class AliasConanfile(ConanFile):\n alias = \"%s\"\n-\"\"\" % target_reference.full_repr()\n+\"\"\" % target_reference\n \n- export_path = cache.export(reference)\n- mkdir(export_path)\n- save(os.path.join(export_path, CONANFILE), conanfile)\n- mkdir(cache.export_sources(reference))\n- digest = FileTreeManifest.create(export_path)\n- digest.save(export_path)\n+ save(ref_layout.conanfile(), conanfile)\n+ digest = FileTreeManifest.create(ref_layout.export())\n+ digest.save(folder=ref_layout.export())\n \n # Create the metadata for the alias\n- _update_revision_in_metadata(cache, output, None, reference, digest)\n+ _update_revision_in_metadata(ref_layout=ref_layout, revisions_enabled=revisions_enabled,",
"line": null,
"original_line": 35,
"original_start_line": null,
"path": "conans/client/cmd/export.py",
"start_line": null,
"text": "@user1:\nChanging this to use layouts should have been done in the other PR, not in this one."
}
] |
aab5fa1b29e67173eda33273c7a864697ad6e48a
|
diff --git a/conans/client/conan_api.py b/conans/client/conan_api.py
index 0f079fa4827..22017c536d6 100644
--- a/conans/client/conan_api.py
+++ b/conans/client/conan_api.py
@@ -45,6 +45,7 @@
from conans.client.userio import UserIO
from conans.errors import ConanException, NotFoundException
from conans.model.conan_file import get_env_context_manager
+from conans.model.editable_cpp_info import get_editable_abs_path
from conans.model.graph_info import GraphInfo, GRAPH_INFO_FILE
from conans.model.ref import ConanFileReference, PackageReference, check_valid_ref
from conans.model.version import Version
@@ -56,7 +57,6 @@
from conans.util.files import exception_message_safe, mkdir, save_files
from conans.util.log import configure_logger
from conans.util.tracer import log_command, log_exception
-from conans.model.editable_cpp_info import get_editable_abs_path
default_manifest_folder = '.conan_manifests'
@@ -934,6 +934,16 @@ def get_path(self, reference, package_id=None, path=None, remote_name=None):
def export_alias(self, reference, target_reference):
ref = ConanFileReference.loads(reference)
target_ref = ConanFileReference.loads(target_reference)
+
+ # Do not allow to override an existing package
+ alias_conanfile_path = self._cache.package_layout(ref).conanfile()
+ if os.path.exists(alias_conanfile_path):
+ conanfile_class = self._loader.load_class(alias_conanfile_path)
+ conanfile = conanfile_class(self._user_io.out, None, str(ref))
+ if not getattr(conanfile, 'alias', None):
+ raise ConanException("Reference '{}' is already a package, remove it before creating"
+ " and alias with the same name".format(ref))
+
return export_alias(ref, target_ref, self._cache, self._user_io.out)
@api_method
diff --git a/conans/test/functional/command/alias_test.py b/conans/test/functional/command/alias_test.py
index bfb2d1ce47a..2cc7611b9e0 100644
--- a/conans/test/functional/command/alias_test.py
+++ b/conans/test/functional/command/alias_test.py
@@ -1,4 +1,5 @@
import os
+import textwrap
import unittest
from parameterized.parameterized import parameterized
@@ -429,23 +430,24 @@ def test_basic_test(self):
servers = {"default": test_server}
client = TestClient(servers=servers, users={"default": [("lasote", "mypass")]})
for i in (1, 2):
- conanfile = """from conans import ConanFile
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
-class TestConan(ConanFile):
- name = "Hello"
- version = "0.%s"
- """ % i
+ class TestConan(ConanFile):
+ name = "Hello"
+ version = "0.%s"
+ """ % i)
client.save({"conanfile.py": conanfile})
client.run("export . lasote/channel")
client.run("alias Hello/0.X@lasote/channel Hello/0.1@lasote/channel")
- conanfile_chat = """from conans import ConanFile
-
-class TestConan(ConanFile):
- name = "Chat"
- version = "1.0"
- requires = "Hello/0.X@lasote/channel"
- """
+ conanfile_chat = textwrap.dedent("""
+ from conans import ConanFile
+ class TestConan(ConanFile):
+ name = "Chat"
+ version = "1.0"
+ requires = "Hello/0.X@lasote/channel"
+ """)
client.save({"conanfile.py": conanfile_chat}, clean_first=True)
client.run("export . lasote/channel")
client.save({"conanfile.txt": "[requires]\nChat/1.0@lasote/channel"}, clean_first=True)
@@ -469,3 +471,43 @@ class TestConan(ConanFile):
client.run("install . --build=missing")
self.assertIn("Hello/0.2", client.user_io.out)
self.assertNotIn("Hello/0.1", client.user_io.out)
+
+ def test_not_override_package(self):
+ """ Do not override a package with an alias
+
+ If we create an alias with the same name as an existing package, it will
+ override the package without any warning.
+ """
+ t = TestClient()
+ conanfile = textwrap.dedent("""
+ from conans import ConanFile
+ class Pkg(ConanFile):
+ description = "{}"
+ """)
+
+ # Create two packages
+ reference1 = "PkgA/0.1@user/testing"
+ t.save({"conanfile.py": conanfile.format(reference1)})
+ t.run("export . {}".format(reference1))
+
+ reference2 = "PkgA/0.2@user/testing"
+ t.save({"conanfile.py": conanfile.format(reference2)})
+ t.run("export . {}".format(reference2))
+
+ # Now create an alias overriding one of them
+ alias = reference2
+ t.run("alias {alias} {reference}".format(alias=alias, reference=reference1),
+ assert_error=True)
+ self.assertIn("ERROR: Reference '{}' is already a package".format(alias), t.out)
+
+ # Check that the package is not damaged
+ t.run("inspect {} -a description".format(reference2))
+ self.assertIn("description: {}".format(reference2), t.out)
+
+ # Remove it, and create the alias again (twice, override an alias is allowed)
+ t.run("remove {} -f".format(reference2))
+ t.run("alias {alias} {reference}".format(alias=alias, reference=reference1))
+ t.run("alias {alias} {reference}".format(alias=alias, reference=reference1))
+
+ t.run("inspect {} -a description".format(reference2))
+ self.assertIn("description: None", t.out) # The alias conanfile doesn't have description
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
frappe__erpnext-45548@447e5eb
|
frappe/erpnext
|
Python
| 45,548
|
fix: Gross Profit Report with Correct Totals and Gross Margin
|
Support ticket: [Support Ticket - 30225](https://support.frappe.io/helpdesk/tickets/30225)
- closes : #35942
Before :
The add_total_row option was checked, causing the subtotals to combine, which resulted in incorrect **Selling Amount**, **Buying Amount**, **Gross Profit**, and **Gross Profit Percentage** values.
<img width="847" alt="Screenshot 2025-01-27 at 6 55 10 PM" src="https://github.com/user-attachments/assets/374c917f-727b-4bdf-a93b-2dc548a440a9" />
After :
The total Selling Amount and Buying Amount are now calculated correctly.
The total **Gross Profit** and **Gross Profit Percentage** are calculated using a formula.
The add_total_row option was disabled using a patch.
<img width="847" alt="Screenshot 2025-01-27 at 9 58 45 PM" src="https://github.com/user-attachments/assets/45151881-71a0-4ab1-a55b-15f42fc99321" />
|
2025-01-27T13:34:35Z
|
Gross Profit Report error
### Information about bug
Using the Gross Profit report grouped by invoices shows an incorrect gross profit average figure in the 'total' row. Also, when changing the grouping to item code, it again gives another average figure which is also wrong.
### Module
accounts
### Version
ERPNext: v14.28.0 (HEAD)
Frappe Framework: v14.40.1 (HEAD)
India Compliance: v14.10.2 (HEAD)
### Installation method
None
### Relevant log output / Stack trace / Full Error Message.
_No response_
|
I think I figured out the error. In each grouping the figure changes, except in the monthly grouping, where it gives the right figure.
So it takes the average of the gross profit % column rather than calculating it from the totals. That has to be fixed, as it's a fundamental error: averages have to be computed on the totals (i.e. weighted).
Hoping for a resolution to this fundamental issue.
Can confirm, still an issue in v15
```
Frappe CRM: v1.19.0
South Africa Customisations: v0.1.6
ERPNext: v15.34.2
Frappe Framework: v15.40.3
Helpdesk: v0.10.0
Frappe HR: v15.28.3
```
https://github.com/frappe/erpnext/issues/33911
|
[
{
"body": "### Information about bug\n\nUsing the Gross Profit report grouped by invoices shows incorrect gross profit average figure in the 'total' row. Also when changing the grouping to item code, it again gives another average figure which is also wrong. \n\n### Module\n\naccounts\n\n### Version\n\nERPNext: v14.28.0 (HEAD)\r\nFrappe Framework: v14.40.1 (HEAD)\r\nIndia Compliance: v14.10.2 (HEAD)\n\n### Installation method\n\nNone\n\n### Relevant log output / Stack trace / Full Error Message.\n\n_No response_",
"number": 35942,
"title": "Gross Profit Report error"
}
] |
47c2c5377c162c896751cb1988fb80a9ba1ae608
|
{
"head_commit": "447e5ebac7bc31ecf108ae7794989ea9f1ed9abb",
"head_commit_message": "test: added test cases for total",
"patch_to_review": "diff --git a/erpnext/accounts/report/gross_profit/gross_profit.json b/erpnext/accounts/report/gross_profit/gross_profit.json\nindex 0730ffd77e5b..dfb7a991e3ea 100644\n--- a/erpnext/accounts/report/gross_profit/gross_profit.json\n+++ b/erpnext/accounts/report/gross_profit/gross_profit.json\n@@ -1,5 +1,5 @@\n {\n- \"add_total_row\": 1,\n+ \"add_total_row\": 0,\n \"columns\": [],\n \"creation\": \"2013-02-25 17:03:34\",\n \"disable_prepared_report\": 0,\n@@ -9,7 +9,7 @@\n \"filters\": [],\n \"idx\": 3,\n \"is_standard\": \"Yes\",\n- \"modified\": \"2022-02-11 10:18:36.956558\",\n+ \"modified\": \"2025-01-27 18:40:24.493829\",\n \"modified_by\": \"Administrator\",\n \"module\": \"Accounts\",\n \"name\": \"Gross Profit\",\ndiff --git a/erpnext/accounts/report/gross_profit/gross_profit.py b/erpnext/accounts/report/gross_profit/gross_profit.py\nindex 5df37603b9a9..7afbf2d0dbb4 100644\n--- a/erpnext/accounts/report/gross_profit/gross_profit.py\n+++ b/erpnext/accounts/report/gross_profit/gross_profit.py\n@@ -178,7 +178,14 @@ def get_data_when_grouped_by_invoice(columns, gross_profit_data, filters, group_\n \t# removing Item Code and Item Name columns\n \tdel columns[4:6]\n \n+\ttotal_base_amount = 0\n+\ttotal_buying_amount = 0\n+\n \tfor src in gross_profit_data.si_list:\n+\t\tif src.indent == 1:\n+\t\t\ttotal_base_amount += src.base_amount or 0.0\n+\t\t\ttotal_buying_amount += src.buying_amount or 0.0\n+\n \t\trow = frappe._dict()\n \t\trow.indent = src.indent\n \t\trow.parent_invoice = src.parent_invoice\n@@ -189,6 +196,24 @@ def get_data_when_grouped_by_invoice(columns, gross_profit_data, filters, group_\n \n \t\tdata.append(row)\n \n+\ttotal_gross_profit = total_base_amount - total_buying_amount\n+\tdata.append(\n+\t\tfrappe._dict(\n+\t\t\t{\n+\t\t\t\t\"sales_invoice\": \"Total\",\n+\t\t\t\t\"qty\": None,\n+\t\t\t\t\"avg._selling_rate\": None,\n+\t\t\t\t\"valuation_rate\": None,\n+\t\t\t\t\"selling_amount\": total_base_amount,\n+\t\t\t\t\"buying_amount\": total_buying_amount,\n+\t\t\t\t\"gross_profit\": total_gross_profit,\n+\t\t\t\t\"gross_profit_%\": flt((total_gross_profit / total_base_amount) * 100.0, 3)\n+\t\t\t\tif total_base_amount\n+\t\t\t\telse 0,\n+\t\t\t}\n+\t\t)\n+\t)\n+\n \n def get_data_when_not_grouped_by_invoice(gross_profit_data, filters, group_wise_columns, data):\n \tfor src in gross_profit_data.grouped_data:\ndiff --git a/erpnext/accounts/report/gross_profit/test_gross_profit.py b/erpnext/accounts/report/gross_profit/test_gross_profit.py\nindex 6d060db1d155..b483555d701b 100644\n--- a/erpnext/accounts/report/gross_profit/test_gross_profit.py\n+++ b/erpnext/accounts/report/gross_profit/test_gross_profit.py\n@@ -612,3 +612,33 @@ def test_valuation_rate_without_previous_sle(self):\n \t\titem_from_sinv2 = [x for x in data if x.parent_invoice == sinv2.name]\n \t\tself.assertEqual(len(item_from_sinv2), 1)\n \t\tself.assertEqual(1800, item_from_sinv2[0].valuation_rate)\n+\n+\tdef test_gross_profit_groupby_invoices(self):\n+\t\tcreate_sales_invoice(\n+\t\t\tqty=1,\n+\t\t\trate=100,\n+\t\t\tcompany=self.company,\n+\t\t\tcustomer=self.customer,\n+\t\t\titem_code=self.item,\n+\t\t\titem_name=self.item,\n+\t\t\tcost_center=self.cost_center,\n+\t\t\twarehouse=self.warehouse,\n+\t\t\tdebit_to=self.debit_to,\n+\t\t\tparent_cost_center=self.cost_center,\n+\t\t\tupdate_stock=0,\n+\t\t\tcurrency=\"INR\",\n+\t\t\tincome_account=self.income_account,\n+\t\t\texpense_account=self.expense_account,\n+\t\t)\n+\n+\t\tfilters = frappe._dict(\n+\t\t\tcompany=self.company, 
from_date=nowdate(), to_date=nowdate(), group_by=\"Invoice\"\n+\t\t)\n+\n+\t\t_, data = execute(filters=filters)\n+\t\ttotal = data[-1]\n+\n+\t\tself.assertEqual(total.selling_amount, 100.0)\n+\t\tself.assertEqual(total.buying_amount, 0.0)\n+\t\tself.assertEqual(total.gross_profit, 100.0)\n+\t\tself.assertEqual(total.get(\"gross_profit_%\"), 100.0)\ndiff --git a/erpnext/patches.txt b/erpnext/patches.txt\nindex 098000b0b03b..9b2688020ad9 100644\n--- a/erpnext/patches.txt\n+++ b/erpnext/patches.txt\n@@ -399,3 +399,4 @@ erpnext.patches.v15_0.rename_manufacturing_settings_field\n erpnext.patches.v15_0.migrate_checkbox_to_select_for_reconciliation_effect\n erpnext.patches.v15_0.sync_auto_reconcile_config\n execute:frappe.db.set_single_value(\"Accounts Settings\", \"exchange_gain_loss_posting_date\", \"Payment\")\n+erpnext.patches.v14_0.disable_add_row_in_gross_profit\n\\ No newline at end of file\ndiff --git a/erpnext/patches/v14_0/disable_add_row_in_gross_profit.py b/erpnext/patches/v14_0/disable_add_row_in_gross_profit.py\nnew file mode 100644\nindex 000000000000..d95503bef0aa\n--- /dev/null\n+++ b/erpnext/patches/v14_0/disable_add_row_in_gross_profit.py\n@@ -0,0 +1,5 @@\n+import frappe\n+\n+\n+def execute():\n+\tfrappe.db.set_value(\"Report\", \"Gross Profit\", \"add_total_row\", 0)\n"
}
|
[
{
"diff_hunk": "@@ -189,6 +196,24 @@ def get_data_when_grouped_by_invoice(columns, gross_profit_data, filters, group_\n \n \t\tdata.append(row)\n \n+\ttotal_gross_profit = total_base_amount - total_buying_amount\n+\tdata.append(\n+\t\tfrappe._dict(\n+\t\t\t{\n+\t\t\t\t\"sales_invoice\": \"Total\",\n+\t\t\t\t\"qty\": None,\n+\t\t\t\t\"avg._selling_rate\": None,\n+\t\t\t\t\"valuation_rate\": None,\n+\t\t\t\t\"selling_amount\": total_base_amount,\n+\t\t\t\t\"buying_amount\": total_buying_amount,\n+\t\t\t\t\"gross_profit\": total_gross_profit,\n+\t\t\t\t\"gross_profit_%\": flt((total_gross_profit / total_base_amount) * 100.0, 3)",
"line": null,
"original_line": 210,
"original_start_line": null,
"path": "erpnext/accounts/report/gross_profit/gross_profit.py",
"start_line": null,
"text": "@user1:\n Use `currency_precision` instead of hardcoding it."
}
] |
dbdc39d30ffeb5ec545dad23e543f8f114397abe
|
diff --git a/erpnext/accounts/report/gross_profit/gross_profit.json b/erpnext/accounts/report/gross_profit/gross_profit.json
index 0730ffd77e5b..dfb7a991e3ea 100644
--- a/erpnext/accounts/report/gross_profit/gross_profit.json
+++ b/erpnext/accounts/report/gross_profit/gross_profit.json
@@ -1,5 +1,5 @@
{
- "add_total_row": 1,
+ "add_total_row": 0,
"columns": [],
"creation": "2013-02-25 17:03:34",
"disable_prepared_report": 0,
@@ -9,7 +9,7 @@
"filters": [],
"idx": 3,
"is_standard": "Yes",
- "modified": "2022-02-11 10:18:36.956558",
+ "modified": "2025-01-27 18:40:24.493829",
"modified_by": "Administrator",
"module": "Accounts",
"name": "Gross Profit",
diff --git a/erpnext/accounts/report/gross_profit/gross_profit.py b/erpnext/accounts/report/gross_profit/gross_profit.py
index 5df37603b9a9..4802b0f35c1f 100644
--- a/erpnext/accounts/report/gross_profit/gross_profit.py
+++ b/erpnext/accounts/report/gross_profit/gross_profit.py
@@ -178,7 +178,14 @@ def get_data_when_grouped_by_invoice(columns, gross_profit_data, filters, group_
# removing Item Code and Item Name columns
del columns[4:6]
+ total_base_amount = 0
+ total_buying_amount = 0
+
for src in gross_profit_data.si_list:
+ if src.indent == 1:
+ total_base_amount += src.base_amount or 0.0
+ total_buying_amount += src.buying_amount or 0.0
+
row = frappe._dict()
row.indent = src.indent
row.parent_invoice = src.parent_invoice
@@ -189,6 +196,27 @@ def get_data_when_grouped_by_invoice(columns, gross_profit_data, filters, group_
data.append(row)
+ total_gross_profit = total_base_amount - total_buying_amount
+ data.append(
+ frappe._dict(
+ {
+ "sales_invoice": "Total",
+ "qty": None,
+ "avg._selling_rate": None,
+ "valuation_rate": None,
+ "selling_amount": total_base_amount,
+ "buying_amount": total_buying_amount,
+ "gross_profit": total_gross_profit,
+ "gross_profit_%": flt(
+ (total_gross_profit / total_base_amount) * 100.0,
+ cint(frappe.db.get_default("currency_precision")) or 3,
+ )
+ if total_base_amount
+ else 0,
+ }
+ )
+ )
+
def get_data_when_not_grouped_by_invoice(gross_profit_data, filters, group_wise_columns, data):
for src in gross_profit_data.grouped_data:
diff --git a/erpnext/accounts/report/gross_profit/test_gross_profit.py b/erpnext/accounts/report/gross_profit/test_gross_profit.py
index 6d060db1d155..b483555d701b 100644
--- a/erpnext/accounts/report/gross_profit/test_gross_profit.py
+++ b/erpnext/accounts/report/gross_profit/test_gross_profit.py
@@ -612,3 +612,33 @@ def test_valuation_rate_without_previous_sle(self):
item_from_sinv2 = [x for x in data if x.parent_invoice == sinv2.name]
self.assertEqual(len(item_from_sinv2), 1)
self.assertEqual(1800, item_from_sinv2[0].valuation_rate)
+
+ def test_gross_profit_groupby_invoices(self):
+ create_sales_invoice(
+ qty=1,
+ rate=100,
+ company=self.company,
+ customer=self.customer,
+ item_code=self.item,
+ item_name=self.item,
+ cost_center=self.cost_center,
+ warehouse=self.warehouse,
+ debit_to=self.debit_to,
+ parent_cost_center=self.cost_center,
+ update_stock=0,
+ currency="INR",
+ income_account=self.income_account,
+ expense_account=self.expense_account,
+ )
+
+ filters = frappe._dict(
+ company=self.company, from_date=nowdate(), to_date=nowdate(), group_by="Invoice"
+ )
+
+ _, data = execute(filters=filters)
+ total = data[-1]
+
+ self.assertEqual(total.selling_amount, 100.0)
+ self.assertEqual(total.buying_amount, 0.0)
+ self.assertEqual(total.gross_profit, 100.0)
+ self.assertEqual(total.get("gross_profit_%"), 100.0)
diff --git a/erpnext/patches.txt b/erpnext/patches.txt
index 098000b0b03b..9b2688020ad9 100644
--- a/erpnext/patches.txt
+++ b/erpnext/patches.txt
@@ -399,3 +399,4 @@ erpnext.patches.v15_0.rename_manufacturing_settings_field
erpnext.patches.v15_0.migrate_checkbox_to_select_for_reconciliation_effect
erpnext.patches.v15_0.sync_auto_reconcile_config
execute:frappe.db.set_single_value("Accounts Settings", "exchange_gain_loss_posting_date", "Payment")
+erpnext.patches.v14_0.disable_add_row_in_gross_profit
\ No newline at end of file
diff --git a/erpnext/patches/v14_0/disable_add_row_in_gross_profit.py b/erpnext/patches/v14_0/disable_add_row_in_gross_profit.py
new file mode 100644
index 000000000000..d95503bef0aa
--- /dev/null
+++ b/erpnext/patches/v14_0/disable_add_row_in_gross_profit.py
@@ -0,0 +1,5 @@
+import frappe
+
+
+def execute():
+ frappe.db.set_value("Report", "Gross Profit", "add_total_row", 0)
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
frappe__erpnext-41655@e65f306
|
frappe/erpnext
|
Python
| 41,655
|
fix: completed DC will not appear in a delivery trip
|
fixes: #41427
Set the condition for the delivery trip so that a completed Delivery Challan (DC) will not appear. Also, add the customer filter in the "Get stops from".
The `docstatus` 1 condition was removed from the delivery trip in https://github.com/frappe/erpnext/pull/38559, but `docstatus` 1 is still set in `delivery_note.js`, which is confusing because it is set in one place but not in the other.
Currently, I haven't made any changes related to `docstatus`, but if you request, I will also update it in `delivery_note.js`.
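For context, a rough server-side sketch of the same restriction (illustrative only; the actual change is the client-side `get_query_filters` in `delivery_trip.js` shown in the diff):

```python
# Only submitted delivery notes that are neither completed nor cancelled
# should be offered as stops for a new Delivery Trip.
stop_filters = {
    "docstatus": 1,
    "status": ["not in", ["Completed", "Cancelled"]],
}
# e.g. frappe.get_all("Delivery Note", filters=stop_filters, fields=["name", "customer"])
```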
**Output:**
https://github.com/frappe/erpnext/assets/141945075/e082f7a1-2813-4479-a6e1-789200bccdeb
|
2024-05-27T10:06:35Z
|
Delivery Trip can still schedule a delivery Note Even Delivery Trip is tagged as Completed
### Information about bug
Using the old versions, the delivery trip filtered the fetched delivery notes to those that were not yet delivered. But on version 14, you can still fetch all delivery notes regardless of whether they were already visited/delivered or not.
You may add all delivery notes through the "get stops from" button or create the trip manually from the delivery note's "create" button.
### Module
stock
### Version
ERPNext: v14.69.0 (HEAD)
Frappe Framework: v14.74.0 (HEAD)
Frappe HR: v14.27.1 (HEAD)
### Installation method
FrappeCloud
### Relevant log output / Stack trace / Full Error Message.
_No response_
|
[
{
"body": "### Information about bug\n\nUsing the Old Versions, the delivery trip can filter fetching of delivery notes that are not yet delivered. But on version 14, you can still fetch all delivery notes regardless if it was already visited/delivered or not yet. \r\n\r\nYou may encode all delivery notes through the \"get stops from\" button or manually create it at the delivery note \"create\" button.\n\n### Module\n\nstock\n\n### Version\n\nERPNext: v14.69.0 (HEAD)\r\nFrappe Framework: v14.74.0 (HEAD)\r\nFrappe HR: v14.27.1 (HEAD)\n\n### Installation method\n\nFrappeCloud\n\n### Relevant log output / Stack trace / Full Error Message.\n\n_No response_",
"number": 41427,
"title": "Delivery Trip can still schedule a delivery Note Even Delivery Trip is tagged as Completed"
}
] |
537e0e32b2d02da278824a3e226f45c983326745
|
{
"head_commit": "e65f30674d4e390fa6b702b2407588a704df8ac2",
"head_commit_message": "fix: completed DC will not appear in a delivery trip",
"patch_to_review": "diff --git a/erpnext/stock/doctype/delivery_note/delivery_note.js b/erpnext/stock/doctype/delivery_note/delivery_note.js\nindex 06881c99c125..8e9fbcd2adb7 100644\n--- a/erpnext/stock/doctype/delivery_note/delivery_note.js\n+++ b/erpnext/stock/doctype/delivery_note/delivery_note.js\n@@ -223,7 +223,7 @@ erpnext.stock.DeliveryNoteController = class DeliveryNoteController extends (\n \t\t\t\t);\n \t\t\t}\n \n-\t\t\tif (doc.docstatus == 1 && frappe.model.can_create(\"Delivery Trip\")) {\n+\t\t\tif (doc.docstatus == 1 && doc.status == \"Completed\" && frappe.model.can_create(\"Delivery Trip\")) {\n \t\t\t\tthis.frm.add_custom_button(\n \t\t\t\t\t__(\"Delivery Trip\"),\n \t\t\t\t\tfunction () {\ndiff --git a/erpnext/stock/doctype/delivery_trip/delivery_trip.js b/erpnext/stock/doctype/delivery_trip/delivery_trip.js\nindex e0c20cf1351b..855bccdded5e 100755\n--- a/erpnext/stock/doctype/delivery_trip/delivery_trip.js\n+++ b/erpnext/stock/doctype/delivery_trip/delivery_trip.js\n@@ -58,9 +58,11 @@ frappe.ui.form.on(\"Delivery Trip\", {\n \t\t\t\t\t\tdate_field: \"posting_date\",\n \t\t\t\t\t\tsetters: {\n \t\t\t\t\t\t\tcompany: frm.doc.company,\n+\t\t\t\t\t\t\tcustomer: null,\n \t\t\t\t\t\t},\n \t\t\t\t\t\tget_query_filters: {\n \t\t\t\t\t\t\tcompany: frm.doc.company,\n+\t\t\t\t\t\t\tstatus: [\"Not In\", [\"Completed\", \"Cancelled\"]],\n \t\t\t\t\t\t},\n \t\t\t\t\t});\n \t\t\t\t},\n"
}
|
[
{
"diff_hunk": "@@ -223,7 +223,7 @@ erpnext.stock.DeliveryNoteController = class DeliveryNoteController extends (\n \t\t\t\t);\n \t\t\t}\n \n-\t\t\tif (doc.docstatus == 1 && frappe.model.can_create(\"Delivery Trip\")) {\n+\t\t\tif (doc.docstatus == 1 && doc.status == \"Completed\" && frappe.model.can_create(\"Delivery Trip\")) {",
"line": null,
"original_line": 226,
"original_start_line": null,
"path": "erpnext/stock/doctype/delivery_note/delivery_note.js",
"start_line": null,
"text": "@user1:\n```suggestion\r\n\t\t\tif (doc.docstatus == 1 && doc.status != \"Completed\" && frappe.model.can_create(\"Delivery Trip\")) {\r\n```"
}
] |
ffc1f4837b305ef698d54dccbec8ee5e2e08ff4a
|
diff --git a/erpnext/stock/doctype/delivery_note/delivery_note.js b/erpnext/stock/doctype/delivery_note/delivery_note.js
index 06881c99c125..eba41d97c6e0 100644
--- a/erpnext/stock/doctype/delivery_note/delivery_note.js
+++ b/erpnext/stock/doctype/delivery_note/delivery_note.js
@@ -223,7 +223,7 @@ erpnext.stock.DeliveryNoteController = class DeliveryNoteController extends (
);
}
- if (doc.docstatus == 1 && frappe.model.can_create("Delivery Trip")) {
+ if (doc.docstatus == 1 && doc.status != "Completed" && frappe.model.can_create("Delivery Trip")) {
this.frm.add_custom_button(
__("Delivery Trip"),
function () {
diff --git a/erpnext/stock/doctype/delivery_trip/delivery_trip.js b/erpnext/stock/doctype/delivery_trip/delivery_trip.js
index e0c20cf1351b..855bccdded5e 100755
--- a/erpnext/stock/doctype/delivery_trip/delivery_trip.js
+++ b/erpnext/stock/doctype/delivery_trip/delivery_trip.js
@@ -58,9 +58,11 @@ frappe.ui.form.on("Delivery Trip", {
date_field: "posting_date",
setters: {
company: frm.doc.company,
+ customer: null,
},
get_query_filters: {
company: frm.doc.company,
+ status: ["Not In", ["Completed", "Cancelled"]],
},
});
},
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
|
frappe__erpnext-40964@39d6df7
|
frappe/erpnext
|
Python
| 40,964
|
fix: price list when invoice created from timesheet
|
**Version 15**
fixes: #40921
**Before:**
https://github.com/frappe/erpnext/assets/141945075/7cf638f5-5874-44ae-b5d4-c0753d0badf6
<br>
**After:**
https://github.com/frappe/erpnext/assets/141945075/a1cfdf4e-d2ae-460c-ae43-b736ad173a67
|
2024-04-11T06:40:18Z
|
Wrong pricelist when invoice created from timesheet
### Information about bug
When an invoice is created from a timesheet, the price list for the invoice is set to the default price list instead of the price list defined on the customer.
Creating an invoice from scratch uses the correct price list for the customer.
### Module
selling, projects
### Version
ERPNext: v15.19.2 (version-15)
Frappe Framework: v15.20.0 (version-15)
### Installation method
easy-install
### Relevant log output / Stack trace / Full Error Message.
_No response_
|
[
{
"body": "### Information about bug\n\nWhen an invoice is created from a timesheet, the pricelist for the invoice is set to the default price list instead of the price list defined in the customer.\r\n\r\nCreating an invoice from scratch uses the correct price list for the customer.\n\n### Module\n\nselling, projects\n\n### Version\n\nERPNext: v15.19.2 (version-15)\r\nFrappe Framework: v15.20.0 (version-15)\n\n### Installation method\n\neasy-install\n\n### Relevant log output / Stack trace / Full Error Message.\n\n_No response_",
"number": 40921,
"title": "Wrong pricelist when invoice created from timesheet"
}
] |
fc835ed6b12f09b168c737ccbb467ec98ac3c5a1
|
{
"head_commit": "39d6df7c7d225cb946a640e980fcc5bafb1c6a2d",
"head_commit_message": "fix: price list when invoice created from timesheet",
"patch_to_review": "diff --git a/erpnext/projects/doctype/timesheet/timesheet.py b/erpnext/projects/doctype/timesheet/timesheet.py\nindex 90f436831f91..a625f9fee73a 100644\n--- a/erpnext/projects/doctype/timesheet/timesheet.py\n+++ b/erpnext/projects/doctype/timesheet/timesheet.py\n@@ -389,6 +389,9 @@ def make_sales_invoice(source_name, item_code=None, customer=None, currency=None\n \ttarget.project = timesheet.parent_project\n \tif customer:\n \t\ttarget.customer = customer\n+\t\tcustomer_doc = frappe.get_doc(\"Customer\", customer)\n+\t\tif customer_doc and customer_doc.default_price_list:\n+\t\t\ttarget.selling_price_list = customer_doc.default_price_list\n \n \tif currency:\n \t\ttarget.currency = currency\n"
}
|
[
{
"diff_hunk": "@@ -389,6 +389,9 @@ def make_sales_invoice(source_name, item_code=None, customer=None, currency=None\n \ttarget.project = timesheet.parent_project\n \tif customer:\n \t\ttarget.customer = customer\n+\t\tcustomer_doc = frappe.get_doc(\"Customer\", customer)",
"line": null,
"original_line": 392,
"original_start_line": null,
"path": "erpnext/projects/doctype/timesheet/timesheet.py",
"start_line": null,
"text": "@user1:\nUse get_value instead of get_doc, get_doc is unnecessary"
}
] |
882227a460ae33e0829c0876f5ad8d33f4ded639
|
diff --git a/erpnext/projects/doctype/timesheet/timesheet.py b/erpnext/projects/doctype/timesheet/timesheet.py
index 90f436831f91..64584591cc03 100644
--- a/erpnext/projects/doctype/timesheet/timesheet.py
+++ b/erpnext/projects/doctype/timesheet/timesheet.py
@@ -389,6 +389,9 @@ def make_sales_invoice(source_name, item_code=None, customer=None, currency=None
target.project = timesheet.parent_project
if customer:
target.customer = customer
+ default_price_list = frappe.get_value("Customer", customer, "default_price_list")
+ if default_price_list:
+ target.selling_price_list = default_price_list
if currency:
target.currency = currency
|
{
"difficulty": "medium",
"estimated_review_effort": 2,
"problem_domain": "Bug Fixes"
}
|
|
conan-io__conan-4380@e9cf667
|
conan-io/conan
|
Python
| 4,380
|
Remove double application of env vars from profile
|
Changelog: BugFix: Prepend environment variables are applied twice in conanfile
Docs: omit
- [x] Refer to the issue that supports this Pull Request: closes #4385
- [x] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.
- [x] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [x] I've followed the PEP8 style guides for Python code.
- [ ] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.
<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>
|
2019-01-24T15:01:20Z
|
Virtualenv generator prepends duplicated values in env vars from profile
With an empty *conanfile.txt* and a profile with:
```
[env]
PREPEND_VAR = ['kk','pp']
```
Commands:
```
$ conan install . -g virtualenv
```
In activate.sh there is:
```
PREPEND_VAR="kk":"pp":"kk":"pp"
```
and in *activate.bat* you also have:
```
SET PREPEND_VAR=kk;pp;kk;pp
```
So at least the variables that should be prepended are applied twice.
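A minimal sketch of the expected behaviour (not Conan's generator code; the helper name is made up): each list value should end up in the activate script exactly once, in front of the pre-existing variable:

```python
import os


def prepend_assignment(var_name, values):
    """Illustrative helper: compose the activate-script line for a profile
    variable declared as a list, prepending every value exactly once in
    front of whatever the variable already contains."""
    existing = "$%s" % var_name if os.name != "nt" else "%%%s%%" % var_name
    return "%s=%s" % (var_name, os.pathsep.join(list(values) + [existing]))


print(prepend_assignment("PREPEND_VAR", ["kk", "pp"]))
# Expected on Linux/macOS: PREPEND_VAR=kk:pp:$PREPEND_VAR   (each value once)
# Expected on Windows:     PREPEND_VAR=kk;pp;%PREPEND_VAR%
```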
|
[
{
"body": "With a *conanfile.txt* empty and a profile with:\r\n\r\n```\r\n[env]\r\nPREPEND_VAR = ['kk','pp']\r\n```\r\n\r\nCommands:\r\n\r\n```\r\n$ conan install . -g virtualenv\r\n```\r\n\r\nIn activate.sh there is:\r\n```\r\nPREPEND_VAR=\"kk\":\"pp\":\"kk\":\"pp\"\r\n```\r\nand in *activate.bat* you alaso have:\r\n```\r\nSET PREPEND_VAR=kk;pp;kk;pp\r\n```\r\n\r\nSo at least variables that should be pretended are applied twice.\r\n\r\n",
"number": 4385,
"title": "Virtualenv generator prepends duplicated values in env vars from profile"
}
] |
256c14775b6281dec546aec1aa4c5973fe0d3ac8
|
{
"head_commit": "e9cf6670b5180d4704cdf073e42bfcdb6b93c235",
"head_commit_message": "make test conditional",
"patch_to_review": "diff --git a/conans/client/loader.py b/conans/client/loader.py\nindex 525ef71b8ca..9bd73cab09c 100644\n--- a/conans/client/loader.py\n+++ b/conans/client/loader.py\n@@ -174,7 +174,6 @@ def _parse_conan_txt(self, contents, path, display_name, processed_profile):\n \n # imports method\n conanfile.imports = parser.imports_method(conanfile)\n- conanfile._conan_env_values.update(processed_profile._env_values)\n return conanfile\n \n def load_virtual(self, references, processed_profile, scope_options=True,\ndiff --git a/conans/test/functional/configuration/profile_test.py b/conans/test/functional/configuration/profile_test.py\nindex e1ce9776577..133c34ff104 100644\n--- a/conans/test/functional/configuration/profile_test.py\n+++ b/conans/test/functional/configuration/profile_test.py\n@@ -1,4 +1,5 @@\n import os\n+import platform\n import unittest\n from collections import OrderedDict\n from textwrap import dedent\n@@ -27,7 +28,6 @@ def build(self):\n self.run(\"SET\")\n else:\n self.run(\"env\")\n-\n \"\"\"\n \n \n@@ -44,23 +44,27 @@ class ProfileTest(unittest.TestCase):\n def setUp(self):\n self.client = TestClient()\n \n+ def profile_conanfile_txt_test(self):\n+ self.client.save({\"conanfile.txt\": \"\"})\n+ create_profile(self.client.cache.profiles_path, \"envs\", settings={},\n+ env=[(\"A_VAR\", \"A_VALUE\"), (\"PREPEND_VAR\", [\"new_path\", \"other_path\"])],\n+ package_env={\"Hello0\": [(\"OTHER_VAR\", \"2\")]})\n+ self.client.run(\"install . -pr envs -g virtualenv\")\n+\n def test_profile_relative_cwd(self):\n- client = TestClient()\n- client.save({\"conanfile.txt\": \"\",\n- \"sub/sub/profile\": \"\"})\n- client.current_folder = os.path.join(client.current_folder, \"sub\")\n- client.run(\"install .. -pr=sub/profile2\", assert_error=True)\n- self.assertIn(\"ERROR: Profile not found: sub/profile2\", client.out)\n- client.run(\"install .. -pr=sub/profile\")\n- self.assertIn(\"conanfile.txt: Installing package\", client.out)\n+ self.client.save({\"conanfile.txt\": \"\", \"sub/sub/profile\": \"\"})\n+ self.client.current_folder = os.path.join(self.client.current_folder, \"sub\")\n+ self.client.run(\"install .. -pr=sub/profile2\", assert_error=True)\n+ self.assertIn(\"ERROR: Profile not found: sub/profile2\", self.client.out)\n+ self.client.run(\"install .. -pr=sub/profile\")\n+ self.assertIn(\"conanfile.txt: Installing package\", self.client.out)\n \n def base_profile_generated_test(self):\n \"\"\"we are testing that the default profile is created (when not existing, fresh install)\n even when you run a create with a profile\"\"\"\n- client = TestClient()\n- client.save({CONANFILE: conanfile_scope_env,\n- \"myprofile\": \"include(default)\\n[settings]\\nbuild_type=Debug\"})\n- client.run(\"create . conan/testing --profile myprofile\")\n+ self.client.save({CONANFILE: conanfile_scope_env,\n+ \"myprofile\": \"include(default)\\n[settings]\\nbuild_type=Debug\"})\n+ self.client.run(\"create . conan/testing --profile myprofile\")\n \n def bad_syntax_test(self):\n self.client.save({CONANFILE: conanfile_scope_env})\n@@ -148,11 +152,19 @@ def install_profile_env_test(self):\n files[\"conanfile.py\"] = conanfile_scope_env\n \n create_profile(self.client.cache.profiles_path, \"envs\", settings={},\n- env=[(\"A_VAR\", \"A_VALUE\")], package_env={\"Hello0\": [(\"OTHER_VAR\", \"2\")]})\n+ env=[(\"A_VAR\", \"A_VALUE\"),\n+ (\"PREPEND_VAR\", [\"new_path\", \"other_path\"])],\n+ package_env={\"Hello0\": [(\"OTHER_VAR\", \"2\")]})\n \n self.client.save(files)\n self.client.run(\"export . 
lasote/stable\")\n self.client.run(\"install Hello0/0.1@lasote/stable --build missing -pr envs\")\n+ if platform.system() == \"Windows\":\n+ self._assert_env_variable_printed(\"PREPEND_VAR\", \"new_path;other_path\")\n+ else:\n+ self._assert_env_variable_printed(\"PREPEND_VAR\", \"new_path:other_path\")\n+ self.assertNotIn(\"PREPEND_VAR=new_path;other_path;new_path;other_path\", self.client.out)\n+ self.assertNotIn(\"PREPEND_VAR=new_path:other_path:new_path:other_path\", self.client.out)\n self._assert_env_variable_printed(\"A_VAR\", \"A_VALUE\")\n self._assert_env_variable_printed(\"OTHER_VAR\", \"2\")\n \ndiff --git a/conans/test/unittests/client/generators/virtualenv_test.py b/conans/test/unittests/client/generators/virtualenv_test.py\nnew file mode 100644\nindex 00000000000..777a9c658d8\n--- /dev/null\n+++ b/conans/test/unittests/client/generators/virtualenv_test.py\n@@ -0,0 +1,19 @@\n+import platform\n+import unittest\n+\n+from conans import ConanFile, Settings\n+from conans.client.generators.virtualenv import VirtualEnvGenerator\n+from conans.model.env_info import EnvValues\n+from conans.test.utils.tools import TestBufferConanOutput\n+\n+\[email protected](platform.system() == \"Windows\", \"Test both .sh and .bat files\")\n+class VirtualenvGeneratorTest(unittest.TestCase):\n+\n+ def prepend_values_test(self):\n+ conanfile = ConanFile(TestBufferConanOutput(), None)\n+ conanfile.initialize(Settings({}), EnvValues.loads(\"PATH=[1,2,three]\"))\n+ gen = VirtualEnvGenerator(conanfile)\n+ content = gen.content\n+ self.assertIn(\"PATH=\\\"1\\\":\\\"2\\\":\\\"three\\\":$PATH\", content[\"activate.sh\"])\n+ self.assertIn(\"PATH=1;2;three;%PATH%\", content[\"activate.bat\"])\ndiff --git a/conans/test/unittests/client/loader_test.py b/conans/test/unittests/client/loader_test.py\nnew file mode 100644\nindex 00000000000..5ae0a4d96ee\n--- /dev/null\n+++ b/conans/test/unittests/client/loader_test.py\n@@ -0,0 +1,69 @@\n+import os\n+import textwrap\n+import unittest\n+\n+from conans import Settings\n+from conans.client.graph.python_requires import ConanPythonRequire\n+from conans.client.loader import ConanFileLoader\n+from conans.model.env_info import EnvValues\n+from conans.model.profile import Profile\n+from conans.model.ref import ConanFileReference\n+from conans.test.utils.conanfile import MockSettings\n+from conans.test.utils.runner import TestRunner\n+from conans.test.utils.test_files import temp_folder\n+from conans.test.utils.tools import TestBufferConanOutput\n+from conans.util.files import save\n+\n+\n+class LoadConanfileTxtTest(unittest.TestCase):\n+\n+ def setUp(self):\n+ settings = Settings()\n+ self.profile = Profile()\n+ self.profile._settings = settings\n+ self.profile._user_options = None\n+ self.profile._env_values = None\n+ self.conanfile_txt_path = os.path.join(temp_folder(), \"conanfile.txt\")\n+ output = TestBufferConanOutput()\n+ self.loader = ConanFileLoader(TestRunner(output), output, None)\n+\n+ def env_test(self):\n+ env_values = EnvValues()\n+ env_values.add(\"PREPEND_PATH\", [\"hello\", \"bye\"])\n+ env_values.add(\"VAR\", [\"var_value\"])\n+ self.profile._env_values = env_values\n+ save(self.conanfile_txt_path, \"\")\n+ conanfile = self.loader.load_conanfile_txt(self.conanfile_txt_path, self.profile)\n+ self.assertEquals(conanfile.env, {\"PREPEND_PATH\": [\"hello\", \"bye\"], \"VAR\": [\"var_value\"]})\n+\n+\n+class LoadConanfileTest(unittest.TestCase):\n+\n+ def setUp(self):\n+ settings = Settings()\n+ self.profile = Profile()\n+ self.profile._settings = settings\n+ 
self.profile._user_options = None\n+ self.profile._env_values = None\n+ self.profile._dev_reference = None\n+ self.profile._package_settings = None\n+ self.conanfile_path = os.path.join(temp_folder(), \"conanfile.py\")\n+ output = TestBufferConanOutput()\n+ self.loader = ConanFileLoader(TestRunner(output), output, ConanPythonRequire(None, None))\n+\n+ def env_test(self):\n+ env_values = EnvValues()\n+ env_values.add(\"PREPEND_PATH\", [\"hello\", \"bye\"])\n+ env_values.add(\"VAR\", [\"var_value\"])\n+ self.profile._env_values = env_values\n+ save(self.conanfile_path,\n+ textwrap.dedent(\"\"\"\n+ from conans import ConanFile\n+ \n+ class TestConan(ConanFile):\n+ name = \"hello\"\n+ version = \"1.0\"\n+ \"\"\"))\n+ ref = ConanFileReference(\"hello\", \"1.0\", \"user\", \"channel\")\n+ conanfile = self.loader.load_conanfile(self.conanfile_path, self.profile, ref)\n+ self.assertEquals(conanfile.env, {\"PREPEND_PATH\": [\"hello\", \"bye\"], \"VAR\": [\"var_value\"]})\n"
}
|
[
{
"diff_hunk": "@@ -0,0 +1,19 @@\n+import platform\n+import unittest\n+\n+from conans import ConanFile, Settings\n+from conans.client.generators.virtualenv import VirtualEnvGenerator\n+from conans.model.env_info import EnvValues\n+from conans.test.utils.tools import TestBufferConanOutput\n+\n+\[email protected](platform.system() == \"Windows\", \"Test both .sh and .bat files\")",
"line": null,
"original_line": 10,
"original_start_line": null,
"path": "conans/test/unittests/client/generators/virtualenv_test.py",
"start_line": null,
"text": "@user1:\nwhy not mock `platform.system()`?\n\n@author:\nI have moved this check inside the test so it is not skipped when out of windows"
},
{
"diff_hunk": "@@ -148,11 +152,19 @@ def install_profile_env_test(self):\n files[\"conanfile.py\"] = conanfile_scope_env\n \n create_profile(self.client.cache.profiles_path, \"envs\", settings={},\n- env=[(\"A_VAR\", \"A_VALUE\")], package_env={\"Hello0\": [(\"OTHER_VAR\", \"2\")]})\n+ env=[(\"A_VAR\", \"A_VALUE\"),\n+ (\"PREPEND_VAR\", [\"new_path\", \"other_path\"])],\n+ package_env={\"Hello0\": [(\"OTHER_VAR\", \"2\")]})\n \n self.client.save(files)\n self.client.run(\"export . lasote/stable\")\n self.client.run(\"install Hello0/0.1@lasote/stable --build missing -pr envs\")\n+ if platform.system() == \"Windows\":\n+ self._assert_env_variable_printed(\"PREPEND_VAR\", \"new_path;other_path\")\n+ else:\n+ self._assert_env_variable_printed(\"PREPEND_VAR\", \"new_path:other_path\")\n+ self.assertNotIn(\"PREPEND_VAR=new_path;other_path;new_path;other_path\", self.client.out)",
"line": null,
"original_line": 166,
"original_start_line": null,
"path": "conans/test/functional/configuration/profile_test.py",
"start_line": null,
"text": "@user1:\nDo not check this way. For example, when using Linux, it will never happen, because ``os.pathsep``.\r\nThe correct way to robustly check this is something like:\r\n\r\n``self.assertEqual(1, str(self.client.out).count(\"new_path\")``"
},
{
"diff_hunk": "@@ -148,11 +152,19 @@ def install_profile_env_test(self):\n files[\"conanfile.py\"] = conanfile_scope_env\n \n create_profile(self.client.cache.profiles_path, \"envs\", settings={},\n- env=[(\"A_VAR\", \"A_VALUE\")], package_env={\"Hello0\": [(\"OTHER_VAR\", \"2\")]})\n+ env=[(\"A_VAR\", \"A_VALUE\"),\n+ (\"PREPEND_VAR\", [\"new_path\", \"other_path\"])],\n+ package_env={\"Hello0\": [(\"OTHER_VAR\", \"2\")]})\n \n self.client.save(files)\n self.client.run(\"export . lasote/stable\")\n self.client.run(\"install Hello0/0.1@lasote/stable --build missing -pr envs\")\n+ if platform.system() == \"Windows\":\n+ self._assert_env_variable_printed(\"PREPEND_VAR\", \"new_path;other_path\")",
"line": null,
"original_line": 163,
"original_start_line": null,
"path": "conans/test/functional/configuration/profile_test.py",
"start_line": null,
"text": "@user1:\nYou can use ``os.pathsep`` to check this in 1 line"
}
] |
2699c7156e80d76d95894b6372f2e3edc07c80b8
|
diff --git a/conans/client/loader.py b/conans/client/loader.py
index 525ef71b8ca..9bd73cab09c 100644
--- a/conans/client/loader.py
+++ b/conans/client/loader.py
@@ -174,7 +174,6 @@ def _parse_conan_txt(self, contents, path, display_name, processed_profile):
# imports method
conanfile.imports = parser.imports_method(conanfile)
- conanfile._conan_env_values.update(processed_profile._env_values)
return conanfile
def load_virtual(self, references, processed_profile, scope_options=True,
diff --git a/conans/test/functional/configuration/profile_test.py b/conans/test/functional/configuration/profile_test.py
index e1ce9776577..c7c8d7c261e 100644
--- a/conans/test/functional/configuration/profile_test.py
+++ b/conans/test/functional/configuration/profile_test.py
@@ -1,4 +1,5 @@
import os
+import platform
import unittest
from collections import OrderedDict
from textwrap import dedent
@@ -27,7 +28,6 @@ def build(self):
self.run("SET")
else:
self.run("env")
-
"""
@@ -44,23 +44,37 @@ class ProfileTest(unittest.TestCase):
def setUp(self):
self.client = TestClient()
+ def profile_conanfile_txt_test(self):
+ """
+ Test prepended env variables are applied correctrly from a profile
+ """
+ self.client.save({"conanfile.txt": ""})
+ create_profile(self.client.cache.profiles_path, "envs", settings={},
+ env=[("A_VAR", "A_VALUE"), ("PREPEND_VAR", ["new_path", "other_path"])],
+ package_env={"Hello0": [("OTHER_VAR", "2")]})
+ self.client.run("install . -pr envs -g virtualenv")
+ content = load(os.path.join(self.client.current_folder, "activate.sh"))
+ self.assertIn(":".join(["PREPEND_VAR=\"new_path\"", "\"other_path\"", "$PREPEND_VAR"]),
+ content)
+ if platform.system() == "Windows":
+ content = load(os.path.join(self.client.current_folder, "activate.bat"))
+ self.assertIn(";".join(["PREPEND_VAR=new_path", "other_path", "%PREPEND_VAR%"]),
+ content)
+
def test_profile_relative_cwd(self):
- client = TestClient()
- client.save({"conanfile.txt": "",
- "sub/sub/profile": ""})
- client.current_folder = os.path.join(client.current_folder, "sub")
- client.run("install .. -pr=sub/profile2", assert_error=True)
- self.assertIn("ERROR: Profile not found: sub/profile2", client.out)
- client.run("install .. -pr=sub/profile")
- self.assertIn("conanfile.txt: Installing package", client.out)
+ self.client.save({"conanfile.txt": "", "sub/sub/profile": ""})
+ self.client.current_folder = os.path.join(self.client.current_folder, "sub")
+ self.client.run("install .. -pr=sub/profile2", assert_error=True)
+ self.assertIn("ERROR: Profile not found: sub/profile2", self.client.out)
+ self.client.run("install .. -pr=sub/profile")
+ self.assertIn("conanfile.txt: Installing package", self.client.out)
def base_profile_generated_test(self):
"""we are testing that the default profile is created (when not existing, fresh install)
even when you run a create with a profile"""
- client = TestClient()
- client.save({CONANFILE: conanfile_scope_env,
- "myprofile": "include(default)\n[settings]\nbuild_type=Debug"})
- client.run("create . conan/testing --profile myprofile")
+ self.client.save({CONANFILE: conanfile_scope_env,
+ "myprofile": "include(default)\n[settings]\nbuild_type=Debug"})
+ self.client.run("create . conan/testing --profile myprofile")
def bad_syntax_test(self):
self.client.save({CONANFILE: conanfile_scope_env})
@@ -148,11 +162,15 @@ def install_profile_env_test(self):
files["conanfile.py"] = conanfile_scope_env
create_profile(self.client.cache.profiles_path, "envs", settings={},
- env=[("A_VAR", "A_VALUE")], package_env={"Hello0": [("OTHER_VAR", "2")]})
+ env=[("A_VAR", "A_VALUE"),
+ ("PREPEND_VAR", ["new_path", "other_path"])],
+ package_env={"Hello0": [("OTHER_VAR", "2")]})
self.client.save(files)
self.client.run("export . lasote/stable")
self.client.run("install Hello0/0.1@lasote/stable --build missing -pr envs")
+ self._assert_env_variable_printed("PREPEND_VAR", os.pathsep.join(["new_path", "other_path"]))
+ self.assertEqual(1, str(self.client.out).count("PREPEND_VAR=new_path")) # prepended once
self._assert_env_variable_printed("A_VAR", "A_VALUE")
self._assert_env_variable_printed("OTHER_VAR", "2")
diff --git a/conans/test/unittests/client/generators/virtualenv_test.py b/conans/test/unittests/client/generators/virtualenv_test.py
new file mode 100644
index 00000000000..458160b04ff
--- /dev/null
+++ b/conans/test/unittests/client/generators/virtualenv_test.py
@@ -0,0 +1,22 @@
+import platform
+import unittest
+
+from conans import ConanFile, Settings
+from conans.client.generators.virtualenv import VirtualEnvGenerator
+from conans.model.env_info import EnvValues
+from conans.test.utils.tools import TestBufferConanOutput
+
+
+class VirtualenvGeneratorTest(unittest.TestCase):
+
+ def prepend_values_test(self):
+ """
+ Check list values are only prepended once
+ """
+ conanfile = ConanFile(TestBufferConanOutput(), None)
+ conanfile.initialize(Settings({}), EnvValues.loads("PATH=[1,2,three]"))
+ gen = VirtualEnvGenerator(conanfile)
+ content = gen.content
+ self.assertIn("PATH=\"1\":\"2\":\"three\":$PATH", content["activate.sh"])
+ if platform.system() == "Windows":
+ self.assertIn("PATH=1;2;three;%PATH%", content["activate.bat"])
diff --git a/conans/test/unittests/client/loader_test.py b/conans/test/unittests/client/loader_test.py
new file mode 100644
index 00000000000..5ae0a4d96ee
--- /dev/null
+++ b/conans/test/unittests/client/loader_test.py
@@ -0,0 +1,69 @@
+import os
+import textwrap
+import unittest
+
+from conans import Settings
+from conans.client.graph.python_requires import ConanPythonRequire
+from conans.client.loader import ConanFileLoader
+from conans.model.env_info import EnvValues
+from conans.model.profile import Profile
+from conans.model.ref import ConanFileReference
+from conans.test.utils.conanfile import MockSettings
+from conans.test.utils.runner import TestRunner
+from conans.test.utils.test_files import temp_folder
+from conans.test.utils.tools import TestBufferConanOutput
+from conans.util.files import save
+
+
+class LoadConanfileTxtTest(unittest.TestCase):
+
+ def setUp(self):
+ settings = Settings()
+ self.profile = Profile()
+ self.profile._settings = settings
+ self.profile._user_options = None
+ self.profile._env_values = None
+ self.conanfile_txt_path = os.path.join(temp_folder(), "conanfile.txt")
+ output = TestBufferConanOutput()
+ self.loader = ConanFileLoader(TestRunner(output), output, None)
+
+ def env_test(self):
+ env_values = EnvValues()
+ env_values.add("PREPEND_PATH", ["hello", "bye"])
+ env_values.add("VAR", ["var_value"])
+ self.profile._env_values = env_values
+ save(self.conanfile_txt_path, "")
+ conanfile = self.loader.load_conanfile_txt(self.conanfile_txt_path, self.profile)
+ self.assertEquals(conanfile.env, {"PREPEND_PATH": ["hello", "bye"], "VAR": ["var_value"]})
+
+
+class LoadConanfileTest(unittest.TestCase):
+
+ def setUp(self):
+ settings = Settings()
+ self.profile = Profile()
+ self.profile._settings = settings
+ self.profile._user_options = None
+ self.profile._env_values = None
+ self.profile._dev_reference = None
+ self.profile._package_settings = None
+ self.conanfile_path = os.path.join(temp_folder(), "conanfile.py")
+ output = TestBufferConanOutput()
+ self.loader = ConanFileLoader(TestRunner(output), output, ConanPythonRequire(None, None))
+
+ def env_test(self):
+ env_values = EnvValues()
+ env_values.add("PREPEND_PATH", ["hello", "bye"])
+ env_values.add("VAR", ["var_value"])
+ self.profile._env_values = env_values
+ save(self.conanfile_path,
+ textwrap.dedent("""
+ from conans import ConanFile
+
+ class TestConan(ConanFile):
+ name = "hello"
+ version = "1.0"
+ """))
+ ref = ConanFileReference("hello", "1.0", "user", "channel")
+ conanfile = self.loader.load_conanfile(self.conanfile_path, self.profile, ref)
+ self.assertEquals(conanfile.env, {"PREPEND_PATH": ["hello", "bye"], "VAR": ["var_value"]})
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
|
conan-io__conan-4354@3612dd7
|
conan-io/conan
|
Python
| 4,354
|
#3812 Remove system requirements from cache
|
Related Issue: #3812
Changelog: Feature: Remove the system requirements folders (not the installed binaries) from the Conan cache
Docs: https://github.com/conan-io/docs/pull/1038
@PYVERS: Macos@py27, Windows@py36, Linux@py27, py34
closes #3812
- [X] Refer to the issue that supports this Pull Request.
- [X] If the issue has missing info, explain the purpose/use case/pain/need that covers this Pull Request.
- [X] I've read the [Contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
- [X] I've followed the PEP8 style guides for Python code.
- [X] I've opened another PR in the Conan docs repo to the ``develop`` branch, documenting this one.
<sup>**Note:** By default this PR will skip the slower tests and will use a limited set of python versions. Check [here](https://github.com/conan-io/conan/blob/develop/.github/PR_INCREASE_TESTING.md) how to increase the testing level by writing some tags in the current PR body text.</sup>
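A minimal sketch of how the new flag is exercised, mirroring the integration test added in this PR; it assumes `base_conanfile` is the recipe template (with a `system_requirements()` method) defined in that test module:

```python
from conans.test.utils.tools import TestClient

client = TestClient()
# base_conanfile is assumed to be the recipe template from the test module in this PR
client.save({"conanfile.py": base_conanfile.replace("%GLOBAL%", "")})
client.run("create . user/channel")                       # first build populates the system_reqs folder
client.run("remove --system-reqs Test/0.1@user/channel")  # removes only the cached system_reqs marker
client.run("create . user/channel")                       # system_requirements() runs again on the next build
```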
|
2019-01-21T21:55:56Z
|
system_requirements files not shared between host/docker
I have an application that depends on a library, libA. libA's recipe has a system_requirements() method which installs a system library. libA's artifact is stored remotely. When I built the application, it failed to find the system library installed by libA's recipe.
Is system_requirements() transitive? If so, how do I specify that in the conanfile.txt or conanfile.py of the application?
By the way, I don't want to build libA from source.
|
In fact, this is caused by the same reason as in https://github.com/conan-io/conan/issues/2262
Every time libA is installed, the system requirements method will be called if it hasn't been installed before, so I don't understand exactly what the issue is. What do you mean by `When I built the application`? The app is built on a different computer, right? Is it not called when doing the `conan install`?
Actually I run the build inside a Docker container and the local cache is stored on a persistent volume. So re-running the build of the application will not install the system package anymore.
It is the same problem as issue #2262. When I created this issue, I misunderstood how system_requirements() works.
I think it would be better to store system_reqs.txt alongside the system itself, so the local cache can be freely shared among different nodes.
If the system_reqs.txt files are stored in the user's home directory instead of CONAN_USER_HOME, then the problem will be perfectly resolved.
I wouldn't recommend sharing the cache with the containers, to be honest. It can be problematic with the permissions of the storage and, for example, with the system requirements. I would say it is more solid to communicate with a conan server (or better, Artifactory Community Edition)
> I wouldn't recommend sharing the cache with the containers, to be honest. It can be problematic with the permissions of the storage and, for example, with the system requirements. I would say it is more solid to communicate with a conan server (or better, Artifactory Community Edition)
I don't think permission is a big issue, because Docker can use `-u` to set the user/group.
The problem with a conan server (or Artifactory) is that it can be slow to access. If the local cache could be shared, that would be a big plus.
By the way, all these `system_reqs.txt` files are important for deploying the applications to the target system. It would be nice to export all the `system_reqs.txt` files in the `deploy()` method.
Could a `conan remove --system_reqs` help? #2262 @bilke @yangcha
> Could a `conan remove --system_reqs` help? #2262 @bilke @yangcha
That will be great.
|
[
{
"body": "I have an application depends on a library libA. libA's recipe has a system_requirements() method which installs a system library. libA's artifact is stored remotely. When I built the applications, it failed to find the system library installed by the libA's recipe.\r\n\r\nIs system_requirements() transitive? If so, how to specify in the conanfile.txt or conanfile.py of application?\r\nBy the way, I don't want to build libA from source.\r\n\r\n",
"number": 3812,
"title": "system_requirements files not shared between host/docker"
}
] |
b28d665e63b423d4e1feeef307500cae3657c0c2
|
{
"head_commit": "3612dd76050a8d545ed170d9890b298a42bcdae5",
"head_commit_message": "#3812 --system-reqs can raise error\n\n- Conan remove --system-reqs forwards the exception message\n when an error occurs.\n- Re-order remove command arguments\n- Add test to check system-reqs error message\n\nSigned-off-by: Uilian Ries <[email protected]>",
"patch_to_review": "diff --git a/conans/client/cache.py b/conans/client/cache.py\nindex b09e462bd84..4616dc692d1 100644\n--- a/conans/client/cache.py\n+++ b/conans/client/cache.py\n@@ -13,7 +13,7 @@\n from conans.model.profile import Profile\n from conans.model.ref import ConanFileReference\n from conans.model.settings import Settings\n-from conans.paths import PUT_HEADERS\n+from conans.paths import PUT_HEADERS, SYSTEM_REQS_FOLDER\n from conans.paths.simple_paths import SimplePaths\n from conans.unicode import get_cwd\n from conans.util.files import list_folder_subdirs, load, normalize, save\n@@ -257,6 +257,11 @@ def delete_empty_dirs(self, deleted_refs):\n break # not empty\n ref_path = os.path.dirname(ref_path)\n \n+ def remove_package_system_reqs(self, reference):\n+ conan_folder = self.conan(reference)\n+ system_reqs_folder = os.path.join(conan_folder, SYSTEM_REQS_FOLDER)\n+ shutil.rmtree(system_reqs_folder)\n+\n def remove_locks(self):\n folders = list_folder_subdirs(self._store_folder, 4)\n for folder in folders:\ndiff --git a/conans/client/command.py b/conans/client/command.py\nindex 0f6f05184c2..2f06c458405 100644\n--- a/conans/client/command.py\n+++ b/conans/client/command.py\n@@ -819,6 +819,8 @@ def remove(self, *args):\n \"specifying the package ID\"))\n parser.add_argument('-f', '--force', default=False, action='store_true',\n help='Remove without requesting a confirmation')\n+ parser.add_argument(\"-l\", \"--locks\", default=False, action=\"store_true\",\n+ help=\"Remove locks\")\n parser.add_argument(\"-o\", \"--outdated\", default=False, action=\"store_true\",\n help=\"Remove only outdated from recipe packages. \" \\\n \"This flag can only be used with a reference\")\n@@ -829,8 +831,8 @@ def remove(self, *args):\n help='Will remove from the specified remote')\n parser.add_argument('-s', '--src', default=False, action=\"store_true\",\n help='Remove source folders')\n- parser.add_argument(\"-l\", \"--locks\", default=False, action=\"store_true\",\n- help=\"Remove locks\")\n+ parser.add_argument('-t', '--system-reqs', default=False, action=\"store_true\",\n+ help='Remove system_reqs folders')\n args = parser.parse_args(*args)\n \n self._warn_python2()\n@@ -854,6 +856,16 @@ def remove(self, *args):\n self._cache.remove_locks()\n self._user_io.out.info(\"Cache locks removed\")\n return\n+ elif args.system_reqs:\n+ if not ref:\n+ raise ConanException(\"Please specify a valid package reference to be cleaned\")\n+ try:\n+ self._cache.remove_package_system_reqs(ref)\n+ except Exception as error:\n+ raise ConanException(\"Could not remove system_reqs: %s\" % error)\n+\n+ self._user_io.out.info(\"Cache system_reqs from %s has been removed\" % str(ref))\n+ return\n else:\n if not args.pattern_or_reference:\n raise ConanException('Please specify a pattern to be removed (\"*\" for all)')\ndiff --git a/conans/conan.py b/conans/conan.py\nold mode 100644\nnew mode 100755\ndiff --git a/conans/test/integration/system_reqs_test.py b/conans/test/integration/system_reqs_test.py\nindex 28b1fc74743..52cbe10f345 100644\n--- a/conans/test/integration/system_reqs_test.py\n+++ b/conans/test/integration/system_reqs_test.py\n@@ -1,6 +1,8 @@\n import os\n import unittest\n \n+from conans.paths import SYSTEM_REQS_FOLDER\n+\n from conans.model.ref import ConanFileReference, PackageReference\n from conans.test.utils.tools import NO_SETTINGS_PACKAGE_ID, TestClient\n from conans.util.files import load\n@@ -22,7 +24,6 @@ def system_requirements(self):\n \n \n class SystemReqsTest(unittest.TestCase):\n-\n def 
force_system_reqs_rerun_test(self):\n client = TestClient()\n files = {'conanfile.py': base_conanfile.replace(\"%GLOBAL%\", \"\")}\n@@ -92,8 +93,10 @@ def per_package_test(self):\n \n def global_test(self):\n client = TestClient()\n- files = {'conanfile.py': base_conanfile.replace(\"%GLOBAL%\",\n- \"self.global_system_requirements=True\")}\n+ files = {\n+ 'conanfile.py': base_conanfile.replace(\"%GLOBAL%\",\n+ \"self.global_system_requirements=True\")\n+ }\n client.save(files)\n client.run(\"export . user/testing\")\n client.run(\"install Test/0.1@user/testing --build missing\")\n@@ -128,8 +131,10 @@ def global_test(self):\n \n def wrong_output_test(self):\n client = TestClient()\n- files = {'conanfile.py':\n- base_conanfile.replace(\"%GLOBAL%\", \"\").replace('\"Installed my stuff\"', 'None')}\n+ files = {\n+ 'conanfile.py':\n+ base_conanfile.replace(\"%GLOBAL%\", \"\").replace('\"Installed my stuff\"', 'None')\n+ }\n client.save(files)\n client.run(\"export . user/testing\")\n client.run(\"install Test/0.1@user/testing --build missing\")\n@@ -139,3 +144,45 @@ def wrong_output_test(self):\n pref = PackageReference(ref, \"f0ba3ca2c218df4a877080ba99b65834b9413798\")\n load_file = load(client.cache.system_reqs_package(pref))\n self.assertEqual('', load_file)\n+\n+ def remove_system_reqs_test(self):\n+ ref = ConanFileReference.loads(\"Test/0.1@user/channel\")\n+ client = TestClient()\n+ files = {'conanfile.py': base_conanfile.replace(\"%GLOBAL%\", \"\")}\n+ client.save(files)\n+ system_reqs_path = os.path.join(\n+ client.cache.package_layout(ref).conan(), SYSTEM_REQS_FOLDER)\n+\n+ # create package to populate system_reqs folder\n+ self.assertFalse(os.path.exists(system_reqs_path))\n+ client.run(\"create . user/channel\")\n+ self.assertIn(\"*+Running system requirements+*\", client.user_io.out)\n+ self.assertTrue(os.path.exists(system_reqs_path))\n+\n+ # a new build must not remove or re-run\n+ client.run(\"create . user/channel\")\n+ self.assertNotIn(\"*+Running system requirements+*\", client.user_io.out)\n+ self.assertTrue(os.path.exists(system_reqs_path))\n+\n+ # error must not remove anything\n+ with self.assertRaisesRegexp(\n+ Exception, \"ERROR: Please specify a valid package reference to be cleaned\"):\n+ client.run(\"remove --system-reqs\")\n+ self.assertTrue(os.path.exists(system_reqs_path))\n+\n+ # error must show exception message\n+ with self.assertRaises(Exception) as error:\n+ client.run(\"remove --system-reqs foo/bar@foo/bar\")\n+ self.assertIn(\"ERROR: Could not remove system_reqs: [Errno 2] No such file or directory:\", error.exception)\n+ self.assertTrue(os.path.exists(system_reqs_path))\n+\n+ # remove system_reqs global\n+ client.run(\"remove --system-reqs Test/0.1@user/channel\")\n+ self.assertIn(\"Cache system_reqs from Test/0.1@user/channel has been removed\",\n+ client.user_io.out)\n+ self.assertFalse(os.path.exists(system_reqs_path))\n+\n+ # re-create system_reqs folder\n+ client.run(\"create . user/channel\")\n+ self.assertIn(\"*+Running system requirements+*\", client.user_io.out)\n+ self.assertTrue(os.path.exists(system_reqs_path))\n"
}
|
[
{
"diff_hunk": "@@ -139,3 +144,45 @@ def wrong_output_test(self):\n pref = PackageReference(ref, \"f0ba3ca2c218df4a877080ba99b65834b9413798\")\n load_file = load(client.cache.system_reqs_package(pref))\n self.assertEqual('', load_file)\n+\n+ def remove_system_reqs_test(self):\n+ ref = ConanFileReference.loads(\"Test/0.1@user/channel\")\n+ client = TestClient()\n+ files = {'conanfile.py': base_conanfile.replace(\"%GLOBAL%\", \"\")}\n+ client.save(files)\n+ system_reqs_path = os.path.join(\n+ client.cache.package_layout(ref).conan(), SYSTEM_REQS_FOLDER)\n+\n+ # create package to populate system_reqs folder\n+ self.assertFalse(os.path.exists(system_reqs_path))\n+ client.run(\"create . user/channel\")\n+ self.assertIn(\"*+Running system requirements+*\", client.user_io.out)\n+ self.assertTrue(os.path.exists(system_reqs_path))\n+\n+ # a new build must not remove or re-run\n+ client.run(\"create . user/channel\")\n+ self.assertNotIn(\"*+Running system requirements+*\", client.user_io.out)\n+ self.assertTrue(os.path.exists(system_reqs_path))\n+\n+ # error must not remove anything\n+ with self.assertRaisesRegexp(\n+ Exception, \"ERROR: Please specify a valid package reference to be cleaned\"):\n+ client.run(\"remove --system-reqs\")\n+ self.assertTrue(os.path.exists(system_reqs_path))\n+\n+ # error must show exception message\n+ with self.assertRaises(Exception) as error:\n+ client.run(\"remove --system-reqs foo/bar@foo/bar\")\n+ self.assertIn(\"ERROR: Could not remove system_reqs: [Errno 2] No such file or directory:\", error.exception)",
"line": null,
"original_line": 176,
"original_start_line": null,
"path": "conans/test/integration/system_reqs_test.py",
"start_line": null,
"text": "@user1:\nI'm not sure the user is expecting this to happen. Probably only if something weird happens like permissions or whatever. If the recipe doesn't exist it should fail with that message. Otherwise if the recipe exists but there is no cached system requirements it should exit normally.\n\n@author:\nso, should we ignore errors, as made to `--locks` ?\n\n@author:\nDone, errors will be ignored.\n\n@user2:\nwhy ignore? permission errors happen, especially on Windows (e.g. due to the AV software, search indexers, etc.). as a user, I would prefer commands to report if they weren't success, and return correct exit code in that case.\r\nbut we should clearly distinguish cases of missing recipe vs permission issues for sure. \n\n@author:\nIf we change this behavior for system-reqs, I recommend do the same for `--locks`\n\n@user1:\nAs @user2 said I don't want to ignore the failures. But if a recipe doesn't exist it should fail by saying \"the recipe doesn't exist\", not a weird error. And if there is no system requirements folder for a recipe that exists, it should end with ok. (I think that test is missing)\n\n@author:\nunderstood\n\n@author:\n@user2 @user1 done! now it will raises friendly error messages."
}
] |
a79bd346136d93422a47bac5723e641f6122a43d
|
diff --git a/conans/client/cache.py b/conans/client/cache.py
index 8f7ac60d589..8d3e272bb44 100644
--- a/conans/client/cache.py
+++ b/conans/client/cache.py
@@ -14,12 +14,12 @@
from conans.model.profile import Profile
from conans.model.ref import ConanFileReference
from conans.model.settings import Settings
-from conans.paths import PUT_HEADERS
+from conans.paths import PUT_HEADERS, SYSTEM_REQS_FOLDER
from conans.paths.package_layouts.package_cache_layout import PackageCacheLayout
from conans.paths.package_layouts.package_editable_layout import PackageEditableLayout
from conans.paths.simple_paths import SimplePaths, check_ref_case
from conans.unicode import get_cwd
-from conans.util.files import list_folder_subdirs, load, normalize, save
+from conans.util.files import list_folder_subdirs, load, normalize, save, rmdir
from conans.util.locks import Lock, NoLock, ReadLock, SimpleLock, WriteLock
@@ -269,6 +269,19 @@ def delete_empty_dirs(self, deleted_refs):
break # not empty
ref_path = os.path.dirname(ref_path)
+ def remove_package_system_reqs(self, reference):
+ assert isinstance(reference, ConanFileReference)
+ conan_folder = self.conan(reference)
+ system_reqs_folder = os.path.join(conan_folder, SYSTEM_REQS_FOLDER)
+ if not os.path.exists(conan_folder):
+ raise ValueError("%s does not exist" % repr(reference))
+ if not os.path.exists(system_reqs_folder):
+ return
+ try:
+ rmdir(system_reqs_folder)
+ except Exception as e:
+ raise ConanException("Unable to remove system requirements at %s: %s" % (system_reqs_folder, str(e)))
+
def remove_locks(self):
folders = list_folder_subdirs(self._store_folder, 4)
for folder in folders:
diff --git a/conans/client/command.py b/conans/client/command.py
index 75ae8bb01ba..4511c19b75e 100644
--- a/conans/client/command.py
+++ b/conans/client/command.py
@@ -818,6 +818,8 @@ def remove(self, *args):
"specifying the package ID"))
parser.add_argument('-f', '--force', default=False, action='store_true',
help='Remove without requesting a confirmation')
+ parser.add_argument("-l", "--locks", default=False, action="store_true",
+ help="Remove locks")
parser.add_argument("-o", "--outdated", default=False, action="store_true",
help="Remove only outdated from recipe packages. " \
"This flag can only be used with a reference")
@@ -828,8 +830,8 @@ def remove(self, *args):
help='Will remove from the specified remote')
parser.add_argument('-s', '--src', default=False, action="store_true",
help='Remove source folders')
- parser.add_argument("-l", "--locks", default=False, action="store_true",
- help="Remove locks")
+ parser.add_argument('-t', '--system-reqs', default=False, action="store_true",
+ help='Remove system_reqs folders')
args = parser.parse_args(*args)
self._warn_python2()
@@ -853,6 +855,17 @@ def remove(self, *args):
self._cache.remove_locks()
self._user_io.out.info("Cache locks removed")
return
+ elif args.system_reqs:
+ if not ref:
+ raise ConanException("Please specify a valid package reference to be cleaned")
+ if args.packages:
+ raise ConanException("'-t' and '-p' parameters can't be used at the same time")
+ try:
+ self._cache.remove_package_system_reqs(ref)
+ self._user_io.out.info("Cache system_reqs from %s has been removed" % repr(ref))
+ return
+ except Exception as error:
+ raise ConanException("Unable to remove system_reqs: %s" % error)
else:
if not args.pattern_or_reference:
raise ConanException('Please specify a pattern to be removed ("*" for all)')
diff --git a/conans/conan.py b/conans/conan.py
old mode 100644
new mode 100755
diff --git a/conans/test/integration/system_reqs_test.py b/conans/test/integration/system_reqs_test.py
index 28b1fc74743..7a39aa554ff 100644
--- a/conans/test/integration/system_reqs_test.py
+++ b/conans/test/integration/system_reqs_test.py
@@ -1,6 +1,9 @@
import os
+import stat
import unittest
+from conans.paths import SYSTEM_REQS_FOLDER
+
from conans.model.ref import ConanFileReference, PackageReference
from conans.test.utils.tools import NO_SETTINGS_PACKAGE_ID, TestClient
from conans.util.files import load
@@ -22,7 +25,6 @@ def system_requirements(self):
class SystemReqsTest(unittest.TestCase):
-
def force_system_reqs_rerun_test(self):
client = TestClient()
files = {'conanfile.py': base_conanfile.replace("%GLOBAL%", "")}
@@ -92,8 +94,10 @@ def per_package_test(self):
def global_test(self):
client = TestClient()
- files = {'conanfile.py': base_conanfile.replace("%GLOBAL%",
- "self.global_system_requirements=True")}
+ files = {
+ 'conanfile.py': base_conanfile.replace("%GLOBAL%",
+ "self.global_system_requirements=True")
+ }
client.save(files)
client.run("export . user/testing")
client.run("install Test/0.1@user/testing --build missing")
@@ -128,8 +132,10 @@ def global_test(self):
def wrong_output_test(self):
client = TestClient()
- files = {'conanfile.py':
- base_conanfile.replace("%GLOBAL%", "").replace('"Installed my stuff"', 'None')}
+ files = {
+ 'conanfile.py':
+ base_conanfile.replace("%GLOBAL%", "").replace('"Installed my stuff"', 'None')
+ }
client.save(files)
client.run("export . user/testing")
client.run("install Test/0.1@user/testing --build missing")
@@ -139,3 +145,110 @@ def wrong_output_test(self):
pref = PackageReference(ref, "f0ba3ca2c218df4a877080ba99b65834b9413798")
load_file = load(client.cache.system_reqs_package(pref))
self.assertEqual('', load_file)
+
+ def remove_system_reqs_test(self):
+ ref = ConanFileReference.loads("Test/0.1@user/channel")
+ client = TestClient()
+ files = {'conanfile.py': base_conanfile.replace("%GLOBAL%", "")}
+ client.save(files)
+ system_reqs_path = os.path.join(
+ client.cache.package_layout(ref).conan(), SYSTEM_REQS_FOLDER)
+
+ # create package to populate system_reqs folder
+ self.assertFalse(os.path.exists(system_reqs_path))
+ client.run("create . user/channel")
+ self.assertIn("*+Running system requirements+*", client.user_io.out)
+ self.assertTrue(os.path.exists(system_reqs_path))
+
+ # a new build must not remove or re-run
+ client.run("create . user/channel")
+ self.assertNotIn("*+Running system requirements+*", client.user_io.out)
+ self.assertTrue(os.path.exists(system_reqs_path))
+
+ # remove system_reqs global
+ client.run("remove --system-reqs Test/0.1@user/channel")
+ self.assertIn("Cache system_reqs from Test/0.1@user/channel has been removed",
+ client.user_io.out)
+ self.assertFalse(os.path.exists(system_reqs_path))
+
+ # re-create system_reqs folder
+ client.run("create . user/channel")
+ self.assertIn("*+Running system requirements+*", client.user_io.out)
+ self.assertTrue(os.path.exists(system_reqs_path))
+
+ def invalid_remove_reqs_test(self):
+ client = TestClient()
+
+ with self.assertRaisesRegexp(
+ Exception, "ERROR: Please specify a valid package reference to be cleaned"):
+ client.run("remove --system-reqs")
+
+ # wrong file reference should be treated as error
+ with self.assertRaisesRegexp(
+ Exception, "ERROR: Unable to remove system_reqs: foo/version@bar/testing does not exist"):
+ client.run("remove --system-reqs foo/version@bar/testing")
+
+ # package is not supported with system_reqs
+ with self.assertRaisesRegexp(
+ Exception, "ERROR: '-t' and '-p' parameters can't be used at the same time"):
+ client.run("remove --system-reqs foo/bar@foo/bar -p f0ba3ca2c218df4a877080ba99b65834b9413798")
+
+ def permission_denied_remove_system_reqs_test(self):
+ ref = ConanFileReference.loads("Test/0.1@user/channel")
+ client = TestClient()
+ files = {'conanfile.py': base_conanfile.replace("%GLOBAL%", "")}
+ client.save(files)
+ system_reqs_path = os.path.join(
+ client.cache.package_layout(ref).conan(), SYSTEM_REQS_FOLDER)
+
+ # create package to populate system_reqs folder
+ self.assertFalse(os.path.exists(system_reqs_path))
+ client.run("create . user/channel")
+ self.assertIn("*+Running system requirements+*", client.user_io.out)
+ self.assertTrue(os.path.exists(system_reqs_path))
+
+ # remove write permission
+ current = stat.S_IMODE(os.lstat(system_reqs_path).st_mode)
+ os.chmod(system_reqs_path, current & ~stat.S_IWRITE)
+
+ # friendly message for permission error
+ with self.assertRaisesRegexp(
+ Exception, "ERROR: Unable to remove system_reqs:"):
+ client.run("remove --system-reqs Test/0.1@user/channel")
+ self.assertTrue(os.path.exists(system_reqs_path))
+
+ def duplicate_remove_system_reqs_test(self):
+ ref = ConanFileReference.loads("Test/0.1@user/channel")
+ client = TestClient()
+ files = {'conanfile.py': base_conanfile.replace("%GLOBAL%", "")}
+ client.save(files)
+ system_reqs_path = os.path.join(
+ client.cache.package_layout(ref).conan(), SYSTEM_REQS_FOLDER)
+
+ # create package to populate system_reqs folder
+ self.assertFalse(os.path.exists(system_reqs_path))
+ client.run("create . user/channel")
+ self.assertIn("*+Running system requirements+*", client.user_io.out)
+ self.assertTrue(os.path.exists(system_reqs_path))
+
+ # a new build must not remove or re-run
+ client.run("create . user/channel")
+ self.assertNotIn("*+Running system requirements+*", client.user_io.out)
+ self.assertTrue(os.path.exists(system_reqs_path))
+
+ # remove system_reqs global
+ client.run("remove --system-reqs Test/0.1@user/channel")
+ self.assertIn("Cache system_reqs from Test/0.1@user/channel has been removed",
+ client.user_io.out)
+ self.assertFalse(os.path.exists(system_reqs_path))
+
+ # try to remove system_reqs global again
+ client.run("remove --system-reqs Test/0.1@user/channel")
+ self.assertIn("Cache system_reqs from Test/0.1@user/channel has been removed",
+ client.user_io.out)
+ self.assertFalse(os.path.exists(system_reqs_path))
+
+ # re-create system_reqs folder
+ client.run("create . user/channel")
+ self.assertIn("*+Running system requirements+*", client.user_io.out)
+ self.assertTrue(os.path.exists(system_reqs_path))
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Dependency Updates & Env Compatibility"
}
|
frappe__erpnext-37828@84f0d1f
|
frappe/erpnext
|
Python
| 37,828
|
fix: payments irrespective of party types
|
**Problem**
Currently, when a Payment Entry is created, the ledger entries that get generated are configured in a way that does not allow the following scenarios -
- Receive from a Payable Party Type like Supplier / Shareholder / Employee.
- Pay to a Receivable Party Type like Customer.
**Solution**
- Remove the validation for checking negative outstanding amounts.
- Use the Payment Type instead of Party Type field for determining how GL entries are created for a payment.
Resolves https://github.com/frappe/erpnext/issues/37124
`no-docs`
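A minimal sketch of the new ledger-side rule, distilled from the change to `add_party_gl_entries` in this PR; the helper name is illustrative only:

```python
def party_dr_or_cr(payment_type: str) -> str:
    # The party row now follows the Payment Type rather than the party's
    # account type (Receivable/Payable), so receiving from a Supplier credits
    # the party account and paying a Customer debits it.
    return "credit" if payment_type == "Receive" else "debit"


assert party_dr_or_cr("Receive") == "credit"  # e.g. Receive from a Supplier
assert party_dr_or_cr("Pay") == "debit"       # e.g. Pay to a Customer
```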
|
2023-11-01T10:12:46Z
|
Payment Receive from Shareholder
### Information about bug
Trying to submit a payment of type receive from a shareholder, with cash payment mode, to a cash account from the creditors account causes an error. The Debit and Credit entries created are not equal. ERPNext puts both entries on one side instead of one entry in Debit and the other in Credit.
### Module
accounts
### Version
14.49
14.39
### Installation method
manual install
### Relevant log output / Stack trace / Full Error Message.
_No response_
|
Share Traceback?
|
[
{
"body": "### Information about bug\n\nTrying to submit a payment of type receive from a shareholder with cash payment mode to cash account from creditors account causes an error.. Debit and Credit entries created are not equal. ERPNext puts both entries on one side instead of one entry in Debit and the other in Credit. \n\n### Module\n\naccounts\n\n### Version\n\n14.49\r\n14.39\n\n### Installation method\n\nmanual install\n\n### Relevant log output / Stack trace / Full Error Message.\n\n_No response_",
"number": 37124,
"title": "Payment Receive from Shareholder "
}
] |
7e67d42d1d1837e47a5df3b1e6e22a72c996d761
|
{
"head_commit": "84f0d1ff1ff231f0388033d2304d977b07411253",
"head_commit_message": "chore: linting issues",
"patch_to_review": "diff --git a/erpnext/accounts/doctype/payment_entry/payment_entry.py b/erpnext/accounts/doctype/payment_entry/payment_entry.py\nindex e6403fddefe3..c0e3ab3ed49d 100644\n--- a/erpnext/accounts/doctype/payment_entry/payment_entry.py\n+++ b/erpnext/accounts/doctype/payment_entry/payment_entry.py\n@@ -33,6 +33,7 @@\n \tget_account_currency,\n \tget_balance_on,\n \tget_outstanding_invoices,\n+\tget_party_types_from_account_type,\n )\n from erpnext.controllers.accounts_controller import (\n \tAccountsController,\n@@ -83,7 +84,6 @@ def validate(self):\n \t\tself.apply_taxes()\n \t\tself.set_amounts_after_tax()\n \t\tself.clear_unallocated_reference_document_rows()\n-\t\tself.validate_payment_against_negative_invoice()\n \t\tself.validate_transaction_reference()\n \t\tself.set_title()\n \t\tself.set_remarks()\n@@ -952,35 +952,6 @@ def clear_unallocated_reference_document_rows(self):\n \t\t\tself.name,\n \t\t)\n \n-\tdef validate_payment_against_negative_invoice(self):\n-\t\tif (self.payment_type != \"Pay\" or self.party_type != \"Customer\") and (\n-\t\t\tself.payment_type != \"Receive\" or self.party_type != \"Supplier\"\n-\t\t):\n-\t\t\treturn\n-\n-\t\ttotal_negative_outstanding = sum(\n-\t\t\tabs(flt(d.outstanding_amount)) for d in self.get(\"references\") if flt(d.outstanding_amount) < 0\n-\t\t)\n-\n-\t\tpaid_amount = self.paid_amount if self.payment_type == \"Receive\" else self.received_amount\n-\t\tadditional_charges = sum(flt(d.amount) for d in self.deductions)\n-\n-\t\tif not total_negative_outstanding:\n-\t\t\tif self.party_type == \"Customer\":\n-\t\t\t\tmsg = _(\"Cannot pay to Customer without any negative outstanding invoice\")\n-\t\t\telse:\n-\t\t\t\tmsg = _(\"Cannot receive from Supplier without any negative outstanding invoice\")\n-\n-\t\t\tfrappe.throw(msg, InvalidPaymentEntry)\n-\n-\t\telif paid_amount - additional_charges > total_negative_outstanding:\n-\t\t\tfrappe.throw(\n-\t\t\t\t_(\"Paid Amount cannot be greater than total negative outstanding amount {0}\").format(\n-\t\t\t\t\tfmt_money(total_negative_outstanding)\n-\t\t\t\t),\n-\t\t\t\tInvalidPaymentEntry,\n-\t\t\t)\n-\n \tdef set_title(self):\n \t\tif frappe.flags.in_import and self.title:\n \t\t\t# do not set title dynamically if title exists during data import.\n@@ -1083,9 +1054,7 @@ def add_party_gl_entries(self, gl_entries):\n \t\t\t\titem=self,\n \t\t\t)\n \n-\t\t\tdr_or_cr = (\n-\t\t\t\t\"credit\" if erpnext.get_party_account_type(self.party_type) == \"Receivable\" else \"debit\"\n-\t\t\t)\n+\t\t\tdr_or_cr = \"credit\" if self.payment_type == \"Receive\" else \"debit\"\n \n \t\t\tfor d in self.get(\"references\"):\n \t\t\t\tcost_center = self.cost_center\n@@ -1103,10 +1072,27 @@ def add_party_gl_entries(self, gl_entries):\n \t\t\t\t\tagainst_voucher_type = d.reference_doctype\n \t\t\t\t\tagainst_voucher = d.reference_name\n \n+\t\t\t\treverse_dr_or_cr, standalone_note = 0, 0\n+\t\t\t\tif d.reference_doctype in [\"Sales Invoice\", \"Purchase Invoice\"]:\n+\t\t\t\t\tis_return, return_against = frappe.db.get_value(\n+\t\t\t\t\t\td.reference_doctype, d.reference_name, [\"is_return\", \"return_against\"]\n+\t\t\t\t\t)\n+\t\t\t\t\tpayable_party_types = get_party_types_from_account_type(\"Payable\")\n+\t\t\t\t\treceivable_party_types = get_party_types_from_account_type(\"Receivable\")\n+\t\t\t\t\tif is_return and self.party_type in receivable_party_types and (self.payment_type == \"Pay\"):\n+\t\t\t\t\t\treverse_dr_or_cr = 1\n+\t\t\t\t\telif (\n+\t\t\t\t\t\tis_return and self.party_type in 
payable_party_types and (self.payment_type == \"Receive\")\n+\t\t\t\t\t):\n+\t\t\t\t\t\treverse_dr_or_cr = 1\n+\n+\t\t\t\t\tif is_return and not return_against and not reverse_dr_or_cr:\n+\t\t\t\t\t\tdr_or_cr = \"debit\" if dr_or_cr == \"credit\" else \"credit\"\n+\n \t\t\t\tgle.update(\n \t\t\t\t\t{\n-\t\t\t\t\t\tdr_or_cr: allocated_amount_in_company_currency,\n-\t\t\t\t\t\tdr_or_cr + \"_in_account_currency\": d.allocated_amount,\n+\t\t\t\t\t\tdr_or_cr: abs(allocated_amount_in_company_currency),\n+\t\t\t\t\t\tdr_or_cr + \"_in_account_currency\": abs(d.allocated_amount),\n \t\t\t\t\t\t\"against_voucher_type\": against_voucher_type,\n \t\t\t\t\t\t\"against_voucher\": against_voucher,\n \t\t\t\t\t\t\"cost_center\": cost_center,\ndiff --git a/erpnext/accounts/doctype/payment_entry/test_payment_entry.py b/erpnext/accounts/doctype/payment_entry/test_payment_entry.py\nindex edfec419181c..b6b93b6b1109 100644\n--- a/erpnext/accounts/doctype/payment_entry/test_payment_entry.py\n+++ b/erpnext/accounts/doctype/payment_entry/test_payment_entry.py\n@@ -683,17 +683,6 @@ def test_internal_transfer_usd_to_inr(self):\n \t\tself.validate_gl_entries(pe.name, expected_gle)\n \n \tdef test_payment_against_negative_sales_invoice(self):\n-\t\tpe1 = frappe.new_doc(\"Payment Entry\")\n-\t\tpe1.payment_type = \"Pay\"\n-\t\tpe1.company = \"_Test Company\"\n-\t\tpe1.party_type = \"Customer\"\n-\t\tpe1.party = \"_Test Customer\"\n-\t\tpe1.paid_from = \"_Test Cash - _TC\"\n-\t\tpe1.paid_amount = 100\n-\t\tpe1.received_amount = 100\n-\n-\t\tself.assertRaises(InvalidPaymentEntry, pe1.validate)\n-\n \t\tsi1 = create_sales_invoice()\n \n \t\t# create full payment entry against si1\n@@ -751,8 +740,6 @@ def test_payment_against_negative_sales_invoice(self):\n \n \t\t# pay more than outstanding against si1\n \t\tpe3 = get_payment_entry(\"Sales Invoice\", si1.name, bank_account=\"_Test Cash - _TC\")\n-\t\tpe3.paid_amount = pe3.received_amount = 300\n-\t\tself.assertRaises(InvalidPaymentEntry, pe3.validate)\n \n \t\t# pay negative outstanding against si1\n \t\tpe3.paid_to = \"Debtors - _TC\"\n@@ -1262,6 +1249,39 @@ def test_allocation_validation_for_sales_order(self):\n \t\tso.reload()\n \t\tself.assertEqual(so.advance_paid, so.rounded_total)\n \n+\tdef test_receive_payment_from_payable_party_type(self):\n+\t\tpe = create_payment_entry(\n+\t\t\tparty_type=\"Supplier\",\n+\t\t\tparty=\"_Test Supplier\",\n+\t\t\tpayment_type=\"Receive\",\n+\t\t\tpaid_from=\"Creditors - _TC\",\n+\t\t\tpaid_to=\"_Test Cash - _TC\",\n+\t\t\tsave=True,\n+\t\t\tsubmit=True,\n+\t\t)\n+\t\tself.voucher_no = pe.name\n+\t\tself.expected_gle = [\n+\t\t\t{\"account\": \"_Test Cash - _TC\", \"debit\": 1000.0, \"credit\": 0.0},\n+\t\t\t{\"account\": \"Creditors - _TC\", \"debit\": 0.0, \"credit\": 1000.0},\n+\t\t]\n+\t\tself.check_gl_entries()\n+\n+\tdef check_gl_entries(self):\n+\t\tgle = frappe.qb.DocType(\"GL Entry\")\n+\t\tgl_entries = (\n+\t\t\tfrappe.qb.from_(gle)\n+\t\t\t.select(\n+\t\t\t\tgle.account,\n+\t\t\t\tgle.debit,\n+\t\t\t\tgle.credit,\n+\t\t\t)\n+\t\t\t.where((gle.voucher_no == self.voucher_no) & (gle.is_cancelled == 0))\n+\t\t\t.orderby(gle.account)\n+\t\t).run(as_dict=True)\n+\t\tfor row in range(len(self.expected_gle)):\n+\t\t\tfor field in [\"account\", \"debit\", \"credit\"]:\n+\t\t\t\tself.assertEqual(self.expected_gle[row][field], gl_entries[row][field])\n+\n \n def create_payment_entry(**args):\n \tpayment_entry = frappe.new_doc(\"Payment Entry\")\ndiff --git a/erpnext/accounts/report/accounts_receivable/accounts_receivable.py 
b/erpnext/accounts/report/accounts_receivable/accounts_receivable.py\nindex 20444f949644..4cc0f0c6d1c1 100755\n--- a/erpnext/accounts/report/accounts_receivable/accounts_receivable.py\n+++ b/erpnext/accounts/report/accounts_receivable/accounts_receivable.py\n@@ -14,7 +14,7 @@\n \tget_accounting_dimensions,\n \tget_dimension_with_children,\n )\n-from erpnext.accounts.utils import get_currency_precision\n+from erpnext.accounts.utils import get_currency_precision, get_party_types_from_account_type\n \n # This report gives a summary of all Outstanding Invoices considering the following\n \n@@ -72,9 +72,7 @@ def set_defaults(self):\n \t\tself.currency_precision = get_currency_precision() or 2\n \t\tself.dr_or_cr = \"debit\" if self.filters.account_type == \"Receivable\" else \"credit\"\n \t\tself.account_type = self.filters.account_type\n-\t\tself.party_type = frappe.db.get_all(\n-\t\t\t\"Party Type\", {\"account_type\": self.account_type}, pluck=\"name\"\n-\t\t)\n+\t\tself.party_type = get_party_types_from_account_type(self.account_type)\n \t\tself.party_details = {}\n \t\tself.invoices = set()\n \t\tself.skip_total_row = 0\ndiff --git a/erpnext/accounts/report/accounts_receivable_summary/accounts_receivable_summary.py b/erpnext/accounts/report/accounts_receivable_summary/accounts_receivable_summary.py\nindex 60274cd8b108..d50cf0708e29 100644\n--- a/erpnext/accounts/report/accounts_receivable_summary/accounts_receivable_summary.py\n+++ b/erpnext/accounts/report/accounts_receivable_summary/accounts_receivable_summary.py\n@@ -8,6 +8,7 @@\n \n from erpnext.accounts.party import get_partywise_advanced_payment_amount\n from erpnext.accounts.report.accounts_receivable.accounts_receivable import ReceivablePayableReport\n+from erpnext.accounts.utils import get_party_types_from_account_type\n \n \n def execute(filters=None):\n@@ -22,9 +23,7 @@ def execute(filters=None):\n class AccountsReceivableSummary(ReceivablePayableReport):\n \tdef run(self, args):\n \t\tself.account_type = args.get(\"account_type\")\n-\t\tself.party_type = frappe.db.get_all(\n-\t\t\t\"Party Type\", {\"account_type\": self.account_type}, pluck=\"name\"\n-\t\t)\n+\t\tself.party_type = get_party_types_from_account_type(self.account_type)\n \t\tself.party_naming_by = frappe.db.get_value(\n \t\t\targs.get(\"naming_by\")[0], None, args.get(\"naming_by\")[1]\n \t\t)\ndiff --git a/erpnext/accounts/report/tax_withholding_details/tax_withholding_details.py b/erpnext/accounts/report/tax_withholding_details/tax_withholding_details.py\nindex e842d2e8dc45..06c9e44b4557 100644\n--- a/erpnext/accounts/report/tax_withholding_details/tax_withholding_details.py\n+++ b/erpnext/accounts/report/tax_withholding_details/tax_withholding_details.py\n@@ -316,7 +316,7 @@ def get_tds_docs_query(filters, bank_accounts, tds_accounts):\n \tif not tds_accounts:\n \t\tfrappe.throw(\n \t\t\t_(\"No {0} Accounts found for this company.\").format(frappe.bold(\"Tax Withholding\")),\n-\t\t\ttitle=\"Accounts Missing Error\",\n+\t\t\ttitle=_(\"Accounts Missing Error\"),\n \t\t)\n \tgle = frappe.qb.DocType(\"GL Entry\")\n \tquery = (\ndiff --git a/erpnext/accounts/utils.py b/erpnext/accounts/utils.py\nindex 1c7052f8ffb6..0a116a734567 100644\n--- a/erpnext/accounts/utils.py\n+++ b/erpnext/accounts/utils.py\n@@ -2041,3 +2041,7 @@ def create_gain_loss_journal(\n \tjournal_entry.save()\n \tjournal_entry.submit()\n \treturn journal_entry.name\n+\n+\n+def get_party_types_from_account_type(account_type):\n+\treturn frappe.db.get_list(\"Party Type\", {\"account_type\": 
account_type}, pluck=\"name\")\n"
}
|
[
{
"diff_hunk": "@@ -2041,3 +2041,7 @@ def create_gain_loss_journal(\n \tjournal_entry.save()\n \tjournal_entry.submit()\n \treturn journal_entry.name\n+\n+\n+def get_party_types_from_account_type(account_type):\n+\treturn frappe.db.get_list(\"Party Type\", {\"account_type\": account_type}, pluck=\"name\")",
"line": null,
"original_line": 2047,
"original_start_line": null,
"path": "erpnext/accounts/utils.py",
"start_line": null,
"text": "@user1:\nUse `get_all` instead of `get_list`, `get_list` might cause unwanted permission issues"
}
] |
2984a86f37e72bbdefd0348d7c456bd6ada540bb
|
diff --git a/erpnext/accounts/doctype/payment_entry/payment_entry.py b/erpnext/accounts/doctype/payment_entry/payment_entry.py
index e6403fddefe3..c0e3ab3ed49d 100644
--- a/erpnext/accounts/doctype/payment_entry/payment_entry.py
+++ b/erpnext/accounts/doctype/payment_entry/payment_entry.py
@@ -33,6 +33,7 @@
get_account_currency,
get_balance_on,
get_outstanding_invoices,
+ get_party_types_from_account_type,
)
from erpnext.controllers.accounts_controller import (
AccountsController,
@@ -83,7 +84,6 @@ def validate(self):
self.apply_taxes()
self.set_amounts_after_tax()
self.clear_unallocated_reference_document_rows()
- self.validate_payment_against_negative_invoice()
self.validate_transaction_reference()
self.set_title()
self.set_remarks()
@@ -952,35 +952,6 @@ def clear_unallocated_reference_document_rows(self):
self.name,
)
- def validate_payment_against_negative_invoice(self):
- if (self.payment_type != "Pay" or self.party_type != "Customer") and (
- self.payment_type != "Receive" or self.party_type != "Supplier"
- ):
- return
-
- total_negative_outstanding = sum(
- abs(flt(d.outstanding_amount)) for d in self.get("references") if flt(d.outstanding_amount) < 0
- )
-
- paid_amount = self.paid_amount if self.payment_type == "Receive" else self.received_amount
- additional_charges = sum(flt(d.amount) for d in self.deductions)
-
- if not total_negative_outstanding:
- if self.party_type == "Customer":
- msg = _("Cannot pay to Customer without any negative outstanding invoice")
- else:
- msg = _("Cannot receive from Supplier without any negative outstanding invoice")
-
- frappe.throw(msg, InvalidPaymentEntry)
-
- elif paid_amount - additional_charges > total_negative_outstanding:
- frappe.throw(
- _("Paid Amount cannot be greater than total negative outstanding amount {0}").format(
- fmt_money(total_negative_outstanding)
- ),
- InvalidPaymentEntry,
- )
-
def set_title(self):
if frappe.flags.in_import and self.title:
# do not set title dynamically if title exists during data import.
@@ -1083,9 +1054,7 @@ def add_party_gl_entries(self, gl_entries):
item=self,
)
- dr_or_cr = (
- "credit" if erpnext.get_party_account_type(self.party_type) == "Receivable" else "debit"
- )
+ dr_or_cr = "credit" if self.payment_type == "Receive" else "debit"
for d in self.get("references"):
cost_center = self.cost_center
@@ -1103,10 +1072,27 @@ def add_party_gl_entries(self, gl_entries):
against_voucher_type = d.reference_doctype
against_voucher = d.reference_name
+ reverse_dr_or_cr, standalone_note = 0, 0
+ if d.reference_doctype in ["Sales Invoice", "Purchase Invoice"]:
+ is_return, return_against = frappe.db.get_value(
+ d.reference_doctype, d.reference_name, ["is_return", "return_against"]
+ )
+ payable_party_types = get_party_types_from_account_type("Payable")
+ receivable_party_types = get_party_types_from_account_type("Receivable")
+ if is_return and self.party_type in receivable_party_types and (self.payment_type == "Pay"):
+ reverse_dr_or_cr = 1
+ elif (
+ is_return and self.party_type in payable_party_types and (self.payment_type == "Receive")
+ ):
+ reverse_dr_or_cr = 1
+
+ if is_return and not return_against and not reverse_dr_or_cr:
+ dr_or_cr = "debit" if dr_or_cr == "credit" else "credit"
+
gle.update(
{
- dr_or_cr: allocated_amount_in_company_currency,
- dr_or_cr + "_in_account_currency": d.allocated_amount,
+ dr_or_cr: abs(allocated_amount_in_company_currency),
+ dr_or_cr + "_in_account_currency": abs(d.allocated_amount),
"against_voucher_type": against_voucher_type,
"against_voucher": against_voucher,
"cost_center": cost_center,
diff --git a/erpnext/accounts/doctype/payment_entry/test_payment_entry.py b/erpnext/accounts/doctype/payment_entry/test_payment_entry.py
index edfec419181c..b6b93b6b1109 100644
--- a/erpnext/accounts/doctype/payment_entry/test_payment_entry.py
+++ b/erpnext/accounts/doctype/payment_entry/test_payment_entry.py
@@ -683,17 +683,6 @@ def test_internal_transfer_usd_to_inr(self):
self.validate_gl_entries(pe.name, expected_gle)
def test_payment_against_negative_sales_invoice(self):
- pe1 = frappe.new_doc("Payment Entry")
- pe1.payment_type = "Pay"
- pe1.company = "_Test Company"
- pe1.party_type = "Customer"
- pe1.party = "_Test Customer"
- pe1.paid_from = "_Test Cash - _TC"
- pe1.paid_amount = 100
- pe1.received_amount = 100
-
- self.assertRaises(InvalidPaymentEntry, pe1.validate)
-
si1 = create_sales_invoice()
# create full payment entry against si1
@@ -751,8 +740,6 @@ def test_payment_against_negative_sales_invoice(self):
# pay more than outstanding against si1
pe3 = get_payment_entry("Sales Invoice", si1.name, bank_account="_Test Cash - _TC")
- pe3.paid_amount = pe3.received_amount = 300
- self.assertRaises(InvalidPaymentEntry, pe3.validate)
# pay negative outstanding against si1
pe3.paid_to = "Debtors - _TC"
@@ -1262,6 +1249,39 @@ def test_allocation_validation_for_sales_order(self):
so.reload()
self.assertEqual(so.advance_paid, so.rounded_total)
+ def test_receive_payment_from_payable_party_type(self):
+ pe = create_payment_entry(
+ party_type="Supplier",
+ party="_Test Supplier",
+ payment_type="Receive",
+ paid_from="Creditors - _TC",
+ paid_to="_Test Cash - _TC",
+ save=True,
+ submit=True,
+ )
+ self.voucher_no = pe.name
+ self.expected_gle = [
+ {"account": "_Test Cash - _TC", "debit": 1000.0, "credit": 0.0},
+ {"account": "Creditors - _TC", "debit": 0.0, "credit": 1000.0},
+ ]
+ self.check_gl_entries()
+
+ def check_gl_entries(self):
+ gle = frappe.qb.DocType("GL Entry")
+ gl_entries = (
+ frappe.qb.from_(gle)
+ .select(
+ gle.account,
+ gle.debit,
+ gle.credit,
+ )
+ .where((gle.voucher_no == self.voucher_no) & (gle.is_cancelled == 0))
+ .orderby(gle.account)
+ ).run(as_dict=True)
+ for row in range(len(self.expected_gle)):
+ for field in ["account", "debit", "credit"]:
+ self.assertEqual(self.expected_gle[row][field], gl_entries[row][field])
+
def create_payment_entry(**args):
payment_entry = frappe.new_doc("Payment Entry")
diff --git a/erpnext/accounts/report/accounts_receivable/accounts_receivable.py b/erpnext/accounts/report/accounts_receivable/accounts_receivable.py
index 20444f949644..4cc0f0c6d1c1 100755
--- a/erpnext/accounts/report/accounts_receivable/accounts_receivable.py
+++ b/erpnext/accounts/report/accounts_receivable/accounts_receivable.py
@@ -14,7 +14,7 @@
get_accounting_dimensions,
get_dimension_with_children,
)
-from erpnext.accounts.utils import get_currency_precision
+from erpnext.accounts.utils import get_currency_precision, get_party_types_from_account_type
# This report gives a summary of all Outstanding Invoices considering the following
@@ -72,9 +72,7 @@ def set_defaults(self):
self.currency_precision = get_currency_precision() or 2
self.dr_or_cr = "debit" if self.filters.account_type == "Receivable" else "credit"
self.account_type = self.filters.account_type
- self.party_type = frappe.db.get_all(
- "Party Type", {"account_type": self.account_type}, pluck="name"
- )
+ self.party_type = get_party_types_from_account_type(self.account_type)
self.party_details = {}
self.invoices = set()
self.skip_total_row = 0
diff --git a/erpnext/accounts/report/accounts_receivable_summary/accounts_receivable_summary.py b/erpnext/accounts/report/accounts_receivable_summary/accounts_receivable_summary.py
index 60274cd8b108..d50cf0708e29 100644
--- a/erpnext/accounts/report/accounts_receivable_summary/accounts_receivable_summary.py
+++ b/erpnext/accounts/report/accounts_receivable_summary/accounts_receivable_summary.py
@@ -8,6 +8,7 @@
from erpnext.accounts.party import get_partywise_advanced_payment_amount
from erpnext.accounts.report.accounts_receivable.accounts_receivable import ReceivablePayableReport
+from erpnext.accounts.utils import get_party_types_from_account_type
def execute(filters=None):
@@ -22,9 +23,7 @@ def execute(filters=None):
class AccountsReceivableSummary(ReceivablePayableReport):
def run(self, args):
self.account_type = args.get("account_type")
- self.party_type = frappe.db.get_all(
- "Party Type", {"account_type": self.account_type}, pluck="name"
- )
+ self.party_type = get_party_types_from_account_type(self.account_type)
self.party_naming_by = frappe.db.get_value(
args.get("naming_by")[0], None, args.get("naming_by")[1]
)
diff --git a/erpnext/accounts/report/tax_withholding_details/tax_withholding_details.py b/erpnext/accounts/report/tax_withholding_details/tax_withholding_details.py
index e842d2e8dc45..06c9e44b4557 100644
--- a/erpnext/accounts/report/tax_withholding_details/tax_withholding_details.py
+++ b/erpnext/accounts/report/tax_withholding_details/tax_withholding_details.py
@@ -316,7 +316,7 @@ def get_tds_docs_query(filters, bank_accounts, tds_accounts):
if not tds_accounts:
frappe.throw(
_("No {0} Accounts found for this company.").format(frappe.bold("Tax Withholding")),
- title="Accounts Missing Error",
+ title=_("Accounts Missing Error"),
)
gle = frappe.qb.DocType("GL Entry")
query = (
diff --git a/erpnext/accounts/utils.py b/erpnext/accounts/utils.py
index 1c7052f8ffb6..aef0c38c634f 100644
--- a/erpnext/accounts/utils.py
+++ b/erpnext/accounts/utils.py
@@ -2041,3 +2041,7 @@ def create_gain_loss_journal(
journal_entry.save()
journal_entry.submit()
return journal_entry.name
+
+
+def get_party_types_from_account_type(account_type):
+ return frappe.db.get_all("Party Type", {"account_type": account_type}, pluck="name")
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
conan-io__conan-4273@a16ecfa
|
conan-io/conan
|
Python
| 4,273
|
fixing version ranges with spaces
|
Changelog: Bugfix: Fix version ranges containing spaces and not separated by commas.
Docs: Omit
Close #4270
I think it doesn't make sense to replace whitespace in the middle of a <ref>. That should be an error, not "magically" fixed.
|
2019-01-10T15:11:56Z
|
conan install unexpectedly reports invalid version range
I have this requirement in my `conanfile.py`:
`foo/[>16.5.0 <17.0.0]@bar/stable`
`conan install` aborts with this error:
`ERROR: version range expression '>16.5.0<17.0.0' is not valid`
note: using the deprecated comma-based syntax works fine
* Python 3.6 on CentOS 7.5
* Conan 1.11.2
* node-semver 0.6.1
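For reference, a minimal recipe reproducing the report (the `foo`/`bar` names come from the issue; the recipe name and version are only illustrative):

```python
from conans import ConanFile


class AppConan(ConanFile):
    name = "app"
    version = "1.0"
    # space-separated range, equivalent to the deprecated comma syntax ">16.5.0,<17.0.0"
    requires = "foo/[>16.5.0 <17.0.0]@bar/stable"
```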
|
Reproduced, indeed an undesired bug. Thanks for reporting, @mistafunk!
|
[
{
"body": "i have this requirement in my `conanfile.py`:\r\n`foo/[>16.5.0 <17.0.0]@bar/stable`\r\n\r\n`conan install` aborts with this error:\r\n`ERROR: version range expression '>16.5.0<17.0.0' is not valid`\r\n\r\nnote: using the deprecated comma-based syntax works fine\r\n\r\n* Python 3.6 on CentOS 7.5\r\n* Conan 1.11.2\r\n* node-semver 0.6.1",
"number": 4270,
"title": "conan install unexpectedly reports invalid version range"
}
] |
0e8fd48f409faa06ce468d7dae2d01da270dc95d
|
{
"head_commit": "a16ecfa535268733359c4b3f20b0f2bac3013433",
"head_commit_message": "simplified testing",
"patch_to_review": "diff --git a/conans/model/ref.py b/conans/model/ref.py\nindex 560534eade0..984bb0e3d98 100644\n--- a/conans/model/ref.py\n+++ b/conans/model/ref.py\n@@ -78,7 +78,6 @@ class ConanFileReference(namedtuple(\"ConanFileReference\", \"name version user cha\n \"\"\" Full reference of a package recipes, e.g.:\n opencv/2.4.10@lasote/testing\n \"\"\"\n- whitespace_pattern = re.compile(r\"\\s+\")\n sep_pattern = re.compile(r\"([^/]+)/([^/]+)@([^/]+)/([^/#]+)#?(.+)?\")\n \n def __new__(cls, name, version, user, channel, revision=None, validate=True):\n@@ -107,7 +106,6 @@ def _validate(self):\n def loads(text, validate=True):\n \"\"\" Parses a text string to generate a ConanFileReference object\n \"\"\"\n- text = ConanFileReference.whitespace_pattern.sub(\"\", text)\n try:\n # Split returns empty start and end groups\n _, name, version, user, channel, revision, _ = ConanFileReference.sep_pattern.split(text)\ndiff --git a/conans/test/functional/old/paths_test.py b/conans/test/functional/old/paths_test.py\nindex 6fe72b0e7ee..5c79530e7f7 100644\n--- a/conans/test/functional/old/paths_test.py\n+++ b/conans/test/functional/old/paths_test.py\n@@ -32,7 +32,7 @@ def basic_test(self):\n folder = temp_folder()\n paths = SimplePaths(folder)\n self.assertEqual(paths._store_folder, folder)\n- conan_ref = ConanFileReference.loads(\"opencv/2.4.10 @ lasote /testing\")\n+ conan_ref = ConanFileReference.loads(\"opencv/2.4.10@lasote/testing\")\n package_ref = PackageReference(conan_ref, \"456fa678eae68\")\n expected_base = os.path.join(folder, os.path.sep.join([\"opencv\", \"2.4.10\",\n \"lasote\", \"testing\"]))\n@@ -84,4 +84,3 @@ def test_with_env_variable(self, _, short_paths):\n \n self.assertEqual(self.home_short in r, short_paths)\n self.assertEqual(self.home in r, not short_paths)\n-\ndiff --git a/conans/test/unittests/model/ref_test.py b/conans/test/unittests/model/ref_test.py\nindex 0497334ec87..929e55d796b 100644\n--- a/conans/test/unittests/model/ref_test.py\n+++ b/conans/test/unittests/model/ref_test.py\n@@ -7,7 +7,7 @@\n \n class RefTest(unittest.TestCase):\n def basic_test(self):\n- ref = ConanFileReference.loads(\"opencv/2.4.10 @ lasote/testing\")\n+ ref = ConanFileReference.loads(\"opencv/2.4.10@lasote/testing\")\n self.assertEqual(ref.name, \"opencv\")\n self.assertEqual(ref.version, \"2.4.10\")\n self.assertEqual(ref.user, \"lasote\")\n@@ -37,7 +37,7 @@ def basic_test(self):\n def errors_test(self):\n self.assertRaises(ConanException, ConanFileReference.loads, \"\")\n self.assertRaises(ConanException, ConanFileReference.loads, \"opencv/2.4.10\")\n- self.assertRaises(ConanException, ConanFileReference.loads, \"opencv/2.4.10 @ lasote\")\n+ self.assertRaises(ConanException, ConanFileReference.loads, \"opencv/2.4.10@lasote\")\n self.assertRaises(ConanException, ConanFileReference.loads, \"opencv??/2.4.10@laso/testing\")\n self.assertRaises(ConanException, ConanFileReference.loads, \".opencv/2.4.10@lasote/testing\")\n self.assertRaises(ConanException, ConanFileReference.loads, \"o/2.4.10 @ lasote/testing\")\ndiff --git a/conans/test/unittests/model/version_ranges_test.py b/conans/test/unittests/model/version_ranges_test.py\nindex 6790896b78f..9917d68a3e7 100644\n--- a/conans/test/unittests/model/version_ranges_test.py\n+++ b/conans/test/unittests/model/version_ranges_test.py\n@@ -287,8 +287,10 @@ class HelloConan(ConanFile):\n # ranges\n ('\"Say/[<=1.2]@myuser/testing\"', \"1.2.1\", False, False),\n ('\"Say/[>=0.2,<=1.0]@myuser/testing\"', \"0.3\", False, True),\n+ ('\"Say/[>=0.2 
<=1.0]@myuser/testing\"', \"0.3\", False, True),\n ('(\"Say/[<=1.2]@myuser/testing\", \"override\")', \"1.2.1\", True, False),\n ('(\"Say/[>=0.2,<=1.0]@myuser/testing\", \"override\")', \"0.3\", True, True),\n+ ('(\"Say/[>=0.2 <=1.0]@myuser/testing\", \"override\")', \"0.3\", True, True),\n ])\n def transitive_test(self, version_range, solution, override, valid):\n hello_text = hello_content % \">0.1, <1\"\n"
}
|
[
{
"diff_hunk": "@@ -37,7 +37,7 @@ def basic_test(self):\n def errors_test(self):\n self.assertRaises(ConanException, ConanFileReference.loads, \"\")\n self.assertRaises(ConanException, ConanFileReference.loads, \"opencv/2.4.10\")\n- self.assertRaises(ConanException, ConanFileReference.loads, \"opencv/2.4.10 @ lasote\")\n+ self.assertRaises(ConanException, ConanFileReference.loads, \"opencv/2.4.10@lasote\")\n self.assertRaises(ConanException, ConanFileReference.loads, \"opencv??/2.4.10@laso/testing\")\n self.assertRaises(ConanException, ConanFileReference.loads, \".opencv/2.4.10@lasote/testing\")\n self.assertRaises(ConanException, ConanFileReference.loads, \"o/2.4.10 @ lasote/testing\")",
"line": 45,
"original_line": 43,
"original_start_line": null,
"path": "conans/test/unittests/model/ref_test.py",
"start_line": null,
"text": "@user2:\nChange this test to:\r\n\r\n```\r\n\"o/2.4.10@user1/testing\"\r\n```\r\n\r\nit should fail because the `name` is too short, not because of the spaces"
}
] |
6b6411d514b6b2a821129a7d307c6dc43913653d
|
diff --git a/conans/model/ref.py b/conans/model/ref.py
index 560534eade0..984bb0e3d98 100644
--- a/conans/model/ref.py
+++ b/conans/model/ref.py
@@ -78,7 +78,6 @@ class ConanFileReference(namedtuple("ConanFileReference", "name version user cha
""" Full reference of a package recipes, e.g.:
opencv/2.4.10@lasote/testing
"""
- whitespace_pattern = re.compile(r"\s+")
sep_pattern = re.compile(r"([^/]+)/([^/]+)@([^/]+)/([^/#]+)#?(.+)?")
def __new__(cls, name, version, user, channel, revision=None, validate=True):
@@ -107,7 +106,6 @@ def _validate(self):
def loads(text, validate=True):
""" Parses a text string to generate a ConanFileReference object
"""
- text = ConanFileReference.whitespace_pattern.sub("", text)
try:
# Split returns empty start and end groups
_, name, version, user, channel, revision, _ = ConanFileReference.sep_pattern.split(text)
diff --git a/conans/test/functional/old/paths_test.py b/conans/test/functional/old/paths_test.py
index 6fe72b0e7ee..5c79530e7f7 100644
--- a/conans/test/functional/old/paths_test.py
+++ b/conans/test/functional/old/paths_test.py
@@ -32,7 +32,7 @@ def basic_test(self):
folder = temp_folder()
paths = SimplePaths(folder)
self.assertEqual(paths._store_folder, folder)
- conan_ref = ConanFileReference.loads("opencv/2.4.10 @ lasote /testing")
+ conan_ref = ConanFileReference.loads("opencv/2.4.10@lasote/testing")
package_ref = PackageReference(conan_ref, "456fa678eae68")
expected_base = os.path.join(folder, os.path.sep.join(["opencv", "2.4.10",
"lasote", "testing"]))
@@ -84,4 +84,3 @@ def test_with_env_variable(self, _, short_paths):
self.assertEqual(self.home_short in r, short_paths)
self.assertEqual(self.home in r, not short_paths)
-
diff --git a/conans/test/unittests/model/ref_test.py b/conans/test/unittests/model/ref_test.py
index 0497334ec87..e079108469f 100644
--- a/conans/test/unittests/model/ref_test.py
+++ b/conans/test/unittests/model/ref_test.py
@@ -7,7 +7,7 @@
class RefTest(unittest.TestCase):
def basic_test(self):
- ref = ConanFileReference.loads("opencv/2.4.10 @ lasote/testing")
+ ref = ConanFileReference.loads("opencv/2.4.10@lasote/testing")
self.assertEqual(ref.name, "opencv")
self.assertEqual(ref.version, "2.4.10")
self.assertEqual(ref.user, "lasote")
@@ -37,8 +37,10 @@ def basic_test(self):
def errors_test(self):
self.assertRaises(ConanException, ConanFileReference.loads, "")
self.assertRaises(ConanException, ConanFileReference.loads, "opencv/2.4.10")
- self.assertRaises(ConanException, ConanFileReference.loads, "opencv/2.4.10 @ lasote")
+ self.assertRaises(ConanException, ConanFileReference.loads, "opencv/2.4.10@lasote")
self.assertRaises(ConanException, ConanFileReference.loads, "opencv??/2.4.10@laso/testing")
+ self.assertRaises(ConanException, ConanFileReference.loads, "opencv/2.4.10 @ laso/testing")
+ self.assertRaises(ConanException, ConanFileReference.loads, "o/2.4.10@laso/testing")
self.assertRaises(ConanException, ConanFileReference.loads, ".opencv/2.4.10@lasote/testing")
self.assertRaises(ConanException, ConanFileReference.loads, "o/2.4.10 @ lasote/testing")
self.assertRaises(ConanException, ConanFileReference.loads, "lib/1.0@user&surname/channel")
diff --git a/conans/test/unittests/model/version_ranges_test.py b/conans/test/unittests/model/version_ranges_test.py
index 6790896b78f..9917d68a3e7 100644
--- a/conans/test/unittests/model/version_ranges_test.py
+++ b/conans/test/unittests/model/version_ranges_test.py
@@ -287,8 +287,10 @@ class HelloConan(ConanFile):
# ranges
('"Say/[<=1.2]@myuser/testing"', "1.2.1", False, False),
('"Say/[>=0.2,<=1.0]@myuser/testing"', "0.3", False, True),
+ ('"Say/[>=0.2 <=1.0]@myuser/testing"', "0.3", False, True),
('("Say/[<=1.2]@myuser/testing", "override")', "1.2.1", True, False),
('("Say/[>=0.2,<=1.0]@myuser/testing", "override")', "0.3", True, True),
+ ('("Say/[>=0.2 <=1.0]@myuser/testing", "override")', "0.3", True, True),
])
def transitive_test(self, version_range, solution, override, valid):
hello_text = hello_content % ">0.1, <1"
|
{
"difficulty": "low",
"estimated_review_effort": 2,
"problem_domain": "Bug Fixes"
}
|
frappe__erpnext-33874@74fab53
|
frappe/erpnext
|
Python
| 33,874
|
feat: Support for Alternative Items in Quotation
|
> **Note:** Not to be confused with DocType "Item Alternative"
Documentation Draft: https://docs.erpnext.com/docs/v14/user/manual/en/selling/quotation/edit-wiki?wiki_page_patch=b1a88d79a9
Please check https://github.com/frappe/erpnext/issues/33609 to understand the issue
<details>
<summary>Impact on Quotation</summary>
> **Note:** It is important to maintain the right order, i.e. Alternative Item rows must follow the non-alternative item row they belong to. Grouping is done on this basis.
> <img width="350" alt="Screenshot 2023-02-06 at 4 38 13 PM" src="https://user-images.githubusercontent.com/25857446/216957147-a47d4c5c-4390-477a-9abb-a7d0ed3bb76e.png">
- Users can add alternative items to the Quotation to give the customer a choice between the quoted variants
- To add an alternative item: add a row and check **Is Alternative**
<img width="966" alt="Screenshot 2023-02-06 at 4 31 18 PM" src="https://user-images.githubusercontent.com/25857446/216955223-6f564c53-3abf-4455-973d-a035f254bc06.png">
- **Totals**: Only original (non-alternative) items are considered (see the sketch after this section)
<img width="1039" alt="Screenshot 2023-01-31 at 5 02 44 PM" src="https://user-images.githubusercontent.com/25857446/215748953-8f4846d1-12cd-4617-8216-1c0c89ec5431.png">
- **Taxes**: Applied on the same filtered set of non-alternative items
<img width="1034" alt="Screenshot 2023-01-31 at 5 03 45 PM" src="https://user-images.githubusercontent.com/25857446/215749139-91e4ca04-4aa7-4fcc-9d7e-9d91afaf7ecc.png">
</details>
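
The totals rule above can be sketched in a few lines of Python. The field names (`is_alternative`, `qty`, `rate`) mirror Quotation Item, but the function is only an illustration of the behaviour, not the ERPNext controller code:

```python
# Illustrative sketch: only non-alternative rows contribute to the quotation totals.
def quotation_totals(items, tax_rate=0.0):
    billable = [row for row in items if not row.get("is_alternative")]
    net_total = sum(row["qty"] * row["rate"] for row in billable)
    return net_total, net_total * (1 + tax_rate)

items = [
    {"item_code": "_Test Simple Item 1", "qty": 1, "rate": 100},
    {"item_code": "_Test Alt 1", "qty": 1, "rate": 120, "is_alternative": True},
    {"item_code": "_Test Alt 2", "qty": 1, "rate": 110, "is_alternative": True},
    {"item_code": "_Test Simple Item 2", "qty": 1, "rate": 200},
]

net, grand = quotation_totals(items, tax_rate=0.10)
print(net, grand)  # 300 330.0 (alternatives are ignored, as in the screenshots above)
```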
<details>
<summary>Mapping to Sales Order</summary>
Consider:
<img width="1026" alt="Screenshot 2023-01-31 at 5 05 31 PM" src="https://user-images.githubusercontent.com/25857446/215749415-0d1c179e-8543-41af-a1f4-755f9811bd37.png">
- The user is asked to select one item from each original/alternatives set to map into the Sales Order

- Once one of the alternatives is selected, that row is mapped and the rest of the set is skipped. The simple items are mapped as usual (sketched after this section):

</details>
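
A minimal sketch of the mapping rule described above, assuming the dialog returns the names of the selected rows. `is_alternative`, `has_alternative_item` and `name` mirror the Quotation Item fields; the function is an illustration, not the mapper ERPNext uses:

```python
# Illustrative sketch: decide which Quotation rows are carried over to the Sales Order.
def rows_to_map(items, selected_names):
    mapped = []
    for row in items:
        in_alternative_set = row.get("is_alternative") or row.get("has_alternative_item")
        if in_alternative_set and row["name"] not in selected_names:
            continue  # not the variant the customer chose
        if row.get("qty", 0) > 0:
            mapped.append(row)
    return mapped

items = [
    {"name": "row1", "item_code": "_Test Simple Item 1", "qty": 1, "has_alternative_item": True},
    {"name": "row2", "item_code": "_Test Alt 1", "qty": 1, "is_alternative": True},
    {"name": "row3", "item_code": "_Test Alt 2", "qty": 1, "is_alternative": True},
    {"name": "row4", "item_code": "_Test Simple Item 2", "qty": 1},
]

# The user picked "_Test Alt 2" in the dialog:
print([r["item_code"] for r in rows_to_map(items, {"row3"})])
# ['_Test Alt 2', '_Test Simple Item 2']
```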
<details>
<summary>Use Cases</summary>
- An Item Code has alternatives that are different Items
<img width="1014" alt="Screenshot 2023-02-06 at 4 51 28 PM" src="https://user-images.githubusercontent.com/25857446/216959057-4718ee10-58e1-4167-9c5e-71bb6b6081a7.png">
- A Service Item has alternatives and non-alternatives sharing the same Item Code. Here the service costing is dynamic and subjective, so the **same code** is used with a different Item Name/Description in the Quotation.
<img width="1027" alt="Screenshot 2023-02-06 at 4 53 46 PM" src="https://user-images.githubusercontent.com/25857446/216959394-6ccc4ee2-b412-4a5b-b3b5-5e2ff7e194e8.png">
</details>
### Todo
- [x] Partially ordered status
- [x] Tests
|
2023-01-30T10:58:29Z
|
Alternative line items in Quotation
Summary: I want to offer multiple variations of an item or service at different prices in the same **Quotation**. Only the main item or service shall be included in the total. The customer can decide to order any one of the proposed variations.
Let's say I'm selling flights. I want to tell my customer the prices for Economy, Business and First class. The Quotation should look like this:
Sr | Item Name | Is Alternative | Qty | Rate | Amount
---|------------|:--------------:|----:|------:|---------:
1 | Economy | | 2 | 150 | 300
2 | Business | ✓ | 2 | 200 | 400
3 | First Class | ✓ | 2 | 400 | 800
4 | **Total** | | | | **300**
In this case, the first line item is the main product offered. All following line items that have _Is Alternative_ checked are alternatives to the first line item. Note that an alternative line item may have the same item code as the main item (but a different description).
The amounts of alternative line items should be excluded from the total. Including them would make the value much higher than anything the customer will ever pay (1,500 instead of the maximum possible value of 800 for two first-class tickets).
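
A quick check of the requested behaviour with the flight example above (an illustrative sketch, not ERPNext code):

```python
# Flight quotation from the table above: only the main line item counts towards the total.
rows = [
    ("Economy", 2, 150, False),
    ("Business", 2, 200, True),     # alternative
    ("First Class", 2, 400, True),  # alternative
]
total = sum(qty * rate for _, qty, rate, is_alt in rows if not is_alt)
naive_total = sum(qty * rate for _, qty, rate, _ in rows)
print(total, naive_total)  # 300 1500 (the naive sum overstates what the customer would ever pay)
```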
When the **Quotation** is turned into an order, exactly one of these line items must be chosen. Alternative line items only apply to the **Quotation**; in **Sales Order** etc. all line items are final. For example, the **Sales Order** might look like this:
Sr | Item Name | Qty | Rate | Amount
---|------------|----:|------:|---------:
1 | Business | 2 | 200 | 400
2 | **Total** | | | **400**
This counts as a full order of all quoted items.
Internal reference: DEP-431
|
[
{
"body": "Summary: I want to offer multiple variations of an item or service at different prices in the same **Quotation**. Only the main item or service shall be included in the total. The customer can decide to order any one of the proposed variations.\r\n\r\nLet's say I'm selling flights. I want to tell my customer the prices for Economy, Business and First class. The Quotation should look like this:\r\n\r\nSr | Item Name | Is Alternative | Qty | Rate | Amount\r\n---|------------|:--------------:|----:|------:|---------:\r\n1 | Economy | | 2 | 150 | 300\r\n2 | Business | ✓ | 2 | 200 | 400\r\n3 | First Class | ✓ | 2 | 400 | 800\r\n4 | **Total** | | | | **300**\r\n\r\nThe first line item is the main product offered, in this case. All following line items that have _Is Alternative_ checked, are alternatives to the first line item. Note that an alternative line item may have the same item code as the main item (but a different description).\r\n\r\nThe amount of alternative line items should be excluded from the total. It does not make sense to include them in the total, because that would make the value much higher than anything the customer will ever pay (1.500 instead of the maximum possible value of 800 for two first class tickets).\r\n\r\nWhen the **Quotation** is turned into an order, exactly one of these positions must be chosen. The alternative line item only applies to **Quotation**. In **Sales Order** etc. all line items are final. For example, the **Sales Order** might look like this:\r\n\r\nSr | Item Name | Qty | Rate | Amount\r\n---|------------|----:|------:|---------:\r\n1 | Business | 2 | 200 | 400\r\n2 | **Total** | | | **400**\r\n\r\nThis counts as a full order of all quoted items.\r\n\r\nInternal reference: DEP-431",
"number": 33609,
"title": "Alternative line items in Quotation"
}
] |
a9920715ab902a276a25936cd1b98a427e7afdbd
|
{
"head_commit": "74fab53e281b42c2eb3436ae3145d820108b1c13",
"head_commit_message": "test: Alternative items in Quotation\n\n- Taxes and totals, mapping, back updation",
"patch_to_review": "diff --git a/erpnext/controllers/taxes_and_totals.py b/erpnext/controllers/taxes_and_totals.py\nindex 8c403aa9bfe2..1edd7bf85e1f 100644\n--- a/erpnext/controllers/taxes_and_totals.py\n+++ b/erpnext/controllers/taxes_and_totals.py\n@@ -24,11 +24,19 @@ class calculate_taxes_and_totals(object):\n \tdef __init__(self, doc: Document):\n \t\tself.doc = doc\n \t\tfrappe.flags.round_off_applicable_accounts = []\n+\n+\t\tself._items = self.filter_rows() if self.doc.doctype == \"Quotation\" else self.doc.get(\"items\")\n+\n \t\tget_round_off_applicable_accounts(self.doc.company, frappe.flags.round_off_applicable_accounts)\n \t\tself.calculate()\n \n+\tdef filter_rows(self):\n+\t\t\"\"\"Exclude rows, that do not fulfill the filter criteria, from totals computation.\"\"\"\n+\t\titems = list(filter(lambda item: not item.get(\"is_alternative\"), self.doc.get(\"items\")))\n+\t\treturn items\n+\n \tdef calculate(self):\n-\t\tif not len(self.doc.get(\"items\")):\n+\t\tif not len(self._items):\n \t\t\treturn\n \n \t\tself.discount_amount_applied = False\n@@ -70,7 +78,7 @@ def calculate_tax_withholding_net_total(self):\n \t\tif hasattr(self.doc, \"tax_withholding_net_total\"):\n \t\t\tsum_net_amount = 0\n \t\t\tsum_base_net_amount = 0\n-\t\t\tfor item in self.doc.get(\"items\"):\n+\t\t\tfor item in self._items:\n \t\t\t\tif hasattr(item, \"apply_tds\") and item.apply_tds:\n \t\t\t\t\tsum_net_amount += item.net_amount\n \t\t\t\t\tsum_base_net_amount += item.base_net_amount\n@@ -79,7 +87,7 @@ def calculate_tax_withholding_net_total(self):\n \t\t\tself.doc.base_tax_withholding_net_total = sum_base_net_amount\n \n \tdef validate_item_tax_template(self):\n-\t\tfor item in self.doc.get(\"items\"):\n+\t\tfor item in self._items:\n \t\t\tif item.item_code and item.get(\"item_tax_template\"):\n \t\t\t\titem_doc = frappe.get_cached_doc(\"Item\", item.item_code)\n \t\t\t\targs = {\n@@ -137,7 +145,7 @@ def calculate_item_values(self):\n \t\t\treturn\n \n \t\tif not self.discount_amount_applied:\n-\t\t\tfor item in self.doc.get(\"items\"):\n+\t\t\tfor item in self._items:\n \t\t\t\tself.doc.round_floats_in(item)\n \n \t\t\t\tif item.discount_percentage == 100:\n@@ -236,7 +244,7 @@ def determine_exclusive_rate(self):\n \t\tif not any(cint(tax.included_in_print_rate) for tax in self.doc.get(\"taxes\")):\n \t\t\treturn\n \n-\t\tfor item in self.doc.get(\"items\"):\n+\t\tfor item in self._items:\n \t\t\titem_tax_map = self._load_item_tax_rate(item.item_tax_rate)\n \t\t\tcumulated_tax_fraction = 0\n \t\t\ttotal_inclusive_tax_amount_per_qty = 0\n@@ -317,7 +325,7 @@ def calculate_net_total(self):\n \t\t\tself.doc.total\n \t\t) = self.doc.base_total = self.doc.net_total = self.doc.base_net_total = 0.0\n \n-\t\tfor item in self.doc.get(\"items\"):\n+\t\tfor item in self._items:\n \t\t\tself.doc.total += item.amount\n \t\t\tself.doc.total_qty += item.qty\n \t\t\tself.doc.base_total += item.base_amount\n@@ -354,7 +362,7 @@ def calculate_taxes(self):\n \t\t\t]\n \t\t)\n \n-\t\tfor n, item in enumerate(self.doc.get(\"items\")):\n+\t\tfor n, item in enumerate(self._items):\n \t\t\titem_tax_map = self._load_item_tax_rate(item.item_tax_rate)\n \t\t\tfor i, tax in enumerate(self.doc.get(\"taxes\")):\n \t\t\t\t# tax_amount represents the amount of tax for the current step\n@@ -363,7 +371,7 @@ def calculate_taxes(self):\n \t\t\t\t# Adjust divisional loss to the last item\n \t\t\t\tif tax.charge_type == \"Actual\":\n \t\t\t\t\tactual_tax_dict[tax.idx] -= current_tax_amount\n-\t\t\t\t\tif n == len(self.doc.get(\"items\")) 
- 1:\n+\t\t\t\t\tif n == len(self._items) - 1:\n \t\t\t\t\t\tcurrent_tax_amount += actual_tax_dict[tax.idx]\n \n \t\t\t\t# accumulate tax amount into tax.tax_amount\n@@ -391,7 +399,7 @@ def calculate_taxes(self):\n \t\t\t\t\t)\n \n \t\t\t\t# set precision in the last item iteration\n-\t\t\t\tif n == len(self.doc.get(\"items\")) - 1:\n+\t\t\t\tif n == len(self._items) - 1:\n \t\t\t\t\tself.round_off_totals(tax)\n \t\t\t\t\tself._set_in_company_currency(tax, [\"tax_amount\", \"tax_amount_after_discount_amount\"])\n \n@@ -570,7 +578,7 @@ def calculate_totals(self):\n \tdef calculate_total_net_weight(self):\n \t\tif self.doc.meta.get_field(\"total_net_weight\"):\n \t\t\tself.doc.total_net_weight = 0.0\n-\t\t\tfor d in self.doc.items:\n+\t\t\tfor d in self._items:\n \t\t\t\tif d.total_weight:\n \t\t\t\t\tself.doc.total_net_weight += d.total_weight\n \n@@ -630,7 +638,7 @@ def apply_discount_amount(self):\n \n \t\t\tif total_for_discount_amount:\n \t\t\t\t# calculate item amount after Discount Amount\n-\t\t\t\tfor i, item in enumerate(self.doc.get(\"items\")):\n+\t\t\t\tfor i, item in enumerate(self._items):\n \t\t\t\t\tdistributed_amount = (\n \t\t\t\t\t\tflt(self.doc.discount_amount) * item.net_amount / total_for_discount_amount\n \t\t\t\t\t)\n@@ -643,7 +651,7 @@ def apply_discount_amount(self):\n \t\t\t\t\t\tself.doc.apply_discount_on == \"Net Total\"\n \t\t\t\t\t\tor not taxes\n \t\t\t\t\t\tor total_for_discount_amount == self.doc.net_total\n-\t\t\t\t\t) and i == len(self.doc.get(\"items\")) - 1:\n+\t\t\t\t\t) and i == len(self._items) - 1:\n \t\t\t\t\t\tdiscount_amount_loss = flt(\n \t\t\t\t\t\t\tself.doc.net_total - net_total - self.doc.discount_amount, self.doc.precision(\"net_total\")\n \t\t\t\t\t\t)\ndiff --git a/erpnext/public/js/controllers/taxes_and_totals.js b/erpnext/public/js/controllers/taxes_and_totals.js\nindex a87c3ec9514b..623b338e181c 100644\n--- a/erpnext/public/js/controllers/taxes_and_totals.js\n+++ b/erpnext/public/js/controllers/taxes_and_totals.js\n@@ -91,6 +91,9 @@ erpnext.taxes_and_totals = class TaxesAndTotals extends erpnext.payments {\n \t}\n \n \t_calculate_taxes_and_totals() {\n+\t\tconst is_quotation = this.frm.doc.doctype == \"Quotation\";\n+\t\tthis.frm.doc._items = is_quotation ? this.filtered_items() : this.frm.doc.items;\n+\n \t\tthis.validate_conversion_rate();\n \t\tthis.calculate_item_values();\n \t\tthis.initialize_taxes();\n@@ -122,7 +125,7 @@ erpnext.taxes_and_totals = class TaxesAndTotals extends erpnext.payments {\n \tcalculate_item_values() {\n \t\tvar me = this;\n \t\tif (!this.discount_amount_applied) {\n-\t\t\tfor (const item of this.frm.doc.items || []) {\n+\t\t\tfor (const item of this.frm.doc._items || []) {\n \t\t\t\tfrappe.model.round_floats_in(item);\n \t\t\t\titem.net_rate = item.rate;\n \t\t\t\titem.qty = item.qty === undefined ? (me.frm.doc.is_return ? 
-1 : 1) : item.qty;\n@@ -206,7 +209,7 @@ erpnext.taxes_and_totals = class TaxesAndTotals extends erpnext.payments {\n \t\t});\n \t\tif(has_inclusive_tax==false) return;\n \n-\t\t$.each(me.frm.doc[\"items\"] || [], function(n, item) {\n+\t\t$.each(me.frm.doc._items || [], function(n, item) {\n \t\t\tvar item_tax_map = me._load_item_tax_rate(item.item_tax_rate);\n \t\t\tvar cumulated_tax_fraction = 0.0;\n \t\t\tvar total_inclusive_tax_amount_per_qty = 0;\n@@ -277,7 +280,7 @@ erpnext.taxes_and_totals = class TaxesAndTotals extends erpnext.payments {\n \t\tvar me = this;\n \t\tthis.frm.doc.total_qty = this.frm.doc.total = this.frm.doc.base_total = this.frm.doc.net_total = this.frm.doc.base_net_total = 0.0;\n \n-\t\t$.each(this.frm.doc[\"items\"] || [], function(i, item) {\n+\t\t$.each(this.frm.doc._items || [], function(i, item) {\n \t\t\tme.frm.doc.total += item.amount;\n \t\t\tme.frm.doc.total_qty += item.qty;\n \t\t\tme.frm.doc.base_total += item.base_amount;\n@@ -330,7 +333,7 @@ erpnext.taxes_and_totals = class TaxesAndTotals extends erpnext.payments {\n \t\t\t}\n \t\t});\n \n-\t\t$.each(this.frm.doc[\"items\"] || [], function(n, item) {\n+\t\t$.each(this.frm.doc._items || [], function(n, item) {\n \t\t\tvar item_tax_map = me._load_item_tax_rate(item.item_tax_rate);\n \t\t\t$.each(me.frm.doc[\"taxes\"] || [], function(i, tax) {\n \t\t\t\t// tax_amount represents the amount of tax for the current step\n@@ -339,7 +342,7 @@ erpnext.taxes_and_totals = class TaxesAndTotals extends erpnext.payments {\n \t\t\t\t// Adjust divisional loss to the last item\n \t\t\t\tif (tax.charge_type == \"Actual\") {\n \t\t\t\t\tactual_tax_dict[tax.idx] -= current_tax_amount;\n-\t\t\t\t\tif (n == me.frm.doc[\"items\"].length - 1) {\n+\t\t\t\t\tif (n == me.frm.doc._items.length - 1) {\n \t\t\t\t\t\tcurrent_tax_amount += actual_tax_dict[tax.idx];\n \t\t\t\t\t}\n \t\t\t\t}\n@@ -376,7 +379,7 @@ erpnext.taxes_and_totals = class TaxesAndTotals extends erpnext.payments {\n \t\t\t\t}\n \n \t\t\t\t// set precision in the last item iteration\n-\t\t\t\tif (n == me.frm.doc[\"items\"].length - 1) {\n+\t\t\t\tif (n == me.frm.doc._items.length - 1) {\n \t\t\t\t\tme.round_off_totals(tax);\n \t\t\t\t\tme.set_in_company_currency(tax,\n \t\t\t\t\t\t[\"tax_amount\", \"tax_amount_after_discount_amount\"]);\n@@ -599,10 +602,11 @@ erpnext.taxes_and_totals = class TaxesAndTotals extends erpnext.payments {\n \n \t_cleanup() {\n \t\tthis.frm.doc.base_in_words = this.frm.doc.in_words = \"\";\n+\t\tlet items = this.frm.doc._items;\n \n-\t\tif(this.frm.doc[\"items\"] && this.frm.doc[\"items\"].length) {\n-\t\t\tif(!frappe.meta.get_docfield(this.frm.doc[\"items\"][0].doctype, \"item_tax_amount\", this.frm.doctype)) {\n-\t\t\t\t$.each(this.frm.doc[\"items\"] || [], function(i, item) {\n+\t\tif(items && items.length) {\n+\t\t\tif(!frappe.meta.get_docfield(items[0].doctype, \"item_tax_amount\", this.frm.doctype)) {\n+\t\t\t\t$.each(items || [], function(i, item) {\n \t\t\t\t\tdelete item[\"item_tax_amount\"];\n \t\t\t\t});\n \t\t\t}\n@@ -655,7 +659,7 @@ erpnext.taxes_and_totals = class TaxesAndTotals extends erpnext.payments {\n \t\t\tvar net_total = 0;\n \t\t\t// calculate item amount after Discount Amount\n \t\t\tif (total_for_discount_amount) {\n-\t\t\t\t$.each(this.frm.doc[\"items\"] || [], function(i, item) {\n+\t\t\t\t$.each(this.frm.doc._items || [], function(i, item) {\n \t\t\t\t\tdistributed_amount = flt(me.frm.doc.discount_amount) * item.net_amount / total_for_discount_amount;\n \t\t\t\t\titem.net_amount = flt(item.net_amount - 
distributed_amount,\n \t\t\t\t\t\tprecision(\"base_amount\", item));\n@@ -663,7 +667,7 @@ erpnext.taxes_and_totals = class TaxesAndTotals extends erpnext.payments {\n \n \t\t\t\t\t// discount amount rounding loss adjustment if no taxes\n \t\t\t\t\tif ((!(me.frm.doc.taxes || []).length || total_for_discount_amount==me.frm.doc.net_total || (me.frm.doc.apply_discount_on == \"Net Total\"))\n-\t\t\t\t\t\t\t&& i == (me.frm.doc.items || []).length - 1) {\n+\t\t\t\t\t\t\t&& i == (me.frm.doc._items || []).length - 1) {\n \t\t\t\t\t\tvar discount_amount_loss = flt(me.frm.doc.net_total - net_total\n \t\t\t\t\t\t\t- me.frm.doc.discount_amount, precision(\"net_total\"));\n \t\t\t\t\t\titem.net_amount = flt(item.net_amount + discount_amount_loss,\n@@ -892,4 +896,8 @@ erpnext.taxes_and_totals = class TaxesAndTotals extends erpnext.payments {\n \t\t}\n \n \t}\n+\n+\tfiltered_items() {\n+\t\treturn this.frm.doc.items.filter(item => !item[\"is_alternative\"]);\n+\t}\n };\ndiff --git a/erpnext/selling/doctype/quotation/quotation.js b/erpnext/selling/doctype/quotation/quotation.js\nindex b348bd35754f..f77dce8f1abd 100644\n--- a/erpnext/selling/doctype/quotation/quotation.js\n+++ b/erpnext/selling/doctype/quotation/quotation.js\n@@ -90,7 +90,7 @@ erpnext.selling.QuotationController = class QuotationController extends erpnext.\n \t\t\t\t|| frappe.datetime.get_diff(doc.valid_till, frappe.datetime.get_today()) >= 0) {\n \t\t\t\t\tthis.frm.add_custom_button(\n \t\t\t\t\t\t__(\"Sales Order\"),\n-\t\t\t\t\t\tthis.frm.cscript[\"Make Sales Order\"],\n+\t\t\t\t\t\t() => this.make_sales_order(),\n \t\t\t\t\t\t__(\"Create\")\n \t\t\t\t\t);\n \t\t\t\t}\n@@ -145,6 +145,20 @@ erpnext.selling.QuotationController = class QuotationController extends erpnext.\n \n \t}\n \n+\tmake_sales_order() {\n+\t\tvar me = this;\n+\n+\t\tlet has_alternative_item = this.frm.doc.items.some((item) => item.is_alternative);\n+\t\tif (has_alternative_item) {\n+\t\t\tthis.show_alternative_items_dialog();\n+\t\t} else {\n+\t\t\tfrappe.model.open_mapped_doc({\n+\t\t\t\tmethod: \"erpnext.selling.doctype.quotation.quotation.make_sales_order\",\n+\t\t\t\tfrm: me.frm\n+\t\t\t});\n+\t\t}\n+\t}\n+\n \tset_dynamic_field_label(){\n \t\tif (this.frm.doc.quotation_to == \"Customer\")\n \t\t{\n@@ -220,17 +234,111 @@ erpnext.selling.QuotationController = class QuotationController extends erpnext.\n \t\t\t}\n \t\t})\n \t}\n+\n+\tshow_alternative_items_dialog() {\n+\t\tvar me = this;\n+\n+\t\tconst table_fields = [\n+\t\t{\n+\t\t\tfieldtype:\"Data\",\n+\t\t\tfieldname:\"name\",\n+\t\t\tlabel: __(\"Name\"),\n+\t\t\tread_only: 1,\n+\t\t},\n+\t\t{\n+\t\t\tfieldtype:\"Link\",\n+\t\t\tfieldname:\"item_code\",\n+\t\t\toptions: \"Item\",\n+\t\t\tlabel: __(\"Item Code\"),\n+\t\t\tread_only: 1,\n+\t\t\tin_list_view: 1,\n+\t\t\tcolumns: 2,\n+\t\t\tformatter: (value, df, options, doc) => {\n+\t\t\t\treturn doc.is_alternative ? 
`<span class=\"indicator yellow\">${value}</span>` : value;\n+\t\t\t}\n+\t\t},\n+\t\t{\n+\t\t\tfieldtype:\"Data\",\n+\t\t\tfieldname:\"description\",\n+\t\t\tlabel: __(\"Description\"),\n+\t\t\tin_list_view: 1,\n+\t\t\tread_only: 1,\n+\t\t},\n+\t\t{\n+\t\t\tfieldtype:\"Currency\",\n+\t\t\tfieldname:\"amount\",\n+\t\t\tlabel: __(\"Amount\"),\n+\t\t\toptions: \"currency\",\n+\t\t\tin_list_view: 1,\n+\t\t\tread_only: 1,\n+\t\t},\n+\t\t{\n+\t\t\tfieldtype:\"Check\",\n+\t\t\tfieldname:\"is_alternative\",\n+\t\t\tlabel: __(\"Is Alternative\"),\n+\t\t\tread_only: 1,\n+\t\t}];\n+\n+\n+\t\tthis.data = this.frm.doc.items.filter(\n+\t\t\t(item) => item.is_alternative || item.has_alternative_item\n+\t\t).map((item) => {\n+\t\t\treturn {\n+\t\t\t\t\"name\": item.name,\n+\t\t\t\t\"item_code\": item.item_code,\n+\t\t\t\t\"description\": item.description,\n+\t\t\t\t\"amount\": item.amount,\n+\t\t\t\t\"is_alternative\": item.is_alternative,\n+\t\t\t}\n+\t\t});\n+\n+\t\tconst dialog = new frappe.ui.Dialog({\n+\t\t\ttitle: __(\"Select Alternative Items for Sales Order\"),\n+\t\t\tfields: [\n+\t\t\t\t{\n+\t\t\t\t\tfieldname: \"info\",\n+\t\t\t\t\tfieldtype: \"HTML\",\n+\t\t\t\t\tread_only: 1\n+\t\t\t\t},\n+\t\t\t\t{\n+\t\t\t\t\tfieldname: \"alternative_items\",\n+\t\t\t\t\tfieldtype: \"Table\",\n+\t\t\t\t\tcannot_add_rows: true,\n+\t\t\t\t\tin_place_edit: true,\n+\t\t\t\t\treqd: 1,\n+\t\t\t\t\tdata: this.data,\n+\t\t\t\t\tdescription: __(\"Select an item from each set to be used in the Sales Order.\"),\n+\t\t\t\t\tget_data: () => {\n+\t\t\t\t\t\treturn this.data;\n+\t\t\t\t\t},\n+\t\t\t\t\tfields: table_fields\n+\t\t\t\t},\n+\t\t\t],\n+\t\t\tprimary_action: function() {\n+\t\t\t\tfrappe.model.open_mapped_doc({\n+\t\t\t\t\tmethod: \"erpnext.selling.doctype.quotation.quotation.make_sales_order\",\n+\t\t\t\t\tfrm: me.frm,\n+\t\t\t\t\targs: {\n+\t\t\t\t\t\tselected_items: dialog.fields_dict.alternative_items.grid.get_selected_children()\n+\t\t\t\t\t}\n+\t\t\t\t});\n+\t\t\t\tdialog.hide();\n+\t\t\t},\n+\t\t\tprimary_action_label: __('Continue')\n+\t\t});\n+\n+\t\tdialog.fields_dict.info.$wrapper.html(\n+\t\t\t`<p class=\"small text-muted\">\n+\t\t\t\t<span class=\"indicator yellow\"></span>\n+\t\t\t\tAlternative Items\n+\t\t\t</p>`\n+\t\t)\n+\t\tdialog.show();\n+\t}\n };\n \n cur_frm.script_manager.make(erpnext.selling.QuotationController);\n \n-cur_frm.cscript['Make Sales Order'] = function() {\n-\tfrappe.model.open_mapped_doc({\n-\t\tmethod: \"erpnext.selling.doctype.quotation.quotation.make_sales_order\",\n-\t\tfrm: cur_frm\n-\t})\n-}\n-\n frappe.ui.form.on(\"Quotation Item\", \"items_on_form_rendered\", \"packed_items_on_form_rendered\", function(frm, cdt, cdn) {\n \t// enable tax_amount field if Actual\n })\ndiff --git a/erpnext/selling/doctype/quotation/quotation.py b/erpnext/selling/doctype/quotation/quotation.py\nindex 063813b2dc70..185f63c345e0 100644\n--- a/erpnext/selling/doctype/quotation/quotation.py\n+++ b/erpnext/selling/doctype/quotation/quotation.py\n@@ -35,6 +35,9 @@ def validate(self):\n \n \t\tmake_packing_list(self)\n \n+\tdef before_submit(self):\n+\t\tself.set_has_alternative_item()\n+\n \tdef validate_valid_till(self):\n \t\tif self.valid_till and getdate(self.valid_till) < getdate(self.transaction_date):\n \t\t\tfrappe.throw(_(\"Valid till date cannot be before transaction date\"))\n@@ -59,7 +62,18 @@ def validate_shopping_cart_items(self):\n \t\t\t\t\ttitle=_(\"Unpublished Item\"),\n \t\t\t\t)\n \n+\tdef set_has_alternative_item(self):\n+\t\t\"\"\"Mark 'Has Alternative Item' for 
rows.\"\"\"\n+\t\tif not any(row.is_alternative for row in self.get(\"items\")):\n+\t\t\treturn\n+\n+\t\titems_with_alternatives = self.get_rows_with_alternatives()\n+\t\tfor row in self.get(\"items\"):\n+\t\t\tif not row.is_alternative and row.name in items_with_alternatives:\n+\t\t\t\trow.has_alternative_item = 1\n+\n \tdef get_ordered_status(self):\n+\t\tstatus = \"Open\"\n \t\tordered_items = frappe._dict(\n \t\t\tfrappe.db.get_all(\n \t\t\t\t\"Sales Order Item\",\n@@ -70,16 +84,40 @@ def get_ordered_status(self):\n \t\t\t)\n \t\t)\n \n-\t\tstatus = \"Open\"\n-\t\tif ordered_items:\n-\t\t\tstatus = \"Ordered\"\n+\t\tif not ordered_items:\n+\t\t\treturn status\n \n-\t\t\tfor item in self.get(\"items\"):\n-\t\t\t\tif item.qty > ordered_items.get(item.item_code, 0.0):\n-\t\t\t\t\tstatus = \"Partially Ordered\"\n+\t\thas_alternatives = any(row.is_alternative for row in self.get(\"items\"))\n+\t\tself._items = self.get_valid_items() if has_alternatives else self.get(\"items\")\n+\n+\t\tif any(row.qty > ordered_items.get(row.item_code, 0.0) for row in self._items):\n+\t\t\tstatus = \"Partially Ordered\"\n+\t\telse:\n+\t\t\tstatus = \"Ordered\"\n \n \t\treturn status\n \n+\tdef get_valid_items(self):\n+\t\t\"\"\"\n+\t\tFilters out items in an alternatives set that were not ordered.\n+\t\t\"\"\"\n+\n+\t\tdef is_in_sales_order(row):\n+\t\t\tin_sales_order = bool(\n+\t\t\t\tfrappe.db.exists(\n+\t\t\t\t\t\"Sales Order Item\", {\"quotation_item\": row.name, \"item_code\": row.item_code, \"docstatus\": 1}\n+\t\t\t\t)\n+\t\t\t)\n+\t\t\treturn in_sales_order\n+\n+\t\tdef can_map(row) -> bool:\n+\t\t\tif row.is_alternative or row.has_alternative_item:\n+\t\t\t\treturn is_in_sales_order(row)\n+\n+\t\t\treturn True\n+\n+\t\treturn list(filter(can_map, self.get(\"items\")))\n+\n \tdef is_fully_ordered(self):\n \t\treturn self.get_ordered_status() == \"Ordered\"\n \n@@ -176,6 +214,22 @@ def print_other_charges(self, docname):\n \tdef on_recurring(self, reference_doc, auto_repeat_doc):\n \t\tself.valid_till = None\n \n+\tdef get_rows_with_alternatives(self):\n+\t\trows_with_alternatives = []\n+\t\ttable_length = len(self.get(\"items\"))\n+\n+\t\tfor idx, row in enumerate(self.get(\"items\")):\n+\t\t\tif row.is_alternative:\n+\t\t\t\tcontinue\n+\n+\t\t\tif idx == (table_length - 1):\n+\t\t\t\tbreak\n+\n+\t\t\tif self.get(\"items\")[idx + 1].is_alternative:\n+\t\t\t\trows_with_alternatives.append(row.name)\n+\n+\t\treturn rows_with_alternatives\n+\n \n def get_list_context(context=None):\n \tfrom erpnext.controllers.website_list_for_contact import get_list_context\n@@ -221,6 +275,8 @@ def _make_sales_order(source_name, target_doc=None, ignore_permissions=False):\n \t\t)\n \t)\n \n+\tselected_rows = [x.get(\"name\") for x in frappe.flags.get(\"args\", {}).get(\"selected_items\", [])]\n+\n \tdef set_missing_values(source, target):\n \t\tif customer:\n \t\t\ttarget.customer = customer.name\n@@ -244,6 +300,20 @@ def update_item(obj, target, source_parent):\n \t\t\ttarget.blanket_order = obj.blanket_order\n \t\t\ttarget.blanket_order_rate = obj.blanket_order_rate\n \n+\tdef can_map_row(item) -> bool:\n+\t\t\"\"\"\n+\t\tRow mapping from Quotation to Sales order:\n+\t\t1. Simple row: Map if adequate qty\n+\t\t2. Has Alternative Item: Map if no alternative was selected against original item and #1\n+\t\t3. 
Is Alternative Item: Map if alternative was selected against original item and #1\n+\t\t\"\"\"\n+\t\thas_qty = item.qty > 0\n+\t\tif not (item.is_alternative or item.has_alternative_item):\n+\t\t\t# No alternative items in doc or current row is a simple item (without alternatives)\n+\t\t\treturn has_qty\n+\n+\t\treturn (item.name in selected_rows) and has_qty\n+\n \tdoclist = get_mapped_doc(\n \t\t\"Quotation\",\n \t\tsource_name,\n@@ -253,7 +323,7 @@ def update_item(obj, target, source_parent):\n \t\t\t\t\"doctype\": \"Sales Order Item\",\n \t\t\t\t\"field_map\": {\"parent\": \"prevdoc_docname\", \"name\": \"quotation_item\"},\n \t\t\t\t\"postprocess\": update_item,\n-\t\t\t\t\"condition\": lambda doc: doc.qty > 0,\n+\t\t\t\t\"condition\": can_map_row,\n \t\t\t},\n \t\t\t\"Sales Taxes and Charges\": {\"doctype\": \"Sales Taxes and Charges\", \"add_if_empty\": True},\n \t\t\t\"Sales Team\": {\"doctype\": \"Sales Team\", \"add_if_empty\": True},\ndiff --git a/erpnext/selling/doctype/quotation/test_quotation.py b/erpnext/selling/doctype/quotation/test_quotation.py\nindex cdf5f5d00c58..67f6518657eb 100644\n--- a/erpnext/selling/doctype/quotation/test_quotation.py\n+++ b/erpnext/selling/doctype/quotation/test_quotation.py\n@@ -457,6 +457,139 @@ def test_packed_items_indices_are_reset_when_product_bundle_is_deleted_from_item\n \t\t\texpected_index = id + 1\n \t\t\tself.assertEqual(item.idx, expected_index)\n \n+\tdef test_alternative_items_with_stock_items(self):\n+\t\t\"\"\"\n+\t\tCheck if taxes & totals considers only non-alternative items with:\n+\t\t- One set of non-alternative & alternative items [first 3 rows]\n+\t\t- One simple stock item\n+\t\t\"\"\"\n+\t\tfrom erpnext.stock.doctype.item.test_item import make_item\n+\n+\t\titem_list = []\n+\t\tstock_items = {\n+\t\t\t\"_Test Simple Item 1\": 100,\n+\t\t\t\"_Test Alt 1\": 120,\n+\t\t\t\"_Test Alt 2\": 110,\n+\t\t\t\"_Test Simple Item 2\": 200,\n+\t\t}\n+\n+\t\tfor item, rate in stock_items.items():\n+\t\t\tmake_item(item, {\"is_stock_item\": 1})\n+\t\t\titem_list.append(\n+\t\t\t\t{\n+\t\t\t\t\t\"item_code\": item,\n+\t\t\t\t\t\"qty\": 1,\n+\t\t\t\t\t\"rate\": rate,\n+\t\t\t\t\t\"is_alternative\": bool(\"Alt\" in item),\n+\t\t\t\t}\n+\t\t\t)\n+\n+\t\tquotation = make_quotation(item_list=item_list, do_not_submit=1)\n+\t\tquotation.append(\n+\t\t\t\"taxes\",\n+\t\t\t{\n+\t\t\t\t\"account_head\": \"_Test Account VAT - _TC\",\n+\t\t\t\t\"charge_type\": \"On Net Total\",\n+\t\t\t\t\"cost_center\": \"_Test Cost Center - _TC\",\n+\t\t\t\t\"description\": \"VAT\",\n+\t\t\t\t\"doctype\": \"Sales Taxes and Charges\",\n+\t\t\t\t\"rate\": 10,\n+\t\t\t},\n+\t\t)\n+\t\tquotation.submit()\n+\n+\t\tself.assertEqual(quotation.net_total, 300)\n+\t\tself.assertEqual(quotation.grand_total, 330)\n+\n+\tdef test_alternative_items_with_service_items(self):\n+\t\t\"\"\"\n+\t\tCheck if taxes & totals considers only non-alternative items with:\n+\t\t- One set of non-alternative & alternative service items [first 3 rows]\n+\t\t- One simple non-alternative service item\n+\t\tAll having the same item code and unique item name/description due to\n+\t\tdynamic services\n+\t\t\"\"\"\n+\t\tfrom erpnext.stock.doctype.item.test_item import make_item\n+\n+\t\titem_list = []\n+\t\tservice_items = {\n+\t\t\t\"Tiling with Standard Tiles\": 100,\n+\t\t\t\"Alt Tiling with Durable Tiles\": 150,\n+\t\t\t\"Alt Tiling with Premium Tiles\": 180,\n+\t\t\t\"False Ceiling with Material #234\": 190,\n+\t\t}\n+\n+\t\tmake_item(\"_Test Dynamic Service Item\", {\"is_stock_item\": 
0})\n+\n+\t\tfor name, rate in service_items.items():\n+\t\t\titem_list.append(\n+\t\t\t\t{\n+\t\t\t\t\t\"item_code\": \"_Test Dynamic Service Item\",\n+\t\t\t\t\t\"item_name\": name,\n+\t\t\t\t\t\"description\": name,\n+\t\t\t\t\t\"qty\": 1,\n+\t\t\t\t\t\"rate\": rate,\n+\t\t\t\t\t\"is_alternative\": bool(\"Alt\" in name),\n+\t\t\t\t}\n+\t\t\t)\n+\n+\t\tquotation = make_quotation(item_list=item_list, do_not_submit=1)\n+\t\tquotation.append(\n+\t\t\t\"taxes\",\n+\t\t\t{\n+\t\t\t\t\"account_head\": \"_Test Account VAT - _TC\",\n+\t\t\t\t\"charge_type\": \"On Net Total\",\n+\t\t\t\t\"cost_center\": \"_Test Cost Center - _TC\",\n+\t\t\t\t\"description\": \"VAT\",\n+\t\t\t\t\"doctype\": \"Sales Taxes and Charges\",\n+\t\t\t\t\"rate\": 10,\n+\t\t\t},\n+\t\t)\n+\t\tquotation.submit()\n+\n+\t\tself.assertEqual(quotation.net_total, 290)\n+\t\tself.assertEqual(quotation.grand_total, 319)\n+\n+\tdef test_alternative_items_sales_order_mapping_with_stock_items(self):\n+\t\tfrom erpnext.selling.doctype.quotation.quotation import make_sales_order\n+\t\tfrom erpnext.stock.doctype.item.test_item import make_item\n+\n+\t\tfrappe.flags.args = frappe._dict()\n+\t\titem_list = []\n+\t\tstock_items = {\n+\t\t\t\"_Test Simple Item 1\": 100,\n+\t\t\t\"_Test Alt 1\": 120,\n+\t\t\t\"_Test Alt 2\": 110,\n+\t\t\t\"_Test Simple Item 2\": 200,\n+\t\t}\n+\n+\t\tfor item, rate in stock_items.items():\n+\t\t\tmake_item(item, {\"is_stock_item\": 1})\n+\t\t\titem_list.append(\n+\t\t\t\t{\n+\t\t\t\t\t\"item_code\": item,\n+\t\t\t\t\t\"qty\": 1,\n+\t\t\t\t\t\"rate\": rate,\n+\t\t\t\t\t\"is_alternative\": bool(\"Alt\" in item),\n+\t\t\t\t\t\"warehouse\": \"_Test Warehouse - _TC\",\n+\t\t\t\t}\n+\t\t\t)\n+\n+\t\tquotation = make_quotation(item_list=item_list)\n+\n+\t\tfrappe.flags.args.selected_items = [quotation.items[2]]\n+\t\tsales_order = make_sales_order(quotation.name)\n+\t\tsales_order.delivery_date = add_days(sales_order.transaction_date, 10)\n+\t\tsales_order.save()\n+\n+\t\tself.assertEqual(sales_order.items[0].item_code, \"_Test Alt 2\")\n+\t\tself.assertEqual(sales_order.items[1].item_code, \"_Test Simple Item 2\")\n+\t\tself.assertEqual(sales_order.net_total, 310)\n+\n+\t\tsales_order.submit()\n+\t\tquotation.reload()\n+\t\tself.assertEqual(quotation.status, \"Ordered\")\n+\n \n test_records = frappe.get_test_records(\"Quotation\")\n \ndiff --git a/erpnext/selling/doctype/quotation_item/quotation_item.json b/erpnext/selling/doctype/quotation_item/quotation_item.json\nindex ca7dfd23378f..f2aabc524004 100644\n--- a/erpnext/selling/doctype/quotation_item/quotation_item.json\n+++ b/erpnext/selling/doctype/quotation_item/quotation_item.json\n@@ -49,6 +49,8 @@\n \"pricing_rules\",\n \"stock_uom_rate\",\n \"is_free_item\",\n+ \"is_alternative\",\n+ \"has_alternative_item\",\n \"section_break_43\",\n \"valuation_rate\",\n \"column_break_45\",\n@@ -643,12 +645,28 @@\n \"no_copy\": 1,\n \"options\": \"currency\",\n \"read_only\": 1\n+ },\n+ {\n+ \"default\": \"0\",\n+ \"fieldname\": \"is_alternative\",\n+ \"fieldtype\": \"Check\",\n+ \"label\": \"Is Alternative\",\n+ \"print_hide\": 1\n+ },\n+ {\n+ \"default\": \"0\",\n+ \"fieldname\": \"has_alternative_item\",\n+ \"fieldtype\": \"Check\",\n+ \"hidden\": 1,\n+ \"label\": \"Has Alternative Item\",\n+ \"print_hide\": 1,\n+ \"read_only\": 1\n }\n ],\n \"idx\": 1,\n \"istable\": 1,\n \"links\": [],\n- \"modified\": \"2022-12-25 02:49:53.926625\",\n+ \"modified\": \"2023-02-06 11:00:07.042364\",\n \"modified_by\": \"Administrator\",\n \"module\": \"Selling\",\n \"name\": 
\"Quotation Item\",\n@@ -656,5 +674,6 @@\n \"permissions\": [],\n \"sort_field\": \"modified\",\n \"sort_order\": \"DESC\",\n+ \"states\": [],\n \"track_changes\": 1\n }\n\\ No newline at end of file\n"
}
|
[
{
"diff_hunk": "@@ -220,17 +234,111 @@ erpnext.selling.QuotationController = class QuotationController extends erpnext.\n \t\t\t}\n \t\t})\n \t}\n+\n+\tshow_alternative_items_dialog() {\n+\t\tvar me = this;",
"line": null,
"original_line": 239,
"original_start_line": null,
"path": "erpnext/selling/doctype/quotation/quotation.js",
"start_line": null,
"text": "@user1:\n```suggestion\r\n\t\tlet me = this;\r\n```"
}
] |
47ab26dedfa2ceb3c3b126375ead68aa993ca583
|
diff --git a/erpnext/controllers/taxes_and_totals.py b/erpnext/controllers/taxes_and_totals.py
index 8c403aa9bfe2..1edd7bf85e1f 100644
--- a/erpnext/controllers/taxes_and_totals.py
+++ b/erpnext/controllers/taxes_and_totals.py
@@ -24,11 +24,19 @@ class calculate_taxes_and_totals(object):
def __init__(self, doc: Document):
self.doc = doc
frappe.flags.round_off_applicable_accounts = []
+
+ self._items = self.filter_rows() if self.doc.doctype == "Quotation" else self.doc.get("items")
+
get_round_off_applicable_accounts(self.doc.company, frappe.flags.round_off_applicable_accounts)
self.calculate()
+ def filter_rows(self):
+ """Exclude rows, that do not fulfill the filter criteria, from totals computation."""
+ items = list(filter(lambda item: not item.get("is_alternative"), self.doc.get("items")))
+ return items
+
def calculate(self):
- if not len(self.doc.get("items")):
+ if not len(self._items):
return
self.discount_amount_applied = False
@@ -70,7 +78,7 @@ def calculate_tax_withholding_net_total(self):
if hasattr(self.doc, "tax_withholding_net_total"):
sum_net_amount = 0
sum_base_net_amount = 0
- for item in self.doc.get("items"):
+ for item in self._items:
if hasattr(item, "apply_tds") and item.apply_tds:
sum_net_amount += item.net_amount
sum_base_net_amount += item.base_net_amount
@@ -79,7 +87,7 @@ def calculate_tax_withholding_net_total(self):
self.doc.base_tax_withholding_net_total = sum_base_net_amount
def validate_item_tax_template(self):
- for item in self.doc.get("items"):
+ for item in self._items:
if item.item_code and item.get("item_tax_template"):
item_doc = frappe.get_cached_doc("Item", item.item_code)
args = {
@@ -137,7 +145,7 @@ def calculate_item_values(self):
return
if not self.discount_amount_applied:
- for item in self.doc.get("items"):
+ for item in self._items:
self.doc.round_floats_in(item)
if item.discount_percentage == 100:
@@ -236,7 +244,7 @@ def determine_exclusive_rate(self):
if not any(cint(tax.included_in_print_rate) for tax in self.doc.get("taxes")):
return
- for item in self.doc.get("items"):
+ for item in self._items:
item_tax_map = self._load_item_tax_rate(item.item_tax_rate)
cumulated_tax_fraction = 0
total_inclusive_tax_amount_per_qty = 0
@@ -317,7 +325,7 @@ def calculate_net_total(self):
self.doc.total
) = self.doc.base_total = self.doc.net_total = self.doc.base_net_total = 0.0
- for item in self.doc.get("items"):
+ for item in self._items:
self.doc.total += item.amount
self.doc.total_qty += item.qty
self.doc.base_total += item.base_amount
@@ -354,7 +362,7 @@ def calculate_taxes(self):
]
)
- for n, item in enumerate(self.doc.get("items")):
+ for n, item in enumerate(self._items):
item_tax_map = self._load_item_tax_rate(item.item_tax_rate)
for i, tax in enumerate(self.doc.get("taxes")):
# tax_amount represents the amount of tax for the current step
@@ -363,7 +371,7 @@ def calculate_taxes(self):
# Adjust divisional loss to the last item
if tax.charge_type == "Actual":
actual_tax_dict[tax.idx] -= current_tax_amount
- if n == len(self.doc.get("items")) - 1:
+ if n == len(self._items) - 1:
current_tax_amount += actual_tax_dict[tax.idx]
# accumulate tax amount into tax.tax_amount
@@ -391,7 +399,7 @@ def calculate_taxes(self):
)
# set precision in the last item iteration
- if n == len(self.doc.get("items")) - 1:
+ if n == len(self._items) - 1:
self.round_off_totals(tax)
self._set_in_company_currency(tax, ["tax_amount", "tax_amount_after_discount_amount"])
@@ -570,7 +578,7 @@ def calculate_totals(self):
def calculate_total_net_weight(self):
if self.doc.meta.get_field("total_net_weight"):
self.doc.total_net_weight = 0.0
- for d in self.doc.items:
+ for d in self._items:
if d.total_weight:
self.doc.total_net_weight += d.total_weight
@@ -630,7 +638,7 @@ def apply_discount_amount(self):
if total_for_discount_amount:
# calculate item amount after Discount Amount
- for i, item in enumerate(self.doc.get("items")):
+ for i, item in enumerate(self._items):
distributed_amount = (
flt(self.doc.discount_amount) * item.net_amount / total_for_discount_amount
)
@@ -643,7 +651,7 @@ def apply_discount_amount(self):
self.doc.apply_discount_on == "Net Total"
or not taxes
or total_for_discount_amount == self.doc.net_total
- ) and i == len(self.doc.get("items")) - 1:
+ ) and i == len(self._items) - 1:
discount_amount_loss = flt(
self.doc.net_total - net_total - self.doc.discount_amount, self.doc.precision("net_total")
)
diff --git a/erpnext/public/js/controllers/taxes_and_totals.js b/erpnext/public/js/controllers/taxes_and_totals.js
index a87c3ec9514b..623b338e181c 100644
--- a/erpnext/public/js/controllers/taxes_and_totals.js
+++ b/erpnext/public/js/controllers/taxes_and_totals.js
@@ -91,6 +91,9 @@ erpnext.taxes_and_totals = class TaxesAndTotals extends erpnext.payments {
}
_calculate_taxes_and_totals() {
+ const is_quotation = this.frm.doc.doctype == "Quotation";
+ this.frm.doc._items = is_quotation ? this.filtered_items() : this.frm.doc.items;
+
this.validate_conversion_rate();
this.calculate_item_values();
this.initialize_taxes();
@@ -122,7 +125,7 @@ erpnext.taxes_and_totals = class TaxesAndTotals extends erpnext.payments {
calculate_item_values() {
var me = this;
if (!this.discount_amount_applied) {
- for (const item of this.frm.doc.items || []) {
+ for (const item of this.frm.doc._items || []) {
frappe.model.round_floats_in(item);
item.net_rate = item.rate;
item.qty = item.qty === undefined ? (me.frm.doc.is_return ? -1 : 1) : item.qty;
@@ -206,7 +209,7 @@ erpnext.taxes_and_totals = class TaxesAndTotals extends erpnext.payments {
});
if(has_inclusive_tax==false) return;
- $.each(me.frm.doc["items"] || [], function(n, item) {
+ $.each(me.frm.doc._items || [], function(n, item) {
var item_tax_map = me._load_item_tax_rate(item.item_tax_rate);
var cumulated_tax_fraction = 0.0;
var total_inclusive_tax_amount_per_qty = 0;
@@ -277,7 +280,7 @@ erpnext.taxes_and_totals = class TaxesAndTotals extends erpnext.payments {
var me = this;
this.frm.doc.total_qty = this.frm.doc.total = this.frm.doc.base_total = this.frm.doc.net_total = this.frm.doc.base_net_total = 0.0;
- $.each(this.frm.doc["items"] || [], function(i, item) {
+ $.each(this.frm.doc._items || [], function(i, item) {
me.frm.doc.total += item.amount;
me.frm.doc.total_qty += item.qty;
me.frm.doc.base_total += item.base_amount;
@@ -330,7 +333,7 @@ erpnext.taxes_and_totals = class TaxesAndTotals extends erpnext.payments {
}
});
- $.each(this.frm.doc["items"] || [], function(n, item) {
+ $.each(this.frm.doc._items || [], function(n, item) {
var item_tax_map = me._load_item_tax_rate(item.item_tax_rate);
$.each(me.frm.doc["taxes"] || [], function(i, tax) {
// tax_amount represents the amount of tax for the current step
@@ -339,7 +342,7 @@ erpnext.taxes_and_totals = class TaxesAndTotals extends erpnext.payments {
// Adjust divisional loss to the last item
if (tax.charge_type == "Actual") {
actual_tax_dict[tax.idx] -= current_tax_amount;
- if (n == me.frm.doc["items"].length - 1) {
+ if (n == me.frm.doc._items.length - 1) {
current_tax_amount += actual_tax_dict[tax.idx];
}
}
@@ -376,7 +379,7 @@ erpnext.taxes_and_totals = class TaxesAndTotals extends erpnext.payments {
}
// set precision in the last item iteration
- if (n == me.frm.doc["items"].length - 1) {
+ if (n == me.frm.doc._items.length - 1) {
me.round_off_totals(tax);
me.set_in_company_currency(tax,
["tax_amount", "tax_amount_after_discount_amount"]);
@@ -599,10 +602,11 @@ erpnext.taxes_and_totals = class TaxesAndTotals extends erpnext.payments {
_cleanup() {
this.frm.doc.base_in_words = this.frm.doc.in_words = "";
+ let items = this.frm.doc._items;
- if(this.frm.doc["items"] && this.frm.doc["items"].length) {
- if(!frappe.meta.get_docfield(this.frm.doc["items"][0].doctype, "item_tax_amount", this.frm.doctype)) {
- $.each(this.frm.doc["items"] || [], function(i, item) {
+ if(items && items.length) {
+ if(!frappe.meta.get_docfield(items[0].doctype, "item_tax_amount", this.frm.doctype)) {
+ $.each(items || [], function(i, item) {
delete item["item_tax_amount"];
});
}
@@ -655,7 +659,7 @@ erpnext.taxes_and_totals = class TaxesAndTotals extends erpnext.payments {
var net_total = 0;
// calculate item amount after Discount Amount
if (total_for_discount_amount) {
- $.each(this.frm.doc["items"] || [], function(i, item) {
+ $.each(this.frm.doc._items || [], function(i, item) {
distributed_amount = flt(me.frm.doc.discount_amount) * item.net_amount / total_for_discount_amount;
item.net_amount = flt(item.net_amount - distributed_amount,
precision("base_amount", item));
@@ -663,7 +667,7 @@ erpnext.taxes_and_totals = class TaxesAndTotals extends erpnext.payments {
// discount amount rounding loss adjustment if no taxes
if ((!(me.frm.doc.taxes || []).length || total_for_discount_amount==me.frm.doc.net_total || (me.frm.doc.apply_discount_on == "Net Total"))
- && i == (me.frm.doc.items || []).length - 1) {
+ && i == (me.frm.doc._items || []).length - 1) {
var discount_amount_loss = flt(me.frm.doc.net_total - net_total
- me.frm.doc.discount_amount, precision("net_total"));
item.net_amount = flt(item.net_amount + discount_amount_loss,
@@ -892,4 +896,8 @@ erpnext.taxes_and_totals = class TaxesAndTotals extends erpnext.payments {
}
}
+
+ filtered_items() {
+ return this.frm.doc.items.filter(item => !item["is_alternative"]);
+ }
};
diff --git a/erpnext/selling/doctype/quotation/quotation.js b/erpnext/selling/doctype/quotation/quotation.js
index b348bd35754f..81ef44d53ed9 100644
--- a/erpnext/selling/doctype/quotation/quotation.js
+++ b/erpnext/selling/doctype/quotation/quotation.js
@@ -90,7 +90,7 @@ erpnext.selling.QuotationController = class QuotationController extends erpnext.
|| frappe.datetime.get_diff(doc.valid_till, frappe.datetime.get_today()) >= 0) {
this.frm.add_custom_button(
__("Sales Order"),
- this.frm.cscript["Make Sales Order"],
+ () => this.make_sales_order(),
__("Create")
);
}
@@ -145,6 +145,20 @@ erpnext.selling.QuotationController = class QuotationController extends erpnext.
}
+ make_sales_order() {
+ var me = this;
+
+ let has_alternative_item = this.frm.doc.items.some((item) => item.is_alternative);
+ if (has_alternative_item) {
+ this.show_alternative_items_dialog();
+ } else {
+ frappe.model.open_mapped_doc({
+ method: "erpnext.selling.doctype.quotation.quotation.make_sales_order",
+ frm: me.frm
+ });
+ }
+ }
+
set_dynamic_field_label(){
if (this.frm.doc.quotation_to == "Customer")
{
@@ -220,17 +234,111 @@ erpnext.selling.QuotationController = class QuotationController extends erpnext.
}
})
}
+
+ show_alternative_items_dialog() {
+ let me = this;
+
+ const table_fields = [
+ {
+ fieldtype:"Data",
+ fieldname:"name",
+ label: __("Name"),
+ read_only: 1,
+ },
+ {
+ fieldtype:"Link",
+ fieldname:"item_code",
+ options: "Item",
+ label: __("Item Code"),
+ read_only: 1,
+ in_list_view: 1,
+ columns: 2,
+ formatter: (value, df, options, doc) => {
+ return doc.is_alternative ? `<span class="indicator yellow">${value}</span>` : value;
+ }
+ },
+ {
+ fieldtype:"Data",
+ fieldname:"description",
+ label: __("Description"),
+ in_list_view: 1,
+ read_only: 1,
+ },
+ {
+ fieldtype:"Currency",
+ fieldname:"amount",
+ label: __("Amount"),
+ options: "currency",
+ in_list_view: 1,
+ read_only: 1,
+ },
+ {
+ fieldtype:"Check",
+ fieldname:"is_alternative",
+ label: __("Is Alternative"),
+ read_only: 1,
+ }];
+
+
+ this.data = this.frm.doc.items.filter(
+ (item) => item.is_alternative || item.has_alternative_item
+ ).map((item) => {
+ return {
+ "name": item.name,
+ "item_code": item.item_code,
+ "description": item.description,
+ "amount": item.amount,
+ "is_alternative": item.is_alternative,
+ }
+ });
+
+ const dialog = new frappe.ui.Dialog({
+ title: __("Select Alternative Items for Sales Order"),
+ fields: [
+ {
+ fieldname: "info",
+ fieldtype: "HTML",
+ read_only: 1
+ },
+ {
+ fieldname: "alternative_items",
+ fieldtype: "Table",
+ cannot_add_rows: true,
+ in_place_edit: true,
+ reqd: 1,
+ data: this.data,
+ description: __("Select an item from each set to be used in the Sales Order."),
+ get_data: () => {
+ return this.data;
+ },
+ fields: table_fields
+ },
+ ],
+ primary_action: function() {
+ frappe.model.open_mapped_doc({
+ method: "erpnext.selling.doctype.quotation.quotation.make_sales_order",
+ frm: me.frm,
+ args: {
+ selected_items: dialog.fields_dict.alternative_items.grid.get_selected_children()
+ }
+ });
+ dialog.hide();
+ },
+ primary_action_label: __('Continue')
+ });
+
+ dialog.fields_dict.info.$wrapper.html(
+ `<p class="small text-muted">
+ <span class="indicator yellow"></span>
+ Alternative Items
+ </p>`
+ )
+ dialog.show();
+ }
};
cur_frm.script_manager.make(erpnext.selling.QuotationController);
-cur_frm.cscript['Make Sales Order'] = function() {
- frappe.model.open_mapped_doc({
- method: "erpnext.selling.doctype.quotation.quotation.make_sales_order",
- frm: cur_frm
- })
-}
-
frappe.ui.form.on("Quotation Item", "items_on_form_rendered", "packed_items_on_form_rendered", function(frm, cdt, cdn) {
// enable tax_amount field if Actual
})
diff --git a/erpnext/selling/doctype/quotation/quotation.py b/erpnext/selling/doctype/quotation/quotation.py
index 063813b2dc70..fc66db20d29e 100644
--- a/erpnext/selling/doctype/quotation/quotation.py
+++ b/erpnext/selling/doctype/quotation/quotation.py
@@ -35,6 +35,9 @@ def validate(self):
make_packing_list(self)
+ def before_submit(self):
+ self.set_has_alternative_item()
+
def validate_valid_till(self):
if self.valid_till and getdate(self.valid_till) < getdate(self.transaction_date):
frappe.throw(_("Valid till date cannot be before transaction date"))
@@ -59,7 +62,18 @@ def validate_shopping_cart_items(self):
title=_("Unpublished Item"),
)
+ def set_has_alternative_item(self):
+ """Mark 'Has Alternative Item' for rows."""
+ if not any(row.is_alternative for row in self.get("items")):
+ return
+
+ items_with_alternatives = self.get_rows_with_alternatives()
+ for row in self.get("items"):
+ if not row.is_alternative and row.name in items_with_alternatives:
+ row.has_alternative_item = 1
+
def get_ordered_status(self):
+ status = "Open"
ordered_items = frappe._dict(
frappe.db.get_all(
"Sales Order Item",
@@ -70,16 +84,40 @@ def get_ordered_status(self):
)
)
- status = "Open"
- if ordered_items:
- status = "Ordered"
+ if not ordered_items:
+ return status
- for item in self.get("items"):
- if item.qty > ordered_items.get(item.item_code, 0.0):
- status = "Partially Ordered"
+ has_alternatives = any(row.is_alternative for row in self.get("items"))
+ self._items = self.get_valid_items() if has_alternatives else self.get("items")
+
+ if any(row.qty > ordered_items.get(row.item_code, 0.0) for row in self._items):
+ status = "Partially Ordered"
+ else:
+ status = "Ordered"
return status
+ def get_valid_items(self):
+ """
+ Filters out items in an alternatives set that were not ordered.
+ """
+
+ def is_in_sales_order(row):
+ in_sales_order = bool(
+ frappe.db.exists(
+ "Sales Order Item", {"quotation_item": row.name, "item_code": row.item_code, "docstatus": 1}
+ )
+ )
+ return in_sales_order
+
+ def can_map(row) -> bool:
+ if row.is_alternative or row.has_alternative_item:
+ return is_in_sales_order(row)
+
+ return True
+
+ return list(filter(can_map, self.get("items")))
+
def is_fully_ordered(self):
return self.get_ordered_status() == "Ordered"
@@ -176,6 +214,22 @@ def print_other_charges(self, docname):
def on_recurring(self, reference_doc, auto_repeat_doc):
self.valid_till = None
+ def get_rows_with_alternatives(self):
+ rows_with_alternatives = []
+ table_length = len(self.get("items"))
+
+ for idx, row in enumerate(self.get("items")):
+ if row.is_alternative:
+ continue
+
+ if idx == (table_length - 1):
+ break
+
+ if self.get("items")[idx + 1].is_alternative:
+ rows_with_alternatives.append(row.name)
+
+ return rows_with_alternatives
+
def get_list_context(context=None):
from erpnext.controllers.website_list_for_contact import get_list_context
@@ -221,6 +275,8 @@ def _make_sales_order(source_name, target_doc=None, ignore_permissions=False):
)
)
+ selected_rows = [x.get("name") for x in frappe.flags.get("args", {}).get("selected_items", [])]
+
def set_missing_values(source, target):
if customer:
target.customer = customer.name
@@ -244,6 +300,24 @@ def update_item(obj, target, source_parent):
target.blanket_order = obj.blanket_order
target.blanket_order_rate = obj.blanket_order_rate
+ def can_map_row(item) -> bool:
+ """
+ Row mapping from Quotation to Sales order:
+ 1. If no selections, map all non-alternative rows (that sum up to the grand total)
+ 2. If selections: Is Alternative Item/Has Alternative Item: Map if selected and adequate qty
+ 3. If selections: Simple row: Map if adequate qty
+ """
+ has_qty = item.qty > 0
+
+ if not selected_rows:
+ return not item.is_alternative
+
+ if selected_rows and (item.is_alternative or item.has_alternative_item):
+ return (item.name in selected_rows) and has_qty
+
+ # Simple row
+ return has_qty
+
doclist = get_mapped_doc(
"Quotation",
source_name,
@@ -253,7 +327,7 @@ def update_item(obj, target, source_parent):
"doctype": "Sales Order Item",
"field_map": {"parent": "prevdoc_docname", "name": "quotation_item"},
"postprocess": update_item,
- "condition": lambda doc: doc.qty > 0,
+ "condition": can_map_row,
},
"Sales Taxes and Charges": {"doctype": "Sales Taxes and Charges", "add_if_empty": True},
"Sales Team": {"doctype": "Sales Team", "add_if_empty": True},
@@ -322,7 +396,11 @@ def update_item(obj, target, source_parent):
source_name,
{
"Quotation": {"doctype": "Sales Invoice", "validation": {"docstatus": ["=", 1]}},
- "Quotation Item": {"doctype": "Sales Invoice Item", "postprocess": update_item},
+ "Quotation Item": {
+ "doctype": "Sales Invoice Item",
+ "postprocess": update_item,
+ "condition": lambda row: not row.is_alternative,
+ },
"Sales Taxes and Charges": {"doctype": "Sales Taxes and Charges", "add_if_empty": True},
"Sales Team": {"doctype": "Sales Team", "add_if_empty": True},
},
diff --git a/erpnext/selling/doctype/quotation/test_quotation.py b/erpnext/selling/doctype/quotation/test_quotation.py
index cdf5f5d00c58..67f6518657eb 100644
--- a/erpnext/selling/doctype/quotation/test_quotation.py
+++ b/erpnext/selling/doctype/quotation/test_quotation.py
@@ -457,6 +457,139 @@ def test_packed_items_indices_are_reset_when_product_bundle_is_deleted_from_item
expected_index = id + 1
self.assertEqual(item.idx, expected_index)
+ def test_alternative_items_with_stock_items(self):
+ """
+ Check if taxes & totals considers only non-alternative items with:
+ - One set of non-alternative & alternative items [first 3 rows]
+ - One simple stock item
+ """
+ from erpnext.stock.doctype.item.test_item import make_item
+
+ item_list = []
+ stock_items = {
+ "_Test Simple Item 1": 100,
+ "_Test Alt 1": 120,
+ "_Test Alt 2": 110,
+ "_Test Simple Item 2": 200,
+ }
+
+ for item, rate in stock_items.items():
+ make_item(item, {"is_stock_item": 1})
+ item_list.append(
+ {
+ "item_code": item,
+ "qty": 1,
+ "rate": rate,
+ "is_alternative": bool("Alt" in item),
+ }
+ )
+
+ quotation = make_quotation(item_list=item_list, do_not_submit=1)
+ quotation.append(
+ "taxes",
+ {
+ "account_head": "_Test Account VAT - _TC",
+ "charge_type": "On Net Total",
+ "cost_center": "_Test Cost Center - _TC",
+ "description": "VAT",
+ "doctype": "Sales Taxes and Charges",
+ "rate": 10,
+ },
+ )
+ quotation.submit()
+
+ self.assertEqual(quotation.net_total, 300)
+ self.assertEqual(quotation.grand_total, 330)
+
+ def test_alternative_items_with_service_items(self):
+ """
+ Check if taxes & totals considers only non-alternative items with:
+ - One set of non-alternative & alternative service items [first 3 rows]
+ - One simple non-alternative service item
+ All having the same item code and unique item name/description due to
+ dynamic services
+ """
+ from erpnext.stock.doctype.item.test_item import make_item
+
+ item_list = []
+ service_items = {
+ "Tiling with Standard Tiles": 100,
+ "Alt Tiling with Durable Tiles": 150,
+ "Alt Tiling with Premium Tiles": 180,
+ "False Ceiling with Material #234": 190,
+ }
+
+ make_item("_Test Dynamic Service Item", {"is_stock_item": 0})
+
+ for name, rate in service_items.items():
+ item_list.append(
+ {
+ "item_code": "_Test Dynamic Service Item",
+ "item_name": name,
+ "description": name,
+ "qty": 1,
+ "rate": rate,
+ "is_alternative": bool("Alt" in name),
+ }
+ )
+
+ quotation = make_quotation(item_list=item_list, do_not_submit=1)
+ quotation.append(
+ "taxes",
+ {
+ "account_head": "_Test Account VAT - _TC",
+ "charge_type": "On Net Total",
+ "cost_center": "_Test Cost Center - _TC",
+ "description": "VAT",
+ "doctype": "Sales Taxes and Charges",
+ "rate": 10,
+ },
+ )
+ quotation.submit()
+
+ self.assertEqual(quotation.net_total, 290)
+ self.assertEqual(quotation.grand_total, 319)
+
+ def test_alternative_items_sales_order_mapping_with_stock_items(self):
+ from erpnext.selling.doctype.quotation.quotation import make_sales_order
+ from erpnext.stock.doctype.item.test_item import make_item
+
+ frappe.flags.args = frappe._dict()
+ item_list = []
+ stock_items = {
+ "_Test Simple Item 1": 100,
+ "_Test Alt 1": 120,
+ "_Test Alt 2": 110,
+ "_Test Simple Item 2": 200,
+ }
+
+ for item, rate in stock_items.items():
+ make_item(item, {"is_stock_item": 1})
+ item_list.append(
+ {
+ "item_code": item,
+ "qty": 1,
+ "rate": rate,
+ "is_alternative": bool("Alt" in item),
+ "warehouse": "_Test Warehouse - _TC",
+ }
+ )
+
+ quotation = make_quotation(item_list=item_list)
+
+ frappe.flags.args.selected_items = [quotation.items[2]]
+ sales_order = make_sales_order(quotation.name)
+ sales_order.delivery_date = add_days(sales_order.transaction_date, 10)
+ sales_order.save()
+
+ self.assertEqual(sales_order.items[0].item_code, "_Test Alt 2")
+ self.assertEqual(sales_order.items[1].item_code, "_Test Simple Item 2")
+ self.assertEqual(sales_order.net_total, 310)
+
+ sales_order.submit()
+ quotation.reload()
+ self.assertEqual(quotation.status, "Ordered")
+
test_records = frappe.get_test_records("Quotation")
diff --git a/erpnext/selling/doctype/quotation_item/quotation_item.json b/erpnext/selling/doctype/quotation_item/quotation_item.json
index ca7dfd23378f..f2aabc524004 100644
--- a/erpnext/selling/doctype/quotation_item/quotation_item.json
+++ b/erpnext/selling/doctype/quotation_item/quotation_item.json
@@ -49,6 +49,8 @@
"pricing_rules",
"stock_uom_rate",
"is_free_item",
+ "is_alternative",
+ "has_alternative_item",
"section_break_43",
"valuation_rate",
"column_break_45",
@@ -643,12 +645,28 @@
"no_copy": 1,
"options": "currency",
"read_only": 1
+ },
+ {
+ "default": "0",
+ "fieldname": "is_alternative",
+ "fieldtype": "Check",
+ "label": "Is Alternative",
+ "print_hide": 1
+ },
+ {
+ "default": "0",
+ "fieldname": "has_alternative_item",
+ "fieldtype": "Check",
+ "hidden": 1,
+ "label": "Has Alternative Item",
+ "print_hide": 1,
+ "read_only": 1
}
],
"idx": 1,
"istable": 1,
"links": [],
- "modified": "2022-12-25 02:49:53.926625",
+ "modified": "2023-02-06 11:00:07.042364",
"modified_by": "Administrator",
"module": "Selling",
"name": "Quotation Item",
@@ -656,5 +674,6 @@
"permissions": [],
"sort_field": "modified",
"sort_order": "DESC",
+ "states": [],
"track_changes": 1
}
\ No newline at end of file
diff --git a/erpnext/selling/doctype/sales_order/sales_order.js b/erpnext/selling/doctype/sales_order/sales_order.js
index fb64772479b5..a0a63f61d2b4 100644
--- a/erpnext/selling/doctype/sales_order/sales_order.js
+++ b/erpnext/selling/doctype/sales_order/sales_order.js
@@ -275,7 +275,7 @@ erpnext.selling.SalesOrderController = class SalesOrderController extends erpnex
if (this.frm.doc.docstatus===0) {
this.frm.add_custom_button(__('Quotation'),
function() {
- erpnext.utils.map_current_doc({
+ let d = erpnext.utils.map_current_doc({
method: "erpnext.selling.doctype.quotation.quotation.make_sales_order",
source_doctype: "Quotation",
target: me.frm,
@@ -293,7 +293,16 @@ erpnext.selling.SalesOrderController = class SalesOrderController extends erpnex
docstatus: 1,
status: ["!=", "Lost"]
}
- })
+ });
+
+ setTimeout(() => {
+ d.$parent.append(`
+ <span class='small text-muted'>
+ ${__("Note: Please create Sales Orders from individual Quotations to select from among Alternative Items.")}
+ </span>
+ `);
+ }, 200);
+
}, __("Get Items From"));
}
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
}
|
|
frappe__erpnext-30870@d4779e3
|
frappe/erpnext
|
Python
| 30,870
|
fix(UX): misleading stock entry lables
|
1. flip field location (consumption on left, receipt on right like warehouses)
2. Change incorrect label on value difference field
closes https://github.com/frappe/erpnext/issues/30867
before
<img width="1132" alt="Screenshot 2022-05-02 at 2 43 50 PM" src="https://user-images.githubusercontent.com/9079960/166211942-9ac6678e-ba23-4a2f-b5fc-d5f416915fea.png">
after

|
2022-05-02T09:26:29Z
|
Field "Value Difference" in Stock Entry doctype calculation wrong way round.
### Information about bug
I think the Value Difference field is supposed to calculate the value added to an item. The text says "Total Value Difference (Out - In)" but the code calculates (In - Out).
From stock_entry.py
`def set_total_incoming_outgoing_value(self):`
`self.total_incoming_value = self.total_outgoing_value = 0.0`
`for d in self.get("items"):`
`if d.t_warehouse:`
`self.total_incoming_value += flt(d.amount)`
`if d.s_warehouse:`
`self.total_outgoing_value += flt(d.amount)`
`self.value_difference = self.total_incoming_value - self.total_outgoing_value`
### Module
stock
### Version
v13.21.0
v13.21.0
### Installation method
manual install
### Relevant log output / Stack trace / Full Error Message.
_No response_
|
It's correct.
Example:
Additional cost is added to inventory value using repack:
<img width="1132" alt="Screenshot 2022-05-02 at 2 43 50 PM" src="https://user-images.githubusercontent.com/9079960/166211942-9ac6678e-ba23-4a2f-b5fc-d5f416915fea.png">
Outgoing = consumption
incoming = receipt
difference ~= difference added in inventory value.
Anyway, this field isn't used anywhere in business logic.
The field order and field labels are confusing maybe?
|
[
{
"body": "### Information about bug\r\n\r\nI think the Value Difference field is supposed to calculate the value added to an item. The text says \"Total Value Difference(Out - In) but the code calculates (In - Out)\r\n\r\nFrom stock_entry.py\r\n`def set_total_incoming_outgoing_value(self):`\r\n\t\t`self.total_incoming_value = self.total_outgoing_value = 0.0`\r\n\t\t`for d in self.get(\"items\"):`\r\n\t\t\t`if d.t_warehouse:`\r\n\t\t\t\t`self.total_incoming_value += flt(d.amount)`\r\n\t\t\t`if d.s_warehouse:`\r\n\t\t\t\t`self.total_outgoing_value += flt(d.amount)`\r\n\r\n\t\t`self.value_difference = self.total_incoming_value - self.total_outgoing_value`\r\n\r\n\r\n\r\n### Module\r\n\r\nstock\r\n\r\n### Version\r\n\r\nv13.21.0\r\nv13.21.0\r\n\r\n### Installation method\r\n\r\nmanual install\r\n\r\n### Relevant log output / Stack trace / Full Error Message.\r\n\r\n_No response_",
"number": 30867,
"title": "Field \"Value Difference\" in Stock Entry doctype calculation wrong way round."
}
] |
dcda55641b822f62b9808c36027b59b5eaf697b3
|
{
"head_commit": "d4779e3a8416feca156b234ef0feb791f2c02f2e",
"head_commit_message": "fix(UX): misleading stock entry lables",
"patch_to_review": "diff --git a/erpnext/stock/doctype/stock_entry/stock_entry.json b/erpnext/stock/doctype/stock_entry/stock_entry.json\nindex c38dfaa1c844..e658040bd0a6 100644\n--- a/erpnext/stock/doctype/stock_entry/stock_entry.json\n+++ b/erpnext/stock/doctype/stock_entry/stock_entry.json\n@@ -46,9 +46,9 @@\n \"items\",\n \"get_stock_and_rate\",\n \"section_break_19\",\n- \"total_incoming_value\",\n- \"column_break_22\",\n \"total_outgoing_value\",\n+ \"column_break_22\",\n+ \"total_incoming_value\",\n \"value_difference\",\n \"additional_costs_section\",\n \"additional_costs\",\n@@ -374,7 +374,7 @@\n {\n \"fieldname\": \"total_incoming_value\",\n \"fieldtype\": \"Currency\",\n- \"label\": \"Total Incoming Value\",\n+ \"label\": \"Total Incoming Value (Receipt)\",\n \"options\": \"Company:company:default_currency\",\n \"print_hide\": 1,\n \"read_only\": 1\n@@ -386,7 +386,7 @@\n {\n \"fieldname\": \"total_outgoing_value\",\n \"fieldtype\": \"Currency\",\n- \"label\": \"Total Outgoing Value\",\n+ \"label\": \"Total Outgoing Value (Consumption)\",\n \"options\": \"Company:company:default_currency\",\n \"print_hide\": 1,\n \"read_only\": 1\n@@ -394,7 +394,7 @@\n {\n \"fieldname\": \"value_difference\",\n \"fieldtype\": \"Currency\",\n- \"label\": \"Total Value Difference (Out - In)\",\n+ \"label\": \"Total Value Difference (In - Out)\",\n \"options\": \"Company:company:default_currency\",\n \"print_hide_if_no_value\": 1,\n \"read_only\": 1\n@@ -619,7 +619,7 @@\n \"index_web_pages_for_search\": 1,\n \"is_submittable\": 1,\n \"links\": [],\n- \"modified\": \"2022-02-07 12:55:14.614077\",\n+ \"modified\": \"2022-05-02 05:21:39.060501\",\n \"modified_by\": \"Administrator\",\n \"module\": \"Stock\",\n \"name\": \"Stock Entry\",\n"
}
|
[
{
"diff_hunk": "@@ -386,15 +386,15 @@\n {\n \"fieldname\": \"total_outgoing_value\",\n \"fieldtype\": \"Currency\",\n- \"label\": \"Total Outgoing Value\",\n+ \"label\": \"Total Outgoing Value (Consumption)\",\n \"options\": \"Company:company:default_currency\",\n \"print_hide\": 1,\n \"read_only\": 1\n },\n {\n \"fieldname\": \"value_difference\",\n \"fieldtype\": \"Currency\",\n- \"label\": \"Total Value Difference (Out - In)\",\n+ \"label\": \"Total Value Difference (In - Out)\",",
"line": null,
"original_line": 397,
"original_start_line": null,
"path": "erpnext/stock/doctype/stock_entry/stock_entry.json",
"start_line": null,
"text": "@user1:\n```suggestion\r\n \"label\": \"Total Value Difference (Incoming - Outgoing)\",\r\n```\r\nA bit hard to visually cross check right now because instinct was to do LHS - RHS"
}
] |
388de113875bbba6b8c4d7c1539466db32d0c04b
|
diff --git a/erpnext/stock/doctype/stock_entry/stock_entry.json b/erpnext/stock/doctype/stock_entry/stock_entry.json
index c38dfaa1c844..f56e059f81c3 100644
--- a/erpnext/stock/doctype/stock_entry/stock_entry.json
+++ b/erpnext/stock/doctype/stock_entry/stock_entry.json
@@ -46,9 +46,9 @@
"items",
"get_stock_and_rate",
"section_break_19",
- "total_incoming_value",
- "column_break_22",
"total_outgoing_value",
+ "column_break_22",
+ "total_incoming_value",
"value_difference",
"additional_costs_section",
"additional_costs",
@@ -374,7 +374,7 @@
{
"fieldname": "total_incoming_value",
"fieldtype": "Currency",
- "label": "Total Incoming Value",
+ "label": "Total Incoming Value (Receipt)",
"options": "Company:company:default_currency",
"print_hide": 1,
"read_only": 1
@@ -386,7 +386,7 @@
{
"fieldname": "total_outgoing_value",
"fieldtype": "Currency",
- "label": "Total Outgoing Value",
+ "label": "Total Outgoing Value (Consumption)",
"options": "Company:company:default_currency",
"print_hide": 1,
"read_only": 1
@@ -394,7 +394,7 @@
{
"fieldname": "value_difference",
"fieldtype": "Currency",
- "label": "Total Value Difference (Out - In)",
+ "label": "Total Value Difference (Incoming - Outgoing)",
"options": "Company:company:default_currency",
"print_hide_if_no_value": 1,
"read_only": 1
@@ -619,7 +619,7 @@
"index_web_pages_for_search": 1,
"is_submittable": 1,
"links": [],
- "modified": "2022-02-07 12:55:14.614077",
+ "modified": "2022-05-02 05:21:39.060501",
"modified_by": "Administrator",
"module": "Stock",
"name": "Stock Entry",
|
{
"difficulty": "low",
"estimated_review_effort": 1,
"problem_domain": "Bug Fixes"
}
|
frappe__erpnext-30865@5e0dee3
|
frappe/erpnext
|
Python
| 30,865
|
fix: Period Closing Voucher is considering GL entries with is_cancelled=1
|
**Period Closing Voucher is considering GL entry with is_cancelled=1 as well**
Frappe version - v13.23.0 (version-13)
ERPNext version - v13.23.0 (version-13)
Module
accounts
Version
Frappe version - v13.27.0 (version-13)
ERPNext version - v13.27.1 (version-13)
Installation method
FrappeCloud
Currently there is a condition to check cancelled documents, but it's added for Account; changed it to GL Entry.
#30849
|
2022-05-02T07:42:35Z
|
Period Closing Voucher is considering GL entry with is_cancelled=1 as well
### Information about bug
The period closing voucher is considering the canceled GL entries as well.
If we cancel any transaction and do not delete it, and then try to post the period closing voucher, the ledger balance does not match the period closing voucher.
Steps to reproduce -
1. Create one Journal entry with a sales account
2. Cancel this journal entry (do not delete it)
3. Post the period closing voucher (it considers the sum of debit of the GL account - the sum of credit of the GL account)
4. Step 3 should not consider GL entries that have is_cancelled=1.
### Module
accounts
### Version
Currently, I am using the below git repo -
Frappe version-13
ERPNext version-13
### Installation method
manual install
### Relevant log output / Stack trace / Full Error Message.
```shell
There is no error but the data which is posted in the period closing voucher is not matching the general ledger of the particular account.
In General Ledger Report is showing the correct closing balances but while posting the period closing voucher it is actually calculating the amount with a SQL query in period_closing_voucher.py.
Problematic method -
def get_pl_balances(self):
	"""Get balance for dimension-wise pl accounts"""

	dimension_fields = ["t1.cost_center", "t1.finance_book"]

	self.accounting_dimensions = get_accounting_dimensions()
	for dimension in self.accounting_dimensions:
		dimension_fields.append("t1.{0}".format(dimension))

	return frappe.db.sql(
		"""
		select
			t1.account, t2.account_currency, {dimension_fields},
			sum(t1.debit_in_account_currency) - sum(t1.credit_in_account_currency) as bal_in_account_currency,
			sum(t1.debit) - sum(t1.credit) as bal_in_company_currency
		from `tabGL Entry` t1, `tabAccount` t2
		where t1.account = t2.name and t2.report_type = 'Profit and Loss'
		and t2.docstatus < 2 and t2.company = %s
		and t1.posting_date between %s and %s
		group by t1.account, {dimension_fields}
	""".format(
			dimension_fields=", ".join(dimension_fields)
		),
		(self.company, self.get("year_start_date"), self.posting_date),
		as_dict=1,
	)
```
|
[
{
"body": "### Information about bug\n\nThe period closing voucher is considering the canceled GL entries as well. \r\n\r\nIf we cancel any transaction and do not delete it and try to post the period closing voucher then the balance of ledger is not mataching with a period closing voucher. \r\n\r\nSteps to reproduce -\r\n1. Create one Journal entry with a sales account \r\n2. Cancel this journal entry (do not delete it)\r\n3. Post the period closing voucher ( it is considering the sum of debit GL account - the sum of credit of GL account) \r\n4. Point no 3 should not consider if the GL entry is having is_cancelled=1.\n\n### Module\n\naccounts\n\n### Version\n\nCurrently, I am using the below git repo -\r\nFrappe version-13\r\nERPNext version-13\n\n### Installation method\n\nmanual install\n\n### Relevant log output / Stack trace / Full Error Message.\n\n```shell\nThere is no error but the data which is posted in the period closing voucher is not matching the general ledger of the particular account.\r\n\r\nIn General Ledger Report is showing the correct closing balances but while posting the period closing voucher it is actually calculating the amount with a SQL query in period_closing_voucher.py. \r\n\r\nProblematic method - \r\n\r\n\tdef get_pl_balances(self):\r\n\t\t\"\"\"Get balance for dimension-wise pl accounts\"\"\"\r\n\r\n\t\tdimension_fields = [\"t1.cost_center\", \"t1.finance_book\"]\r\n\r\n\t\tself.accounting_dimensions = get_accounting_dimensions()\r\n\t\tfor dimension in self.accounting_dimensions:\r\n\t\t\tdimension_fields.append(\"t1.{0}\".format(dimension))\r\n\r\n\t\treturn frappe.db.sql(\r\n\t\t\t\"\"\"\r\n\t\t\tselect\r\n\t\t\t\tt1.account, t2.account_currency, {dimension_fields},\r\n\t\t\t\tsum(t1.debit_in_account_currency) - sum(t1.credit_in_account_currency) as bal_in_account_currency,\r\n\t\t\t\tsum(t1.debit) - sum(t1.credit) as bal_in_company_currency\r\n\t\t\tfrom `tabGL Entry` t1, `tabAccount` t2\r\n\t\t\twhere t1.account = t2.name and t2.report_type = 'Profit and Loss'\r\n\t\t\tand t2.docstatus < 2 and t2.company = %s\r\n\t\t\tand t1.posting_date between %s and %s\r\n\t\t\tgroup by t1.account, {dimension_fields}\r\n\t\t\"\"\".format(\r\n\t\t\t\tdimension_fields=\", \".join(dimension_fields)\r\n\t\t\t),\r\n\t\t\t(self.company, self.get(\"year_start_date\"), self.posting_date),\r\n\t\t\tas_dict=1,\r\n\t\t)\n```\n",
"number": 30849,
"title": "Period Closing Voucher is considering GL entry with is_cancelled=1 as well"
}
] |
0b9a59c605fe02f49f358b1a3c5a58c1eadec734
|
{
"head_commit": "5e0dee3dcd30e7ea190d74061ccde47a60a8bd2f",
"head_commit_message": "fix: Period Closing Voucher - Period Closing Voucher is considering GL entry with is_cancelled=1 as well",
"patch_to_review": "diff --git a/erpnext/accounts/doctype/period_closing_voucher/period_closing_voucher.py b/erpnext/accounts/doctype/period_closing_voucher/period_closing_voucher.py\nindex f66cf1c9f1a7..2b1d3e6d8117 100644\n--- a/erpnext/accounts/doctype/period_closing_voucher/period_closing_voucher.py\n+++ b/erpnext/accounts/doctype/period_closing_voucher/period_closing_voucher.py\n@@ -167,7 +167,7 @@ def get_pl_balances(self):\n \t\t\t\tsum(t1.debit) - sum(t1.credit) as bal_in_company_currency\n \t\t\tfrom `tabGL Entry` t1, `tabAccount` t2\n \t\t\twhere t1.account = t2.name and t2.report_type = 'Profit and Loss'\n-\t\t\tand t2.docstatus < 2 and t2.company = %s\n+\t\t\tand t1.docstatus < 2 and t2.company = %s\n \t\t\tand t1.posting_date between %s and %s\n \t\t\tgroup by t1.account, {dimension_fields}\n \t\t\"\"\".format(\n"
}
|
[
{
"diff_hunk": "@@ -167,7 +167,7 @@ def get_pl_balances(self):\n \t\t\t\tsum(t1.debit) - sum(t1.credit) as bal_in_company_currency\n \t\t\tfrom `tabGL Entry` t1, `tabAccount` t2\n \t\t\twhere t1.account = t2.name and t2.report_type = 'Profit and Loss'\n-\t\t\tand t2.docstatus < 2 and t2.company = %s\n+\t\t\tand t1.docstatus < 2 and t2.company = %s",
"line": null,
"original_line": 170,
"original_start_line": null,
"path": "erpnext/accounts/doctype/period_closing_voucher/period_closing_voucher.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n\t\t\tand t2.docstatus < 2 and t2.company = %s\r\n```"
},
{
"diff_hunk": "@@ -167,7 +167,7 @@ def get_pl_balances(self):\n \t\t\t\tsum(t1.debit) - sum(t1.credit) as bal_in_company_currency\n \t\t\tfrom `tabGL Entry` t1, `tabAccount` t2\n \t\t\twhere t1.account = t2.name and t2.report_type = 'Profit and Loss'",
"line": null,
"original_line": 169,
"original_start_line": null,
"path": "erpnext/accounts/doctype/period_closing_voucher/period_closing_voucher.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n\t\t\twhere t1.is_cancelled = 0 and t1.account = t2.name and t2.report_type = 'Profit and Loss'\r\n```"
}
] |
681529c6825c86f8f04327a640b4b05a7cc4456d
|
diff --git a/erpnext/accounts/doctype/period_closing_voucher/period_closing_voucher.py b/erpnext/accounts/doctype/period_closing_voucher/period_closing_voucher.py
index f66cf1c9f1a7..53b1c64c4603 100644
--- a/erpnext/accounts/doctype/period_closing_voucher/period_closing_voucher.py
+++ b/erpnext/accounts/doctype/period_closing_voucher/period_closing_voucher.py
@@ -166,7 +166,7 @@ def get_pl_balances(self):
sum(t1.debit_in_account_currency) - sum(t1.credit_in_account_currency) as bal_in_account_currency,
sum(t1.debit) - sum(t1.credit) as bal_in_company_currency
from `tabGL Entry` t1, `tabAccount` t2
- where t1.account = t2.name and t2.report_type = 'Profit and Loss'
+ where t1.is_cancelled = 0 and t1.account = t2.name and t2.report_type = 'Profit and Loss'
and t2.docstatus < 2 and t2.company = %s
and t1.posting_date between %s and %s
group by t1.account, {dimension_fields}
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
|
frappe__erpnext-32091@2a100ab
|
frappe/erpnext
|
Python
| 32,091
|
fix: Migrate old lead notes as per the new format
|
fixes #31800
|
2022-09-05T10:03:52Z
|
notes for leads created in V13 generate exception after upgrade to V14
### Information about bug
All leads created with version V13 that contain a note generate an exception after the upgrade to V14 and are inaccessible.
New Leads work fine, but the Notes field is missing in the frontend even though the "notes" column in the tabLead table is still filled with the original note.
As far as I can see, Notes have a dedicated tab in V14, but the notes from V13 have not been converted to the new format.
Thanks for the support.
### Module
CRM
### Version
Frappe Version 14.0.0
ERPNext Version 14.0.0
### Installation method
docker
### Relevant log output / Stack trace / Full Error Message.
```shell
### App Versions
{
"erpnext": "14.0.0",
"frappe": "14.0.0"
}
```
### Route
```
Form/Lead/CRM-LEAD-2022-00073
```
### Traceback
```
Traceback (most recent call last):
File "apps/frappe/frappe/app.py", line 69, in application
response = frappe.api.handle()
File "apps/frappe/frappe/api.py", line 54, in handle
return frappe.handler.handle()
File "apps/frappe/frappe/handler.py", line 45, in handle
data = execute_cmd(cmd)
File "apps/frappe/frappe/handler.py", line 83, in execute_cmd
return frappe.call(method, **frappe.form_dict)
File "apps/frappe/frappe/__init__.py", line 1581, in call
return fn(*args, **newargs)
File "apps/frappe/frappe/desk/form/load.py", line 37, in getdoc
doc = frappe.get_doc(doctype, name)
File "apps/frappe/frappe/__init__.py", line 1172, in get_doc
doc = frappe.model.document.get_doc(*args, **kwargs)
File "apps/frappe/frappe/model/document.py", line 73, in get_doc
return controller(*args, **kwargs)
File "apps/erpnext/erpnext/controllers/accounts_controller.py", line 80, in __init__
super(AccountsController, self).__init__(*args, **kwargs)
File "apps/frappe/frappe/model/document.py", line 105, in __init__
self.load_from_db()
File "apps/frappe/frappe/model/document.py", line 151, in load_from_db
super().__init__(d)
File "apps/frappe/frappe/model/base_document.py", line 109, in __init__
self.update(d)
File "apps/frappe/frappe/model/base_document.py", line 156, in update
self.set(key, value)
File "apps/frappe/frappe/model/base_document.py", line 208, in set
self.extend(key, value)
File "apps/frappe/frappe/model/base_document.py", line 249, in extend
self.append(key, v)
File "apps/frappe/frappe/model/base_document.py", line 234, in append
value = self._init_child(value, key)
File "apps/frappe/frappe/model/base_document.py", line 262, in _init_child
value["doctype"] = doctype
TypeError: 'str' object does not support item assignment
```
### Request Data
```
{
"type": "GET",
"args": {
"doctype": "Lead",
"name": "CRM-LEAD-2022-00734"
},
"headers": {},
"error_handlers": {},
"url": "/api/method/frappe.desk.form.load.getdoc"
}
```
### Response Data
```
{
"exception": "TypeError: 'str' object does not support item assignment"
}
```
```
|
It looks like even though the `notes` field was changed from `Text Editor` to `Table` in https://github.com/frappe/erpnext/pull/31311, it wasn't patched.
|
[
{
"body": "### Information about bug\n\nAll leads created with the version V13 containing a note generate a exception after the upgrade to V14 and are inaccessible.\r\nNew Leads work fine but the Notes field is missing in the frontend even if in the tabLead table there's the \"notes\" column filled with the original note.\r\n\r\nAs I can see Notes have a dedicated tab in V14 but all the notes in the V13 have not been converted in the new format.\r\n\r\nThanks for the support.\n\n### Module\n\nCRM\n\n### Version\n\nFrappe Version 14.0.0\r\nERPNext Version 14.0.0\n\n### Installation method\n\ndocker\n\n### Relevant log output / Stack trace / Full Error Message.\n\n```shell\n### App Versions\r\n\r\n{\r\n\t\"erpnext\": \"14.0.0\",\r\n\t\"frappe\": \"14.0.0\"\r\n}\r\n```\r\n### Route\r\n```\r\nForm/Lead/CRM-LEAD-2022-00073\r\n```\r\n### Trackeback\r\n```\r\nTraceback (most recent call last):\r\n File \"apps/frappe/frappe/app.py\", line 69, in application\r\n response = frappe.api.handle()\r\n File \"apps/frappe/frappe/api.py\", line 54, in handle\r\n return frappe.handler.handle()\r\n File \"apps/frappe/frappe/handler.py\", line 45, in handle\r\n data = execute_cmd(cmd)\r\n File \"apps/frappe/frappe/handler.py\", line 83, in execute_cmd\r\n return frappe.call(method, **frappe.form_dict)\r\n File \"apps/frappe/frappe/__init__.py\", line 1581, in call\r\n return fn(*args, **newargs)\r\n File \"apps/frappe/frappe/desk/form/load.py\", line 37, in getdoc\r\n doc = frappe.get_doc(doctype, name)\r\n File \"apps/frappe/frappe/__init__.py\", line 1172, in get_doc\r\n doc = frappe.model.document.get_doc(*args, **kwargs)\r\n File \"apps/frappe/frappe/model/document.py\", line 73, in get_doc\r\n return controller(*args, **kwargs)\r\n File \"apps/erpnext/erpnext/controllers/accounts_controller.py\", line 80, in __init__\r\n super(AccountsController, self).__init__(*args, **kwargs)\r\n File \"apps/frappe/frappe/model/document.py\", line 105, in __init__\r\n self.load_from_db()\r\n File \"apps/frappe/frappe/model/document.py\", line 151, in load_from_db\r\n super().__init__(d)\r\n File \"apps/frappe/frappe/model/base_document.py\", line 109, in __init__\r\n self.update(d)\r\n File \"apps/frappe/frappe/model/base_document.py\", line 156, in update\r\n self.set(key, value)\r\n File \"apps/frappe/frappe/model/base_document.py\", line 208, in set\r\n self.extend(key, value)\r\n File \"apps/frappe/frappe/model/base_document.py\", line 249, in extend\r\n self.append(key, v)\r\n File \"apps/frappe/frappe/model/base_document.py\", line 234, in append\r\n value = self._init_child(value, key)\r\n File \"apps/frappe/frappe/model/base_document.py\", line 262, in _init_child\r\n value[\"doctype\"] = doctype\r\nTypeError: 'str' object does not support item assignment\r\n\r\n```\r\n### Request Data\r\n```\r\n{\r\n\t\"type\": \"GET\",\r\n\t\"args\": {\r\n\t\t\"doctype\": \"Lead\",\r\n\t\t\"name\": \"CRM-LEAD-2022-00734\"\r\n\t},\r\n\t\"headers\": {},\r\n\t\"error_handlers\": {},\r\n\t\"url\": \"/api/method/frappe.desk.form.load.getdoc\"\r\n}\r\n```\r\n### Response Data\r\n```\r\n{\r\n\t\"exception\": \"TypeError: 'str' object does not support item assignment\"\r\n}\r\n```\n```\n",
"number": 31800,
"title": "notes for leads created in V13 generate exception after upgrade to V14 "
}
] |
e00ece7a788c31d810857e53bdeaea506fd15492
|
{
"head_commit": "2a100abef19d4db9566b7b760b1c029e1609f3e1",
"head_commit_message": "perf: lesser SQL queries and no validation\n\nCo-authored-by: Sagar Vora <[email protected]>",
"patch_to_review": "diff --git a/erpnext/patches.txt b/erpnext/patches.txt\nindex 4729add16b38..f48b2a1bb0af 100644\n--- a/erpnext/patches.txt\n+++ b/erpnext/patches.txt\n@@ -307,6 +307,7 @@ erpnext.patches.v13_0.job_card_status_on_hold\n erpnext.patches.v14_0.copy_is_subcontracted_value_to_is_old_subcontracting_flow\n erpnext.patches.v14_0.migrate_gl_to_payment_ledger\n erpnext.patches.v14_0.crm_ux_cleanup\n+erpnext.patches.v14_0.migrate_existing_lead_notes_as_per_the_new_format\n erpnext.patches.v14_0.remove_india_localisation # 14-07-2022\n erpnext.patches.v13_0.fix_number_and_frequency_for_monthly_depreciation\n erpnext.patches.v14_0.remove_hr_and_payroll_modules # 20-07-2022\ndiff --git a/erpnext/patches/v14_0/migrate_existing_lead_notes_as_per_the_new_format.py b/erpnext/patches/v14_0/migrate_existing_lead_notes_as_per_the_new_format.py\nnew file mode 100644\nindex 000000000000..6ba5a9ff2143\n--- /dev/null\n+++ b/erpnext/patches/v14_0/migrate_existing_lead_notes_as_per_the_new_format.py\n@@ -0,0 +1,21 @@\n+import frappe\n+from frappe.utils import cstr, strip_html\n+\n+\n+def execute():\n+\tfor doctype in (\"Lead\", \"Prospect\"):\n+\t\tif not frappe.db.has_column(doctype, \"notes\"):\n+\t\t\tcontinue\n+\n+\t\tdt = frappe.qb.DocType(doctype)\n+\t\trecords = (\n+\t\t\tfrappe.qb.from_(dt)\n+\t\t\t.select(dt.name, dt.notes, dt.modified_by, dt.modified)\n+\t\t\t.where(dt.notes.isnotnull() & dt.notes != \"\")\n+\t\t).run()\n+\n+\t\tfor d in records:\n+\t\t\tif strip_html(cstr(d.notes)).strip():\n+\t\t\t\tdoc = frappe.get_doc(doctype, d.name)\n+\t\t\t\tdoc.append(\"notes\", {\"note\": d.notes, \"added_by\": d.modified_by, \"added_on\": d.modified})\n+\t\t\t\tdoc.update_child_table(\"notes\")\n"
}
|
[
{
"diff_hunk": "@@ -0,0 +1,21 @@\n+import frappe\n+from frappe.utils import cstr, strip_html\n+\n+\n+def execute():\n+\tfor doctype in (\"Lead\", \"Prospect\"):",
"line": null,
"original_line": 6,
"original_start_line": null,
"path": "erpnext/patches/v14_0/migrate_existing_lead_notes_as_per_the_new_format.py",
"start_line": null,
"text": "@user1:\nMaybe add `Opportunity` too? There is no harm. If user had custom field, it will get migrated. If not, will continue after `frappe.db.has_column`\r\n\r\n```suggestion\r\n\tfor doctype in (\"Lead\", \"Prospect\", \"Opportunity\"):\r\n```"
}
] |
51c37aeee326a7648564bb969e2e09416e465e36
|
diff --git a/erpnext/patches.txt b/erpnext/patches.txt
index d780213209c8..2a0ca8c49619 100644
--- a/erpnext/patches.txt
+++ b/erpnext/patches.txt
@@ -307,6 +307,7 @@ erpnext.patches.v13_0.job_card_status_on_hold
erpnext.patches.v14_0.copy_is_subcontracted_value_to_is_old_subcontracting_flow
erpnext.patches.v14_0.migrate_gl_to_payment_ledger
erpnext.patches.v14_0.crm_ux_cleanup
+erpnext.patches.v14_0.migrate_existing_lead_notes_as_per_the_new_format
erpnext.patches.v14_0.remove_india_localisation # 14-07-2022
erpnext.patches.v13_0.fix_number_and_frequency_for_monthly_depreciation
erpnext.patches.v14_0.remove_hr_and_payroll_modules # 20-07-2022
diff --git a/erpnext/patches/v14_0/migrate_existing_lead_notes_as_per_the_new_format.py b/erpnext/patches/v14_0/migrate_existing_lead_notes_as_per_the_new_format.py
new file mode 100644
index 000000000000..032aeccc23d7
--- /dev/null
+++ b/erpnext/patches/v14_0/migrate_existing_lead_notes_as_per_the_new_format.py
@@ -0,0 +1,23 @@
+import frappe
+from frappe.utils import cstr, strip_html
+
+
+def execute():
+ for doctype in ("Lead", "Prospect", "Opportunity"):
+ if not frappe.db.has_column(doctype, "notes"):
+ continue
+
+ dt = frappe.qb.DocType(doctype)
+ records = (
+ frappe.qb.from_(dt)
+ .select(dt.name, dt.notes, dt.modified_by, dt.modified)
+ .where(dt.notes.isnotnull() & dt.notes != "")
+ ).run()
+
+ for d in records:
+ if strip_html(cstr(d.notes)).strip():
+ doc = frappe.get_doc(doctype, d.name)
+ doc.append("notes", {"note": d.notes, "added_by": d.modified_by, "added_on": d.modified})
+ doc.update_child_table("notes")
+
+ frappe.db.sql_ddl(f"alter table `tab{doctype}` drop column `notes`")
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
frappe__erpnext-29182@ca17c72
|
frappe/erpnext
|
Python
| 29,182
|
fix: get project from PO into payment entry
|
Fix for [Issue 16662]( https://github.com/frappe/erpnext/issues/16662)
Now, creating payments for Purchase orders will pull the project from the line items.

`no-docs`
|
2022-01-07T04:57:19Z
|
Payment entry from Purchase Invoice does NOT catch Project from PO line
I want to check profitability on a Project basis and encountered a problem with payment entries created from a Purchase Invoice.
The item in my PO is linked to a certain project
### expected behavior
when I use "Make Payment" from that PO, the Project which is connected to the item lines in the PO should be fetched in the payment entry automatically
### actual behavior
the connected Project is **not** fetched automatically in the PE and has to be selected manually
|
Thanks for reporting this @vrms
|
[
{
"body": "I want to check profitability on Project base and encounter one problem with payment entries created from a Purchase Invoice.\r\n\r\nThe item in my PO is linked to a certain project\r\n\r\n### expected behavior \r\nwhen I use \"Make Payment\" from that PO, the Project which is connected to the item lines in the PO should be fetched in the payment entry automatically\r\n\r\n### actual behavior\r\nthe connected Project is **not** fetched automatically in the PE and has to be selected manually",
"number": 16662,
"title": "Payment entry from Purchase Invoice does NOT catch Project from PO line"
}
] |
3438e1f0c48f57fe747be3324574374d3d2e1364
|
{
"head_commit": "ca17c7226ce343e63cf929023babee425e3da3c5",
"head_commit_message": "fix: get project from PO into payment entry",
"patch_to_review": "diff --git a/erpnext/accounts/doctype/payment_entry/payment_entry.py b/erpnext/accounts/doctype/payment_entry/payment_entry.py\nindex c1b056b9c704..18f35722c0fb 100644\n--- a/erpnext/accounts/doctype/payment_entry/payment_entry.py\n+++ b/erpnext/accounts/doctype/payment_entry/payment_entry.py\n@@ -3,6 +3,7 @@\n \n \n import json\n+from functools import reduce\n \n import frappe\n from frappe import ValidationError, _, scrub, throw\n@@ -1522,6 +1523,8 @@ def get_payment_entry(dt, dn, party_amount=None, bank_account=None, bank_amount=\n \tpe.paid_amount = paid_amount\n \tpe.received_amount = received_amount\n \tpe.letter_head = doc.get(\"letter_head\")\n+\tif dt == 'Purchase Order':\n+\t\tpe.project = reduce(lambda prev,cur: prev or cur, [x.project for x in doc.get('items')], None) # get first non-empty project from items\n \n \tif pe.party_type in [\"Customer\", \"Supplier\"]:\n \t\tbank_account = get_party_bank_account(pe.party_type, pe.party)\n"
}
|
[
{
"diff_hunk": "@@ -1522,6 +1523,8 @@ def get_payment_entry(dt, dn, party_amount=None, bank_account=None, bank_amount=\n \tpe.paid_amount = paid_amount\n \tpe.received_amount = received_amount\n \tpe.letter_head = doc.get(\"letter_head\")\n+\tif dt == 'Purchase Order':",
"line": null,
"original_line": 1526,
"original_start_line": null,
"path": "erpnext/accounts/doctype/payment_entry/payment_entry.py",
"start_line": null,
"text": "@user1:\nThis will also be applicable for other transactions like \"Sales Order\", \"Purchase Receipt\", \"Delivery Note\", \"Sales Invoice\" and \"Purchase Invoice\". Can we make this more generic to handle for all doctypes?\n\n@author:\nWe can avoid 'Purchase Receipt' and 'Delivery Note', as there are no process available to create Payment Entry from them."
}
] |
09172002e729a2bbee2362f3132d0cbd688c6b20
|
diff --git a/erpnext/accounts/doctype/payment_entry/payment_entry.py b/erpnext/accounts/doctype/payment_entry/payment_entry.py
index 0e07abd7255c..02a144d3e79f 100644
--- a/erpnext/accounts/doctype/payment_entry/payment_entry.py
+++ b/erpnext/accounts/doctype/payment_entry/payment_entry.py
@@ -3,6 +3,7 @@
import json
+from functools import reduce
import frappe
from frappe import ValidationError, _, scrub, throw
@@ -1523,6 +1524,10 @@ def get_payment_entry(dt, dn, party_amount=None, bank_account=None, bank_amount=
pe.received_amount = received_amount
pe.letter_head = doc.get("letter_head")
+ if dt in ['Purchase Order', 'Sales Order', 'Sales Invoice', 'Purchase Invoice']:
+ pe.project = (doc.get('project') or
+ reduce(lambda prev,cur: prev or cur, [x.get('project') for x in doc.get('items')], None)) # get first non-empty project from items
+
if pe.party_type in ["Customer", "Supplier"]:
bank_account = get_party_bank_account(pe.party_type, pe.party)
pe.set("bank_account", bank_account)
|
{
"difficulty": "low",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
frappe__erpnext-28818@44cbd23
|
frappe/erpnext
|
Python
| 28,818
|
fix: no module named 'redisearch'
|
This PR aims to fix the module not found error while installing ERPNext.
Objectives:
- [x] Rename `erpnext/e_commerce/redisearch.py`
- [x] Rename its references
Closes #28730
|
2021-12-10T11:18:22Z
|
Error: No module named 'redisearch' while installing ERPNext.
**Issue:**
Ran `bench get-app erpnext` and then `bench install-app erpnext`
Cannot install ERPNext.
```
Installing erpnext...
Updating DocTypes for erpnext : [=================== ] 99%
An error occurred while installing erpnext:
Module import failed for Website Item (erpnext.e_commerce.doctype.website_item.website_item
Error: No module named 'redisearch')
```
Ran `bench setup requirements` and tried again but to no avail.
Tried installing redisearch via pip(and pip3) in the venv but no effect.
```
frappe@ubuntu:~/frappe-bench$ ./env/bin/pip install redisearch
Requirement already satisfied: redisearch in ./env/lib/python3.7/site-packages (2.0.0)
Requirement already satisfied: redis>=2.10 in ./env/lib/python3.7/site-packages (from redisearch) (3.5.3)
Requirement already satisfied: hiredis>=0.2.0 in ./env/lib/python3.7/site-packages (from redisearch) (2.0.0)
Requirement already satisfied: six>=1.10.0 in ./env/lib/python3.7/site-packages (from redisearch) (1.15.0)
Requirement already satisfied: rmtest>=0.2 in ./env/lib/python3.7/site-packages (from redisearch) (0.7.0)
```
Error log when I try to run the file:
```
frappe@ubuntu:~/frappe-bench$ ./env/bin/python ./apps/erpnext/erpnext/e_commerce/redisearch.py
Traceback (most recent call last):
File "/home/frappe/frappe-bench/apps/erpnext/erpnext/e_commerce/redisearch.py", line 6, in <module>
from redisearch import AutoCompleter, Client, IndexDefinition, Suggestion, TagField, TextField
File "/home/frappe/frappe-bench/apps/erpnext/erpnext/e_commerce/redisearch.py", line 6, in <module>
from redisearch import AutoCompleter, Client, IndexDefinition, Suggestion, TagField, TextField
ImportError: cannot import name 'AutoCompleter' from 'redisearch'
(/home/frappe/frappe-bench/apps/erpnext/erpnext/e_commerce/redisearch.py)
```
At the same time, from inside venv, I can import those without any issues.
```
(env) frappe@ubuntu:~/frappe-bench$ python3
Python 3.7.5 (default, Feb 23 2021, 13:22:40)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import redisearch
>>> from redisearch import AutoCompleter, Client, IndexDefinition, Suggestion, TagField, TextField
>>> exit()
```
**Solution:**
I suspect naming issues. The file is also named redisearch and the package is also named redisearch.
As suspected, the issue was with the name. I renamed the file to `rsearch.py` and tried.
```
frappe@ubuntu:~/frappe-bench$ source env/bin/activate
(env) frappe@ubuntu:~/frappe-bench$ python3 ./apps/erpnext/erpnext/e_commerce/rsearch.py
(env) frappe@ubuntu:~/frappe-bench$
```
Runs without any issues.
Please look into this @marination @NagariaHussain @ankush
|
Very similar to https://github.com/frappe/frappe/pull/14093
Probably just need to rename the file and fix all the references.
@rtdany10 interested in fixing this? :grinning:
> @rtdany10 interested in fixing this? grinning
Haha, I was planning to send a fix right away, but [github doesn't tell me where all this file/module is referenced](https://github.com/frappe/erpnext/search?q=redisearch) and I have no idea where all this file is imported or used. I shall open a PR and try rename wherever I find it, but might need help from you guys. :)
github is only showing results from `develop` branch. Since this affects v13 only you'll have to search it locally :P
I think the usage of it is quite limited. Just used in e-commerce search.
> github is only showing results from `develop` branch. Since this affects v13 only you'll have to search it locally :P
>
> I think the usage of it is quite limited. Just used in e-commerce search.
In that case, you can unassign @aaronmenezes
I shall send a PR shortly :smile:
|
[
{
"body": "**Issue:**\r\nRan `bench get-app erpnext` and then `bench install-app erpnext`\r\nCannot install ERPNext.\r\n```\r\nInstalling erpnext...\r\nUpdating DocTypes for erpnext : [=================== ] 99%\r\nAn error occurred while installing erpnext:\r\nModule import failed for Website Item (erpnext.e_commerce.doctype.website_item.website_item \r\nError: No module named 'redisearch')\r\n```\r\n\r\nRan `bench setup requirements` and tried again but to no avail.\r\nTried installing redisearch via pip(and pip3) in the venv but no effect.\r\n```\r\nfrappe@ubuntu:~/frappe-bench$ ./env/bin/pip install redisearch\r\nRequirement already satisfied: redisearch in ./env/lib/python3.7/site-packages (2.0.0)\r\nRequirement already satisfied: redis>=2.10 in ./env/lib/python3.7/site-packages (from redisearch) (3.5.3)\r\nRequirement already satisfied: hiredis>=0.2.0 in ./env/lib/python3.7/site-packages (from redisearch) (2.0.0)\r\nRequirement already satisfied: six>=1.10.0 in ./env/lib/python3.7/site-packages (from redisearch) (1.15.0)\r\nRequirement already satisfied: rmtest>=0.2 in ./env/lib/python3.7/site-packages (from redisearch) (0.7.0)\r\n```\r\n\r\nError log when I try to run the file:\r\n```\r\nfrappe@ubuntu:~/frappe-bench$ ./env/bin/python ./apps/erpnext/erpnext/e_commerce/redisearch.py\r\nTraceback (most recent call last):\r\n File \"/home/frappe/frappe-bench/apps/erpnext/erpnext/e_commerce/redisearch.py\", line 6, in <module>\r\n from redisearch import AutoCompleter, Client, IndexDefinition, Suggestion, TagField, TextField\r\n File \"/home/frappe/frappe-bench/apps/erpnext/erpnext/e_commerce/redisearch.py\", line 6, in <module>\r\n from redisearch import AutoCompleter, Client, IndexDefinition, Suggestion, TagField, TextField\r\nImportError: cannot import name 'AutoCompleter' from 'redisearch' \r\n(/home/frappe/frappe-bench/apps/erpnext/erpnext/e_commerce/redisearch.py)\r\n```\r\nAt the same time, from inside venv, I can import those without any issues.\r\n```\r\n(env) frappe@ubuntu:~/frappe-bench$ python3\r\nPython 3.7.5 (default, Feb 23 2021, 13:22:40) \r\n[GCC 8.4.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import redisearch\r\n>>> from redisearch import AutoCompleter, Client, IndexDefinition, Suggestion, TagField, TextField\r\n>>> exit()\r\n```\r\n\r\n**Solution:**\r\nI suspect naming issues. The file is also named redisearch and the package is also named redisearch.\r\n\r\nAs suspected, the issue was with the name. I renamed the file to `rsearch.py` and tried.\r\n```\r\nfrappe@ubuntu:~/frappe-bench$ source env/bin/activate\r\n(env) frappe@ubuntu:~/frappe-bench$ python3 ./apps/erpnext/erpnext/e_commerce/rsearch.py\r\n(env) frappe@ubuntu:~/frappe-bench$ \r\n```\r\n\r\nRuns without any issues.\r\nPlease look into this @marination @NagariaHussain @ankush ",
"number": 28730,
"title": "Error: No module named 'redisearch' while installing ERPNext."
}
] |
d2fac1492292cb7b1f34eb1fa79dbe258766f2d6
|
{
"head_commit": "44cbd2341e04346629b0600cf1c2f419407b75ec",
"head_commit_message": "fix: rename ref in product_search.py",
"patch_to_review": "diff --git a/erpnext/e_commerce/doctype/e_commerce_settings/e_commerce_settings.py b/erpnext/e_commerce/doctype/e_commerce_settings/e_commerce_settings.py\nindex 1b0480bbf0de..4c4836e14c0b 100644\n--- a/erpnext/e_commerce/doctype/e_commerce_settings/e_commerce_settings.py\n+++ b/erpnext/e_commerce/doctype/e_commerce_settings/e_commerce_settings.py\n@@ -6,7 +6,7 @@\n from frappe.model.document import Document\n from frappe.utils import comma_and, flt, unique\n \n-from erpnext.e_commerce.redisearch import (\n+from erpnext.e_commerce.rsearch import (\n \tcreate_website_items_index,\n \tget_indexable_web_fields,\n \tis_search_module_loaded,\ndiff --git a/erpnext/e_commerce/doctype/website_item/website_item.py b/erpnext/e_commerce/doctype/website_item/website_item.py\nindex b4d06b3542fe..85d9333d3764 100644\n--- a/erpnext/e_commerce/doctype/website_item/website_item.py\n+++ b/erpnext/e_commerce/doctype/website_item/website_item.py\n@@ -11,7 +11,7 @@\n from frappe.website.website_generator import WebsiteGenerator\n \n from erpnext.e_commerce.doctype.item_review.item_review import get_item_reviews\n-from erpnext.e_commerce.redisearch import (\n+from erpnext.e_commerce.rsearch import (\n \tdelete_item_from_index,\n \tinsert_item_to_index,\n \tupdate_index_for_item,\ndiff --git a/erpnext/e_commerce/redisearch.py b/erpnext/e_commerce/rsearch.py\nsimilarity index 100%\nrename from erpnext/e_commerce/redisearch.py\nrename to erpnext/e_commerce/rsearch.py\ndiff --git a/erpnext/templates/pages/product_search.py b/erpnext/templates/pages/product_search.py\nindex 99ad648e5308..0229dc6545ce 100644\n--- a/erpnext/templates/pages/product_search.py\n+++ b/erpnext/templates/pages/product_search.py\n@@ -5,7 +5,7 @@\n from frappe.utils import cint, cstr\n from redisearch import AutoCompleter, Client, Query\n \n-from erpnext.e_commerce.redisearch import (\n+from erpnext.e_commerce.rsearch import (\n \tWEBSITE_ITEM_CATEGORY_AUTOCOMPLETE,\n \tWEBSITE_ITEM_INDEX,\n \tWEBSITE_ITEM_NAME_AUTOCOMPLETE,\n"
}
|
[
{
"diff_hunk": "@@ -6,7 +6,7 @@\n from frappe.model.document import Document\n from frappe.utils import comma_and, flt, unique\n \n-from erpnext.e_commerce.redisearch import (\n+from erpnext.e_commerce.rsearch import (",
"line": null,
"original_line": 9,
"original_start_line": null,
"path": "erpnext/e_commerce/doctype/e_commerce_settings/e_commerce_settings.py",
"start_line": null,
"text": "@user1:\n'redisearch_utils' or would be a better name for this IMO. \n\nIt will make it clearer without opening the file what this file exactly contains. \n\n@user2:\nagree with @user1 it's also consistent with other util files 👍 \n\n@author:\nAgreed, have done that :smile:"
}
] |
af2cf67f0882ad074c870b0bd6145b2b844e3630
|
diff --git a/erpnext/e_commerce/doctype/e_commerce_settings/e_commerce_settings.py b/erpnext/e_commerce/doctype/e_commerce_settings/e_commerce_settings.py
index 1b0480bbf0de..1110eb1accd7 100644
--- a/erpnext/e_commerce/doctype/e_commerce_settings/e_commerce_settings.py
+++ b/erpnext/e_commerce/doctype/e_commerce_settings/e_commerce_settings.py
@@ -6,7 +6,7 @@
from frappe.model.document import Document
from frappe.utils import comma_and, flt, unique
-from erpnext.e_commerce.redisearch import (
+from erpnext.e_commerce.redisearch_utils import (
create_website_items_index,
get_indexable_web_fields,
is_search_module_loaded,
diff --git a/erpnext/e_commerce/doctype/website_item/website_item.py b/erpnext/e_commerce/doctype/website_item/website_item.py
index b4d06b3542fe..2e60dfd94599 100644
--- a/erpnext/e_commerce/doctype/website_item/website_item.py
+++ b/erpnext/e_commerce/doctype/website_item/website_item.py
@@ -11,7 +11,7 @@
from frappe.website.website_generator import WebsiteGenerator
from erpnext.e_commerce.doctype.item_review.item_review import get_item_reviews
-from erpnext.e_commerce.redisearch import (
+from erpnext.e_commerce.redisearch_utils import (
delete_item_from_index,
insert_item_to_index,
update_index_for_item,
diff --git a/erpnext/e_commerce/redisearch.py b/erpnext/e_commerce/redisearch_utils.py
similarity index 100%
rename from erpnext/e_commerce/redisearch.py
rename to erpnext/e_commerce/redisearch_utils.py
diff --git a/erpnext/templates/pages/product_search.py b/erpnext/templates/pages/product_search.py
index 99ad648e5308..a2351a718043 100644
--- a/erpnext/templates/pages/product_search.py
+++ b/erpnext/templates/pages/product_search.py
@@ -5,7 +5,7 @@
from frappe.utils import cint, cstr
from redisearch import AutoCompleter, Client, Query
-from erpnext.e_commerce.redisearch import (
+from erpnext.e_commerce.redisearch_utils import (
WEBSITE_ITEM_CATEGORY_AUTOCOMPLETE,
WEBSITE_ITEM_INDEX,
WEBSITE_ITEM_NAME_AUTOCOMPLETE,
|
{
"difficulty": "low",
"estimated_review_effort": 2,
"problem_domain": "Bug Fixes"
}
|
frappe__erpnext-33005@404de1e
|
frappe/erpnext
|
Python
| 33,005
|
feat(pos): multiple item prices
|
- [x] show uom with product price
- [x] multiple item (variant) depending on uom
|
2022-11-17T07:05:57Z
|
Barcode wise UOM not working in Point of Sale
### Information about bug
I have created 2 barcodes of one item in Item Barcode with 2 different UOM i.e., Nos, and Box.
The Barcode UOM appears fine when scanning the item barcode while creating a sales invoice, sales order, or stock entry; however, when scanning the barcode in Point of Sale it does not show the UOM of the scanned barcode. It always shows stock_uom irrespective of the other UOM mentioned in the Item Barcode table.
### Module
selling
### Version
ERPNext: v14.5.1 (HEAD)
Frappe Framework: v14.14.2 (HEAD)
### Installation method
FrappeCloud
### Relevant log output / Stack trace / Full Error Message.
```shell
There is no error but Item Barcode shows incorrect UOM.
```
|
[
{
"body": "### Information about bug\n\nI have created 2 barcodes of one item in Item Barcode with 2 different UOM i.e., Nos, and Box. \r\n\r\nThe Barcode UOM is appearing fine if we scan the item barcode in a sales invoice, sales order, or stock entry is created; however, while scanning the barcode in Point of Sale it is not showing UOM as per scanned barcode. It always shows stock_uom irrespective of other UOM mentioned in the Item Barcode table.\n\n### Module\n\nselling\n\n### Version\n\nERPNext: v14.5.1 (HEAD)\r\nFrappe Framework: v14.14.2 (HEAD)\r\n\n\n### Installation method\n\nFrappeCloud\n\n### Relevant log output / Stack trace / Full Error Message.\n\n```shell\nThere is no error but Item Barcode shows incorrect UOM.\n```\n",
"number": 32922,
"title": "Barcode wise UOM not working in Point of Sale"
}
] |
3598bcc9a85b79bf1db33f79982023cb658324ca
|
{
"head_commit": "404de1e65ac3f90201e537ef86c6de342a6c4b71",
"head_commit_message": "chore: format\n\nSigned-off-by: Sabu Siyad <[email protected]>",
"patch_to_review": "diff --git a/erpnext/selling/page/point_of_sale/point_of_sale.py b/erpnext/selling/page/point_of_sale/point_of_sale.py\nindex 999ddc23f0c3..20017a5b8309 100644\n--- a/erpnext/selling/page/point_of_sale/point_of_sale.py\n+++ b/erpnext/selling/page/point_of_sale/point_of_sale.py\n@@ -17,45 +17,79 @@\n def search_by_term(search_term, warehouse, price_list):\n \tresult = search_for_serial_or_batch_or_barcode_number(search_term) or {}\n \n-\titem_code = result.get(\"item_code\") or search_term\n-\tserial_no = result.get(\"serial_no\") or \"\"\n-\tbatch_no = result.get(\"batch_no\") or \"\"\n-\tbarcode = result.get(\"barcode\") or \"\"\n-\n-\tif result:\n-\t\titem_info = frappe.db.get_value(\n-\t\t\t\"Item\",\n-\t\t\titem_code,\n-\t\t\t[\n-\t\t\t\t\"name as item_code\",\n-\t\t\t\t\"item_name\",\n-\t\t\t\t\"description\",\n-\t\t\t\t\"stock_uom\",\n-\t\t\t\t\"image as item_image\",\n-\t\t\t\t\"is_stock_item\",\n-\t\t\t],\n-\t\t\tas_dict=1,\n-\t\t)\n+\titem_code = result.get(\"item_code\", search_term)\n+\tserial_no = result.get(\"serial_no\", \"\")\n+\tbatch_no = result.get(\"batch_no\", \"\")\n+\tbarcode = result.get(\"barcode\", \"\")\n+\n+\tif not result:\n+\t\treturn\n+\n+\titem_doc = frappe.get_doc(\"Item\", item_code)\n+\n+\tif not item_doc:\n+\t\treturn\n+\n+\titem = {\n+\t\t\"barcode\": barcode,\n+\t\t\"batch_no\": batch_no,\n+\t\t\"description\": item_doc.description,\n+\t\t\"is_stock_item\": item_doc.is_stock_item,\n+\t\t\"item_code\": item_doc.name,\n+\t\t\"item_image\": item_doc.image,\n+\t\t\"item_name\": item_doc.item_name,\n+\t\t\"serial_no\": serial_no,\n+\t\t\"stock_uom\": item_doc.stock_uom,\n+\t\t\"uom\": item_doc.stock_uom,\n+\t}\n+\n+\tif barcode:\n+\t\tbarcode_info = next(filter(lambda x: x.barcode == barcode, item_doc.get(\"barcodes\", [])), None)\n+\t\tif barcode_info and barcode_info.uom:\n+\t\t\tuom = next(filter(lambda x: x.uom == barcode_info.uom, item_doc.uoms), {})\n+\t\t\titem.update(\n+\t\t\t\t{\n+\t\t\t\t\t\"uom\": barcode_info.uom,\n+\t\t\t\t\t\"conversion_factor\": uom.get(\"conversion_factor\", 1),\n+\t\t\t\t}\n+\t\t\t)\n \n-\t\titem_stock_qty, is_stock_item = get_stock_availability(item_code, warehouse)\n-\t\tprice_list_rate, currency = frappe.db.get_value(\n-\t\t\t\"Item Price\",\n-\t\t\t{\"price_list\": price_list, \"item_code\": item_code},\n-\t\t\t[\"price_list_rate\", \"currency\"],\n-\t\t) or [None, None]\n+\titem_stock_qty, _ = get_stock_availability(item_code, warehouse)\n+\titem_stock_qty = item_stock_qty // item.get(\"conversion_factor\")\n+\titem.update({\"actual_qty\": item_stock_qty})\n+\n+\tprice = frappe.get_list(\n+\t\tdoctype=\"Item Price\",\n+\t\tfilters={\n+\t\t\t\"price_list\": price_list,\n+\t\t\t\"item_code\": item_code,\n+\t\t},\n+\t\tfields=\"*\",\n+\t)\n \n-\t\titem_info.update(\n+\tdef __sort(p):\n+\t\tp_uom = p.get(\"uom\")\n+\n+\t\tif p_uom == item.get(\"uom\"):\n+\t\t\treturn 0\n+\t\telif p_uom == item.get(\"stock_uom\"):\n+\t\t\treturn 1\n+\t\telse:\n+\t\t\treturn 2\n+\n+\t# sort by fallback preference. 
always pick exact uom match if available\n+\tprice = sorted(price, key=__sort)\n+\n+\tif len(price) > 0:\n+\t\tp = price.pop(0)\n+\t\titem.update(\n \t\t\t{\n-\t\t\t\t\"serial_no\": serial_no,\n-\t\t\t\t\"batch_no\": batch_no,\n-\t\t\t\t\"barcode\": barcode,\n-\t\t\t\t\"price_list_rate\": price_list_rate,\n-\t\t\t\t\"currency\": currency,\n-\t\t\t\t\"actual_qty\": item_stock_qty,\n+\t\t\t\t\"currency\": p.get(\"currency\"),\n+\t\t\t\t\"price_list_rate\": p.get(\"price_list_rate\"),\n \t\t\t}\n \t\t)\n \n-\t\treturn {\"items\": [item_info]}\n+\treturn {\"items\": [item]}\n \n \n @frappe.whitelist()\n@@ -121,33 +155,43 @@ def get_items(start, page_length, price_list, item_group, pos_profile, search_te\n \t\tas_dict=1,\n \t)\n \n-\tif items_data:\n-\t\titems = [d.item_code for d in items_data]\n-\t\titem_prices_data = frappe.get_all(\n+\t# return (empty) list if there are no results\n+\tif not items_data:\n+\t\treturn result\n+\n+\tfor item in items_data:\n+\t\tuoms = frappe.get_doc(\"Item\", item.item_code).get(\"uoms\", [])\n+\n+\t\titem.actual_qty, _ = get_stock_availability(item.item_code, warehouse)\n+\t\titem.uom = item.stock_uom\n+\n+\t\titem_price = frappe.get_all(\n \t\t\t\"Item Price\",\n-\t\t\tfields=[\"item_code\", \"price_list_rate\", \"currency\"],\n-\t\t\tfilters={\"price_list\": price_list, \"item_code\": [\"in\", items]},\n+\t\t\tfields=[\"price_list_rate\", \"currency\", \"uom\"],\n+\t\t\tfilters={\n+\t\t\t\t\"price_list\": price_list,\n+\t\t\t\t\"item_code\": item.item_code,\n+\t\t\t\t\"selling\": True,\n+\t\t\t},\n \t\t)\n \n-\t\titem_prices = {}\n-\t\tfor d in item_prices_data:\n-\t\t\titem_prices[d.item_code] = d\n+\t\tif not item_price:\n+\t\t\tresult.append(item)\n+\n+\t\tfor price in item_price:\n+\t\t\tuom = next(filter(lambda x: x.uom == price.uom, uoms), {})\n \n-\t\tfor item in items_data:\n-\t\t\titem_code = item.item_code\n-\t\t\titem_price = item_prices.get(item_code) or {}\n-\t\t\titem_stock_qty, is_stock_item = get_stock_availability(item_code, warehouse)\n+\t\t\tif price.uom != item.stock_uom and uom and uom.conversion_factor:\n+\t\t\t\titem.actual_qty = item.actual_qty // uom.conversion_factor\n \n-\t\t\trow = {}\n-\t\t\trow.update(item)\n-\t\t\trow.update(\n+\t\t\tresult.append(\n \t\t\t\t{\n-\t\t\t\t\t\"price_list_rate\": item_price.get(\"price_list_rate\"),\n-\t\t\t\t\t\"currency\": item_price.get(\"currency\"),\n-\t\t\t\t\t\"actual_qty\": item_stock_qty,\n+\t\t\t\t\t**item,\n+\t\t\t\t\t\"price_list_rate\": price.get(\"price_list_rate\"),\n+\t\t\t\t\t\"currency\": price.get(\"currency\"),\n+\t\t\t\t\t\"uom\": price.uom or item.uom,\n \t\t\t\t}\n \t\t\t)\n-\t\t\tresult.append(row)\n \n \treturn {\"items\": result}\n \ndiff --git a/erpnext/selling/page/point_of_sale/pos_controller.js b/erpnext/selling/page/point_of_sale/pos_controller.js\nindex 595b9196e848..c442774d0f78 100644\n--- a/erpnext/selling/page/point_of_sale/pos_controller.js\n+++ b/erpnext/selling/page/point_of_sale/pos_controller.js\n@@ -542,12 +542,12 @@ erpnext.PointOfSale.Controller = class {\n \t\t\t\tif (!this.frm.doc.customer)\n \t\t\t\t\treturn this.raise_customer_selection_alert();\n \n-\t\t\t\tconst { item_code, batch_no, serial_no, rate } = item;\n+\t\t\t\tconst { item_code, batch_no, serial_no, rate, uom } = item;\n \n \t\t\t\tif (!item_code)\n \t\t\t\t\treturn;\n \n-\t\t\t\tconst new_item = { item_code, batch_no, rate, [field]: value };\n+\t\t\t\tconst new_item = { item_code, batch_no, rate, uom, [field]: value };\n \n \t\t\t\tif (serial_no) {\n \t\t\t\t\tawait 
this.check_serial_no_availablilty(item_code, this.frm.doc.set_warehouse, serial_no);\n@@ -649,6 +649,7 @@ erpnext.PointOfSale.Controller = class {\n \t\tconst is_stock_item = resp[1];\n \n \t\tfrappe.dom.unfreeze();\n+\t\tconst bold_uom = item_row.stock_uom.bold();\n \t\tconst bold_item_code = item_row.item_code.bold();\n \t\tconst bold_warehouse = warehouse.bold();\n \t\tconst bold_available_qty = available_qty.toString().bold()\n@@ -664,7 +665,7 @@ erpnext.PointOfSale.Controller = class {\n \t\t\t}\n \t\t} else if (is_stock_item && available_qty < qty_needed) {\n \t\t\tfrappe.throw({\n-\t\t\t\tmessage: __('Stock quantity not enough for Item Code: {0} under warehouse {1}. Available quantity {2}.', [bold_item_code, bold_warehouse, bold_available_qty]),\n+\t\t\t\tmessage: __('Stock quantity not enough for Item Code: {0} under warehouse {1}. Available quantity {2} {3}.', [bold_item_code, bold_warehouse, bold_available_qty, bold_uom]),\n \t\t\t\tindicator: 'orange'\n \t\t\t});\n \t\t\tfrappe.utils.play_sound(\"error\");\ndiff --git a/erpnext/selling/page/point_of_sale/pos_item_cart.js b/erpnext/selling/page/point_of_sale/pos_item_cart.js\nindex e7dd211c0f47..12cc629776cc 100644\n--- a/erpnext/selling/page/point_of_sale/pos_item_cart.js\n+++ b/erpnext/selling/page/point_of_sale/pos_item_cart.js\n@@ -609,7 +609,7 @@ erpnext.PointOfSale.ItemCart = class {\n \t\t\tif (item_data.rate && item_data.amount && item_data.rate !== item_data.amount) {\n \t\t\t\treturn `\n \t\t\t\t\t<div class=\"item-qty-rate\">\n-\t\t\t\t\t\t<div class=\"item-qty\"><span>${item_data.qty || 0}</span></div>\n+\t\t\t\t\t\t<div class=\"item-qty\"><span>${item_data.qty || 0} ${item_data.uom}</span></div>\n \t\t\t\t\t\t<div class=\"item-rate-amount\">\n \t\t\t\t\t\t\t<div class=\"item-rate\">${format_currency(item_data.amount, currency)}</div>\n \t\t\t\t\t\t\t<div class=\"item-amount\">${format_currency(item_data.rate, currency)}</div>\n@@ -618,7 +618,7 @@ erpnext.PointOfSale.ItemCart = class {\n \t\t\t} else {\n \t\t\t\treturn `\n \t\t\t\t\t<div class=\"item-qty-rate\">\n-\t\t\t\t\t\t<div class=\"item-qty\"><span>${item_data.qty || 0}</span></div>\n+\t\t\t\t\t\t<div class=\"item-qty\"><span>${item_data.qty || 0} ${item_data.uom}</span></div>\n \t\t\t\t\t\t<div class=\"item-rate-amount\">\n \t\t\t\t\t\t\t<div class=\"item-rate\">${format_currency(item_data.rate, currency)}</div>\n \t\t\t\t\t\t</div>\ndiff --git a/erpnext/selling/page/point_of_sale/pos_item_selector.js b/erpnext/selling/page/point_of_sale/pos_item_selector.js\nindex b5eb0489f9d2..ec67bdfd9dd8 100644\n--- a/erpnext/selling/page/point_of_sale/pos_item_selector.js\n+++ b/erpnext/selling/page/point_of_sale/pos_item_selector.js\n@@ -78,7 +78,7 @@ erpnext.PointOfSale.ItemSelector = class {\n \tget_item_html(item) {\n \t\tconst me = this;\n \t\t// eslint-disable-next-line no-unused-vars\n-\t\tconst { item_image, serial_no, batch_no, barcode, actual_qty, stock_uom, price_list_rate } = item;\n+\t\tconst { item_image, serial_no, batch_no, barcode, actual_qty, uom, price_list_rate } = item;\n \t\tconst precision = flt(price_list_rate, 2) % 1 != 0 ? 
2 : 0;\n \t\tlet indicator_color;\n \t\tlet qty_to_display = actual_qty;\n@@ -118,7 +118,7 @@ erpnext.PointOfSale.ItemSelector = class {\n \t\treturn (\n \t\t\t`<div class=\"item-wrapper\"\n \t\t\t\tdata-item-code=\"${escape(item.item_code)}\" data-serial-no=\"${escape(serial_no)}\"\n-\t\t\t\tdata-batch-no=\"${escape(batch_no)}\" data-uom=\"${escape(stock_uom)}\"\n+\t\t\t\tdata-batch-no=\"${escape(batch_no)}\" data-uom=\"${escape(uom)}\"\n \t\t\t\tdata-rate=\"${escape(price_list_rate || 0)}\"\n \t\t\t\ttitle=\"${item.item_name}\">\n \n@@ -128,7 +128,7 @@ erpnext.PointOfSale.ItemSelector = class {\n \t\t\t\t\t<div class=\"item-name\">\n \t\t\t\t\t\t${frappe.ellipsis(item.item_name, 18)}\n \t\t\t\t\t</div>\n-\t\t\t\t\t<div class=\"item-rate\">${format_currency(price_list_rate, item.currency, precision) || 0}</div>\n+\t\t\t\t\t<div class=\"item-rate\">${format_currency(price_list_rate, item.currency, precision) || 0} / ${uom}</div>\n \t\t\t\t</div>\n \t\t\t</div>`\n \t\t);\ndiff --git a/erpnext/selling/page/point_of_sale/pos_past_order_summary.js b/erpnext/selling/page/point_of_sale/pos_past_order_summary.js\nindex 40165c3484fa..be75bd64cfd7 100644\n--- a/erpnext/selling/page/point_of_sale/pos_past_order_summary.js\n+++ b/erpnext/selling/page/point_of_sale/pos_past_order_summary.js\n@@ -94,7 +94,7 @@ erpnext.PointOfSale.PastOrderSummary = class {\n \tget_item_html(doc, item_data) {\n \t\treturn `<div class=\"item-row-wrapper\">\n \t\t\t\t\t<div class=\"item-name\">${item_data.item_name}</div>\n-\t\t\t\t\t<div class=\"item-qty\">${item_data.qty || 0}</div>\n+\t\t\t\t\t<div class=\"item-qty\">${item_data.qty || 0} ${item_data.uom}</div>\n \t\t\t\t\t<div class=\"item-rate-disc\">${get_rate_discount_html()}</div>\n \t\t\t\t</div>`;\n \n"
}
|
[
{
"diff_hunk": "@@ -17,45 +17,79 @@\n def search_by_term(search_term, warehouse, price_list):\n \tresult = search_for_serial_or_batch_or_barcode_number(search_term) or {}\n \n-\titem_code = result.get(\"item_code\") or search_term\n-\tserial_no = result.get(\"serial_no\") or \"\"\n-\tbatch_no = result.get(\"batch_no\") or \"\"\n-\tbarcode = result.get(\"barcode\") or \"\"\n-\n-\tif result:\n-\t\titem_info = frappe.db.get_value(\n-\t\t\t\"Item\",\n-\t\t\titem_code,\n-\t\t\t[\n-\t\t\t\t\"name as item_code\",\n-\t\t\t\t\"item_name\",\n-\t\t\t\t\"description\",\n-\t\t\t\t\"stock_uom\",\n-\t\t\t\t\"image as item_image\",\n-\t\t\t\t\"is_stock_item\",\n-\t\t\t],\n-\t\t\tas_dict=1,\n-\t\t)\n+\titem_code = result.get(\"item_code\", search_term)\n+\tserial_no = result.get(\"serial_no\", \"\")\n+\tbatch_no = result.get(\"batch_no\", \"\")\n+\tbarcode = result.get(\"barcode\", \"\")\n+\n+\tif not result:\n+\t\treturn\n+\n+\titem_doc = frappe.get_doc(\"Item\", item_code)\n+\n+\tif not item_doc:\n+\t\treturn\n+\n+\titem = {\n+\t\t\"barcode\": barcode,\n+\t\t\"batch_no\": batch_no,\n+\t\t\"description\": item_doc.description,\n+\t\t\"is_stock_item\": item_doc.is_stock_item,\n+\t\t\"item_code\": item_doc.name,\n+\t\t\"item_image\": item_doc.image,\n+\t\t\"item_name\": item_doc.item_name,\n+\t\t\"serial_no\": serial_no,\n+\t\t\"stock_uom\": item_doc.stock_uom,\n+\t\t\"uom\": item_doc.stock_uom,\n+\t}\n+\n+\tif barcode:\n+\t\tbarcode_info = next(filter(lambda x: x.barcode == barcode, item_doc.get(\"barcodes\", [])), None)\n+\t\tif barcode_info and barcode_info.uom:\n+\t\t\tuom = next(filter(lambda x: x.uom == barcode_info.uom, item_doc.uoms), {})\n+\t\t\titem.update(\n+\t\t\t\t{\n+\t\t\t\t\t\"uom\": barcode_info.uom,\n+\t\t\t\t\t\"conversion_factor\": uom.get(\"conversion_factor\", 1),\n+\t\t\t\t}\n+\t\t\t)\n \n-\t\titem_stock_qty, is_stock_item = get_stock_availability(item_code, warehouse)\n-\t\tprice_list_rate, currency = frappe.db.get_value(\n-\t\t\t\"Item Price\",\n-\t\t\t{\"price_list\": price_list, \"item_code\": item_code},\n-\t\t\t[\"price_list_rate\", \"currency\"],\n-\t\t) or [None, None]\n+\titem_stock_qty, _ = get_stock_availability(item_code, warehouse)",
"line": null,
"original_line": 57,
"original_start_line": null,
"path": "erpnext/selling/page/point_of_sale/point_of_sale.py",
"start_line": null,
"text": "@user1:\n`_()` is the translation function in most parts of ERPNext. Redefining it here may lead to confusion."
},
{
"diff_hunk": "@@ -17,45 +17,79 @@\n def search_by_term(search_term, warehouse, price_list):\n \tresult = search_for_serial_or_batch_or_barcode_number(search_term) or {}\n \n-\titem_code = result.get(\"item_code\") or search_term\n-\tserial_no = result.get(\"serial_no\") or \"\"\n-\tbatch_no = result.get(\"batch_no\") or \"\"\n-\tbarcode = result.get(\"barcode\") or \"\"\n-\n-\tif result:\n-\t\titem_info = frappe.db.get_value(\n-\t\t\t\"Item\",\n-\t\t\titem_code,\n-\t\t\t[\n-\t\t\t\t\"name as item_code\",\n-\t\t\t\t\"item_name\",\n-\t\t\t\t\"description\",\n-\t\t\t\t\"stock_uom\",\n-\t\t\t\t\"image as item_image\",\n-\t\t\t\t\"is_stock_item\",\n-\t\t\t],\n-\t\t\tas_dict=1,\n-\t\t)\n+\titem_code = result.get(\"item_code\", search_term)\n+\tserial_no = result.get(\"serial_no\", \"\")\n+\tbatch_no = result.get(\"batch_no\", \"\")\n+\tbarcode = result.get(\"barcode\", \"\")\n+\n+\tif not result:\n+\t\treturn\n+\n+\titem_doc = frappe.get_doc(\"Item\", item_code)\n+\n+\tif not item_doc:\n+\t\treturn\n+\n+\titem = {\n+\t\t\"barcode\": barcode,\n+\t\t\"batch_no\": batch_no,\n+\t\t\"description\": item_doc.description,\n+\t\t\"is_stock_item\": item_doc.is_stock_item,\n+\t\t\"item_code\": item_doc.name,\n+\t\t\"item_image\": item_doc.image,\n+\t\t\"item_name\": item_doc.item_name,\n+\t\t\"serial_no\": serial_no,\n+\t\t\"stock_uom\": item_doc.stock_uom,\n+\t\t\"uom\": item_doc.stock_uom,\n+\t}\n+\n+\tif barcode:\n+\t\tbarcode_info = next(filter(lambda x: x.barcode == barcode, item_doc.get(\"barcodes\", [])), None)\n+\t\tif barcode_info and barcode_info.uom:\n+\t\t\tuom = next(filter(lambda x: x.uom == barcode_info.uom, item_doc.uoms), {})\n+\t\t\titem.update(\n+\t\t\t\t{\n+\t\t\t\t\t\"uom\": barcode_info.uom,\n+\t\t\t\t\t\"conversion_factor\": uom.get(\"conversion_factor\", 1),\n+\t\t\t\t}\n+\t\t\t)\n \n-\t\titem_stock_qty, is_stock_item = get_stock_availability(item_code, warehouse)\n-\t\tprice_list_rate, currency = frappe.db.get_value(\n-\t\t\t\"Item Price\",\n-\t\t\t{\"price_list\": price_list, \"item_code\": item_code},\n-\t\t\t[\"price_list_rate\", \"currency\"],\n-\t\t) or [None, None]\n+\titem_stock_qty, _ = get_stock_availability(item_code, warehouse)\n+\titem_stock_qty = item_stock_qty // item.get(\"conversion_factor\")\n+\titem.update({\"actual_qty\": item_stock_qty})\n+\n+\tprice = frappe.get_list(\n+\t\tdoctype=\"Item Price\",\n+\t\tfilters={\n+\t\t\t\"price_list\": price_list,\n+\t\t\t\"item_code\": item_code,\n+\t\t},\n+\t\tfields=\"*\",",
"line": null,
"original_line": 67,
"original_start_line": null,
"path": "erpnext/selling/page/point_of_sale/point_of_sale.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n\t\tfields=[\"uom\", \"stock_uom\", \"currency\", \"price_list_rate\"],\r\n```"
}
] |
f9f9cdc3ee2e74aaaffe56d8001faa21dcbfe617
|
diff --git a/erpnext/selling/page/point_of_sale/point_of_sale.py b/erpnext/selling/page/point_of_sale/point_of_sale.py
index 999ddc23f0c3..158ac1d049a5 100644
--- a/erpnext/selling/page/point_of_sale/point_of_sale.py
+++ b/erpnext/selling/page/point_of_sale/point_of_sale.py
@@ -17,45 +17,79 @@
def search_by_term(search_term, warehouse, price_list):
result = search_for_serial_or_batch_or_barcode_number(search_term) or {}
- item_code = result.get("item_code") or search_term
- serial_no = result.get("serial_no") or ""
- batch_no = result.get("batch_no") or ""
- barcode = result.get("barcode") or ""
-
- if result:
- item_info = frappe.db.get_value(
- "Item",
- item_code,
- [
- "name as item_code",
- "item_name",
- "description",
- "stock_uom",
- "image as item_image",
- "is_stock_item",
- ],
- as_dict=1,
- )
+ item_code = result.get("item_code", search_term)
+ serial_no = result.get("serial_no", "")
+ batch_no = result.get("batch_no", "")
+ barcode = result.get("barcode", "")
+
+ if not result:
+ return
+
+ item_doc = frappe.get_doc("Item", item_code)
+
+ if not item_doc:
+ return
+
+ item = {
+ "barcode": barcode,
+ "batch_no": batch_no,
+ "description": item_doc.description,
+ "is_stock_item": item_doc.is_stock_item,
+ "item_code": item_doc.name,
+ "item_image": item_doc.image,
+ "item_name": item_doc.item_name,
+ "serial_no": serial_no,
+ "stock_uom": item_doc.stock_uom,
+ "uom": item_doc.stock_uom,
+ }
+
+ if barcode:
+ barcode_info = next(filter(lambda x: x.barcode == barcode, item_doc.get("barcodes", [])), None)
+ if barcode_info and barcode_info.uom:
+ uom = next(filter(lambda x: x.uom == barcode_info.uom, item_doc.uoms), {})
+ item.update(
+ {
+ "uom": barcode_info.uom,
+ "conversion_factor": uom.get("conversion_factor", 1),
+ }
+ )
- item_stock_qty, is_stock_item = get_stock_availability(item_code, warehouse)
- price_list_rate, currency = frappe.db.get_value(
- "Item Price",
- {"price_list": price_list, "item_code": item_code},
- ["price_list_rate", "currency"],
- ) or [None, None]
+ item_stock_qty, is_stock_item = get_stock_availability(item_code, warehouse)
+ item_stock_qty = item_stock_qty // item.get("conversion_factor")
+ item.update({"actual_qty": item_stock_qty})
+
+ price = frappe.get_list(
+ doctype="Item Price",
+ filters={
+ "price_list": price_list,
+ "item_code": item_code,
+ },
+ fields=["uom", "stock_uom", "currency", "price_list_rate"],
+ )
- item_info.update(
+ def __sort(p):
+ p_uom = p.get("uom")
+
+ if p_uom == item.get("uom"):
+ return 0
+ elif p_uom == item.get("stock_uom"):
+ return 1
+ else:
+ return 2
+
+ # sort by fallback preference. always pick exact uom match if available
+ price = sorted(price, key=__sort)
+
+ if len(price) > 0:
+ p = price.pop(0)
+ item.update(
{
- "serial_no": serial_no,
- "batch_no": batch_no,
- "barcode": barcode,
- "price_list_rate": price_list_rate,
- "currency": currency,
- "actual_qty": item_stock_qty,
+ "currency": p.get("currency"),
+ "price_list_rate": p.get("price_list_rate"),
}
)
- return {"items": [item_info]}
+ return {"items": [item]}
@frappe.whitelist()
@@ -121,33 +155,43 @@ def get_items(start, page_length, price_list, item_group, pos_profile, search_te
as_dict=1,
)
- if items_data:
- items = [d.item_code for d in items_data]
- item_prices_data = frappe.get_all(
+ # return (empty) list if there are no results
+ if not items_data:
+ return result
+
+ for item in items_data:
+ uoms = frappe.get_doc("Item", item.item_code).get("uoms", [])
+
+ item.actual_qty, _ = get_stock_availability(item.item_code, warehouse)
+ item.uom = item.stock_uom
+
+ item_price = frappe.get_all(
"Item Price",
- fields=["item_code", "price_list_rate", "currency"],
- filters={"price_list": price_list, "item_code": ["in", items]},
+ fields=["price_list_rate", "currency", "uom"],
+ filters={
+ "price_list": price_list,
+ "item_code": item.item_code,
+ "selling": True,
+ },
)
- item_prices = {}
- for d in item_prices_data:
- item_prices[d.item_code] = d
+ if not item_price:
+ result.append(item)
+
+ for price in item_price:
+ uom = next(filter(lambda x: x.uom == price.uom, uoms), {})
- for item in items_data:
- item_code = item.item_code
- item_price = item_prices.get(item_code) or {}
- item_stock_qty, is_stock_item = get_stock_availability(item_code, warehouse)
+ if price.uom != item.stock_uom and uom and uom.conversion_factor:
+ item.actual_qty = item.actual_qty // uom.conversion_factor
- row = {}
- row.update(item)
- row.update(
+ result.append(
{
- "price_list_rate": item_price.get("price_list_rate"),
- "currency": item_price.get("currency"),
- "actual_qty": item_stock_qty,
+ **item,
+ "price_list_rate": price.get("price_list_rate"),
+ "currency": price.get("currency"),
+ "uom": price.uom or item.uom,
}
)
- result.append(row)
return {"items": result}
diff --git a/erpnext/selling/page/point_of_sale/pos_controller.js b/erpnext/selling/page/point_of_sale/pos_controller.js
index 595b9196e848..c442774d0f78 100644
--- a/erpnext/selling/page/point_of_sale/pos_controller.js
+++ b/erpnext/selling/page/point_of_sale/pos_controller.js
@@ -542,12 +542,12 @@ erpnext.PointOfSale.Controller = class {
if (!this.frm.doc.customer)
return this.raise_customer_selection_alert();
- const { item_code, batch_no, serial_no, rate } = item;
+ const { item_code, batch_no, serial_no, rate, uom } = item;
if (!item_code)
return;
- const new_item = { item_code, batch_no, rate, [field]: value };
+ const new_item = { item_code, batch_no, rate, uom, [field]: value };
if (serial_no) {
await this.check_serial_no_availablilty(item_code, this.frm.doc.set_warehouse, serial_no);
@@ -649,6 +649,7 @@ erpnext.PointOfSale.Controller = class {
const is_stock_item = resp[1];
frappe.dom.unfreeze();
+ const bold_uom = item_row.stock_uom.bold();
const bold_item_code = item_row.item_code.bold();
const bold_warehouse = warehouse.bold();
const bold_available_qty = available_qty.toString().bold()
@@ -664,7 +665,7 @@ erpnext.PointOfSale.Controller = class {
}
} else if (is_stock_item && available_qty < qty_needed) {
frappe.throw({
- message: __('Stock quantity not enough for Item Code: {0} under warehouse {1}. Available quantity {2}.', [bold_item_code, bold_warehouse, bold_available_qty]),
+ message: __('Stock quantity not enough for Item Code: {0} under warehouse {1}. Available quantity {2} {3}.', [bold_item_code, bold_warehouse, bold_available_qty, bold_uom]),
indicator: 'orange'
});
frappe.utils.play_sound("error");
diff --git a/erpnext/selling/page/point_of_sale/pos_item_cart.js b/erpnext/selling/page/point_of_sale/pos_item_cart.js
index e7dd211c0f47..12cc629776cc 100644
--- a/erpnext/selling/page/point_of_sale/pos_item_cart.js
+++ b/erpnext/selling/page/point_of_sale/pos_item_cart.js
@@ -609,7 +609,7 @@ erpnext.PointOfSale.ItemCart = class {
if (item_data.rate && item_data.amount && item_data.rate !== item_data.amount) {
return `
<div class="item-qty-rate">
- <div class="item-qty"><span>${item_data.qty || 0}</span></div>
+ <div class="item-qty"><span>${item_data.qty || 0} ${item_data.uom}</span></div>
<div class="item-rate-amount">
<div class="item-rate">${format_currency(item_data.amount, currency)}</div>
<div class="item-amount">${format_currency(item_data.rate, currency)}</div>
@@ -618,7 +618,7 @@ erpnext.PointOfSale.ItemCart = class {
} else {
return `
<div class="item-qty-rate">
- <div class="item-qty"><span>${item_data.qty || 0}</span></div>
+ <div class="item-qty"><span>${item_data.qty || 0} ${item_data.uom}</span></div>
<div class="item-rate-amount">
<div class="item-rate">${format_currency(item_data.rate, currency)}</div>
</div>
diff --git a/erpnext/selling/page/point_of_sale/pos_item_selector.js b/erpnext/selling/page/point_of_sale/pos_item_selector.js
index b5eb0489f9d2..ec67bdfd9dd8 100644
--- a/erpnext/selling/page/point_of_sale/pos_item_selector.js
+++ b/erpnext/selling/page/point_of_sale/pos_item_selector.js
@@ -78,7 +78,7 @@ erpnext.PointOfSale.ItemSelector = class {
get_item_html(item) {
const me = this;
// eslint-disable-next-line no-unused-vars
- const { item_image, serial_no, batch_no, barcode, actual_qty, stock_uom, price_list_rate } = item;
+ const { item_image, serial_no, batch_no, barcode, actual_qty, uom, price_list_rate } = item;
const precision = flt(price_list_rate, 2) % 1 != 0 ? 2 : 0;
let indicator_color;
let qty_to_display = actual_qty;
@@ -118,7 +118,7 @@ erpnext.PointOfSale.ItemSelector = class {
return (
`<div class="item-wrapper"
data-item-code="${escape(item.item_code)}" data-serial-no="${escape(serial_no)}"
- data-batch-no="${escape(batch_no)}" data-uom="${escape(stock_uom)}"
+ data-batch-no="${escape(batch_no)}" data-uom="${escape(uom)}"
data-rate="${escape(price_list_rate || 0)}"
title="${item.item_name}">
@@ -128,7 +128,7 @@ erpnext.PointOfSale.ItemSelector = class {
<div class="item-name">
${frappe.ellipsis(item.item_name, 18)}
</div>
- <div class="item-rate">${format_currency(price_list_rate, item.currency, precision) || 0}</div>
+ <div class="item-rate">${format_currency(price_list_rate, item.currency, precision) || 0} / ${uom}</div>
</div>
</div>`
);
diff --git a/erpnext/selling/page/point_of_sale/pos_past_order_summary.js b/erpnext/selling/page/point_of_sale/pos_past_order_summary.js
index 40165c3484fa..be75bd64cfd7 100644
--- a/erpnext/selling/page/point_of_sale/pos_past_order_summary.js
+++ b/erpnext/selling/page/point_of_sale/pos_past_order_summary.js
@@ -94,7 +94,7 @@ erpnext.PointOfSale.PastOrderSummary = class {
get_item_html(doc, item_data) {
return `<div class="item-row-wrapper">
<div class="item-name">${item_data.item_name}</div>
- <div class="item-qty">${item_data.qty || 0}</div>
+ <div class="item-qty">${item_data.qty || 0} ${item_data.uom}</div>
<div class="item-rate-disc">${get_rate_discount_html()}</div>
</div>`;
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
}
|
|
frappe__erpnext-28842@2813e5e
|
frappe/erpnext
|
Python
| 28,842
|
feat: new column 'Time taken to Deliver' in sales order analysis
|
### Current Report
_'Sales Order Analysis'_ helps in quickly identifying the billed amount, pending amount, delay in days, and more. Along with this information, a new column is added
### New Column - 'Time taken to Deliver'
This denotes the number of days taken to completely deliver an item. It helps in identifying bottlenecks and becoming more efficient.
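For illustration only, here is a minimal Python sketch of the same calculation (the helper name and arguments are hypothetical; the report itself computes this in SQL, as the patches further down show). It uses `frappe.utils.date_diff` to get the number of days between the Sales Order date and the last Delivery Note, and returns 0 for partially delivered rows, mirroring the report's guard on `pending_qty`:

```python
# Hypothetical sketch of the metric, not part of the PR; the report does this in SQL.
from frappe.utils import date_diff


def time_taken_to_deliver(transaction_date, last_dn_posting_date, pending_qty):
    """Days from the Sales Order date to the last Delivery Note, for fully delivered rows."""
    if pending_qty:
        # partially delivered orders show 0, matching the report's IF(pending_qty = 0, ...) guard
        return 0
    return date_diff(last_dn_posting_date, transaction_date)
```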
### Image
<img width="1552" alt="time_taken_to_deliver" src="https://user-images.githubusercontent.com/3272205/145781927-0f7b4b9e-621d-4f35-8e13-d60a1ef4d6de.png">
**no-docs**
|
2021-12-13T09:06:34Z
|
Include column for 'Days taken to deliver' in Sales Order Analysis Report
Currently, the Sales Order Analysis reflects the Delay in Days it took to deliver the Ordered Items; they want one more column to reflect how many days it takes to completely deliver a Sales Order, in order to increase efficiency and become more productive.
|
[
{
"body": "Currently, the Sales Order Analysis reflects the Delay in Days took them to deliver the Ordered Items, they want one more column to reflect how many days they take to completely deliver a Sales Order in order to increase the efficiency and get more productive accordingly.",
"number": 28753,
"title": "Include column for 'Days taken to deliver' in Sales Order Analysis Report"
}
] |
98d417602f77700b93f8eccd8d93ca27a5571887
|
{
"head_commit": "2813e5ee2956f4667ecb9e02cb3a824c0fb7a0bd",
"head_commit_message": "feat: new column 'Time taken to Deliver' in sales order analysis",
"patch_to_review": "diff --git a/erpnext/selling/report/sales_order_analysis/sales_order_analysis.py b/erpnext/selling/report/sales_order_analysis/sales_order_analysis.py\nindex 82e5d0ce57d9..f1edca4cef39 100644\n--- a/erpnext/selling/report/sales_order_analysis/sales_order_analysis.py\n+++ b/erpnext/selling/report/sales_order_analysis/sales_order_analysis.py\n@@ -61,6 +61,7 @@ def get_data(conditions, filters):\n \t\t\tIF(so.status in ('Completed','To Bill'), 0, (SELECT delay_days)) as delay,\n \t\t\tsoi.qty, soi.delivered_qty,\n \t\t\t(soi.qty - soi.delivered_qty) AS pending_qty,\n+\t\t\tIF((SELECT pending_qty) = 0, DATEDIFF(Max(dn.posting_date), so.transaction_date), 0) as time_taken_to_deliver,\n \t\t\tIFNULL(SUM(sii.qty), 0) as billed_qty,\n \t\t\tsoi.base_amount as amount,\n \t\t\t(soi.delivered_qty * soi.base_rate) as delivered_qty_amount,\n@@ -70,9 +71,13 @@ def get_data(conditions, filters):\n \t\t\tso.company, soi.name\n \t\tFROM\n \t\t\t`tabSales Order` so,\n-\t\t\t`tabSales Order Item` soi\n+\t\t\t(`tabSales Order Item` soi\n \t\tLEFT JOIN `tabSales Invoice Item` sii\n-\t\t\tON sii.so_detail = soi.name and sii.docstatus = 1\n+\t\t\tON sii.so_detail = soi.name and sii.docstatus = 1)\n+\t\tLEFT JOIN `tabDelivery Note Item` dni\n+\t\t\ton dni.so_detail = soi.name\n+\t\tRIGHT JOIN `tabDelivery Note` dn\n+\t\t\ton dni.parent = dn.name and dn.docstatus = 1\n \t\tWHERE\n \t\t\tsoi.parent = so.name\n \t\t\tand so.status not in ('Stopped', 'Closed', 'On Hold')\n@@ -259,6 +264,12 @@ def get_columns(filters):\n \t\t\t\"fieldname\": \"delay\",\n \t\t\t\"fieldtype\": \"Data\",\n \t\t\t\"width\": 100\n+\t\t},\n+\t\t{\n+\t\t\t\"label\": _(\"Time Taken to Deliver\"),\n+\t\t\t\"fieldname\": \"time_taken_to_deliver\",\n+\t\t\t\"fieldtype\": \"Data\",\n+\t\t\t\"width\": 100\n \t\t}\n \t])\n \tif not filters.get(\"group_by_so\"):\n"
}
|
[
{
"diff_hunk": "@@ -259,6 +264,12 @@ def get_columns(filters):\n \t\t\t\"fieldname\": \"delay\",\n \t\t\t\"fieldtype\": \"Data\",\n \t\t\t\"width\": 100\n+\t\t},\n+\t\t{\n+\t\t\t\"label\": _(\"Time Taken to Deliver\"),\n+\t\t\t\"fieldname\": \"time_taken_to_deliver\",\n+\t\t\t\"fieldtype\": \"Data\",",
"line": null,
"original_line": 271,
"original_start_line": null,
"path": "erpnext/selling/report/sales_order_analysis/sales_order_analysis.py",
"start_line": null,
"text": "@user1:\nCan we make this a Duration field? And store the seconds difference between `posting_date` & `transaction_date`?\n\n@author:\nChanged to Duration"
}
] |
da67403bbd5dc5bd304648150b3dac466c6f354c
|
diff --git a/erpnext/selling/report/sales_order_analysis/sales_order_analysis.py b/erpnext/selling/report/sales_order_analysis/sales_order_analysis.py
index 82e5d0ce57d9..0c0acc76e399 100644
--- a/erpnext/selling/report/sales_order_analysis/sales_order_analysis.py
+++ b/erpnext/selling/report/sales_order_analysis/sales_order_analysis.py
@@ -61,6 +61,7 @@ def get_data(conditions, filters):
IF(so.status in ('Completed','To Bill'), 0, (SELECT delay_days)) as delay,
soi.qty, soi.delivered_qty,
(soi.qty - soi.delivered_qty) AS pending_qty,
+ IF((SELECT pending_qty) = 0, (TO_SECONDS(Max(dn.posting_date))-TO_SECONDS(so.transaction_date)), 0) as time_taken_to_deliver,
IFNULL(SUM(sii.qty), 0) as billed_qty,
soi.base_amount as amount,
(soi.delivered_qty * soi.base_rate) as delivered_qty_amount,
@@ -70,9 +71,13 @@ def get_data(conditions, filters):
so.company, soi.name
FROM
`tabSales Order` so,
- `tabSales Order Item` soi
+ (`tabSales Order Item` soi
LEFT JOIN `tabSales Invoice Item` sii
- ON sii.so_detail = soi.name and sii.docstatus = 1
+ ON sii.so_detail = soi.name and sii.docstatus = 1)
+ LEFT JOIN `tabDelivery Note Item` dni
+ on dni.so_detail = soi.name
+ RIGHT JOIN `tabDelivery Note` dn
+ on dni.parent = dn.name and dn.docstatus = 1
WHERE
soi.parent = so.name
and so.status not in ('Stopped', 'Closed', 'On Hold')
@@ -259,6 +264,12 @@ def get_columns(filters):
"fieldname": "delay",
"fieldtype": "Data",
"width": 100
+ },
+ {
+ "label": _("Time Taken to Deliver"),
+ "fieldname": "time_taken_to_deliver",
+ "fieldtype": "Duration",
+ "width": 100
}
])
if not filters.get("group_by_so"):
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
}
|
|
frappe__erpnext-28822@c3453cd
|
frappe/erpnext
|
Python
| 28,822
|
feat: Deferred Revenue and Expense report with actual and upcoming postings
|
### Deferred Revenue and Expense Report
This report aims to provide a simplified view of all deferred income/expense in a certain period. It can help in identifying the actual and expected postings.
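As a rough numeric illustration of "actual vs expected" (hypothetical figures and helper; the actual report prorates partial months and reads real GL Entries, as the code in the patch shows), an invoice amount deferred evenly over its service period splits into already-posted and upcoming amounts like this:

```python
# Hypothetical illustration only, not the report's implementation.
def simple_deferral_schedule(base_net_amount, months, months_already_posted):
    per_month = base_net_amount / months
    actual = [per_month] * months_already_posted + [0] * (months - months_already_posted)
    expected = [per_month] * months
    return actual, expected


# e.g. a 300 invoice deferred over 3 months with 1 month already processed:
# actual   -> [100.0, 0, 0]
# expected -> [100.0, 100.0, 100.0]
```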
### Deferred Revenue
<img width="1552" alt="def_revenue" src="https://user-images.githubusercontent.com/3272205/145595902-5c57a12e-1559-4e61-a471-7aad7276cbd6.png">
### Deferred Expense
<img width="1552" alt="def_expense" src="https://user-images.githubusercontent.com/3272205/145595967-98d6e46e-d00f-46b7-a369-4fef1b02f0ca.png">
### Documentation
https://docs.erpnext.com/docs/v13/user/manual/en/accounts/deferred_revenue/expense_report
|
2021-12-10T15:11:47Z
|
Deferred Accounting Report for Income and Expenses
### Problem Statement:
1. Since income and expenses are deferred, you don't come to know:
    1. The actual income booked in a current month
    2. The upcoming income which will be posted in coming months, as per start and end date
4. The upcoming expense ------------------ // -----------------
5. The expected against actual post
6. Actual posting of expense as per the start and end date for each fiscal year / quarter / month
7. Actual posting of income as per the start and end date for each fiscal year / quarter / month
### Expected Solution:
1. Make new reports to provide this information
### Who is the end user? Define persona(s).
1. Business owner, who wants to:
2. see income and expense, with and without deferred posting
3. Upcoming Income / Expense as per future deferred posting
4. Account Manager / Auditor, who wants to review if the posting of deferred entries has happened correctly as per the Start and End Period set in the invoices.
|
[
{
"body": "### Problem Statement:\r\n\r\n1. Since income and expenses are deferred, you don't come to know:\r\n 1. The actual income booked in a current month\r\n 2. The upcoming income which will be posted in coming months, as per start and end date\r\n4. The upcoming expense ------------------ // -----------------\r\n5. The expected against actual post\r\n6. Actual posting of expense as per the start and end date for each fiscal year / quarter / month\r\n7. Actual posting of income as per the start and end date for each fiscal year / quarter / month\r\n \r\n### Expected Solution:\r\n\r\n1. Make new reports to provide this information\r\n\r\n### Who is the end user? Define persona(s).\r\n\r\n1. Business owner, who wants to:\r\n2. see income and expense, with and without deferred posting\r\n3. Upcoming Income / Expense as per future deferred posting\r\n4. Account Manager / Auditor, who wants to review if the posting of deferred entries has happened correctly as per the Start and End Period set in the invoices.",
"number": 28225,
"title": "Deferred Accounting Report for Income and Expenses"
}
] |
5c4d3f89d2dd3102edeeb072b63f8ed5a0a22e13
|
{
"head_commit": "c3453cd73c14d744264373bab4b2380b63470021",
"head_commit_message": "feat: Deferred Revenue and Expense report\n\n - show deferred revenue and expense with actual and expected postings\n - unit tests added",
"patch_to_review": "diff --git a/erpnext/accounts/report/deferred_revenue_and_expense/__init__.py b/erpnext/accounts/report/deferred_revenue_and_expense/__init__.py\nnew file mode 100644\nindex 000000000000..e69de29bb2d1\ndiff --git a/erpnext/accounts/report/deferred_revenue_and_expense/deferred_revenue_and_expense.js b/erpnext/accounts/report/deferred_revenue_and_expense/deferred_revenue_and_expense.js\nnew file mode 100644\nindex 000000000000..0056b9e8f564\n--- /dev/null\n+++ b/erpnext/accounts/report/deferred_revenue_and_expense/deferred_revenue_and_expense.js\n@@ -0,0 +1,114 @@\n+// Copyright (c) 2016, Frappe Technologies Pvt. Ltd. and contributors\n+// For license information, please see license.txt\n+/* eslint-disable */\n+\n+function get_filters() {\n+\tlet filters = [\n+\t\t{\n+\t\t\t\"fieldname\":\"company\",\n+\t\t\t\"label\": __(\"Company\"),\n+\t\t\t\"fieldtype\": \"Link\",\n+\t\t\t\"options\": \"Company\",\n+\t\t\t\"default\": frappe.defaults.get_user_default(\"Company\"),\n+\t\t\t\"reqd\": 1\n+\t\t},\n+\t\t{\n+\t\t\t\"fieldname\":\"filter_based_on\",\n+\t\t\t\"label\": __(\"Filter Based On\"),\n+\t\t\t\"fieldtype\": \"Select\",\n+\t\t\t\"options\": [\"Fiscal Year\", \"Date Range\"],\n+\t\t\t\"default\": [\"Fiscal Year\"],\n+\t\t\t\"reqd\": 1,\n+\t\t\ton_change: function() {\n+\t\t\t\tlet filter_based_on = frappe.query_report.get_filter_value('filter_based_on');\n+\t\t\t\tfrappe.query_report.toggle_filter_display('from_fiscal_year', filter_based_on === 'Date Range');\n+\t\t\t\tfrappe.query_report.toggle_filter_display('to_fiscal_year', filter_based_on === 'Date Range');\n+\t\t\t\tfrappe.query_report.toggle_filter_display('period_start_date', filter_based_on === 'Fiscal Year');\n+\t\t\t\tfrappe.query_report.toggle_filter_display('period_end_date', filter_based_on === 'Fiscal Year');\n+\n+\t\t\t\tfrappe.query_report.refresh();\n+\t\t\t}\n+\t\t},\n+\t\t{\n+\t\t\t\"fieldname\":\"period_start_date\",\n+\t\t\t\"label\": __(\"Start Date\"),\n+\t\t\t\"fieldtype\": \"Date\",\n+\t\t\t\"hidden\": 1,\n+\t\t\t\"reqd\": 1\n+\t\t},\n+\t\t{\n+\t\t\t\"fieldname\":\"period_end_date\",\n+\t\t\t\"label\": __(\"End Date\"),\n+\t\t\t\"fieldtype\": \"Date\",\n+\t\t\t\"hidden\": 1,\n+\t\t\t\"reqd\": 1\n+\t\t},\n+\t\t{\n+\t\t\t\"fieldname\":\"from_fiscal_year\",\n+\t\t\t\"label\": __(\"Start Year\"),\n+\t\t\t\"fieldtype\": \"Link\",\n+\t\t\t\"options\": \"Fiscal Year\",\n+\t\t\t\"default\": frappe.defaults.get_user_default(\"fiscal_year\"),\n+\t\t\t\"reqd\": 1\n+\t\t},\n+\t\t{\n+\t\t\t\"fieldname\":\"to_fiscal_year\",\n+\t\t\t\"label\": __(\"End Year\"),\n+\t\t\t\"fieldtype\": \"Link\",\n+\t\t\t\"options\": \"Fiscal Year\",\n+\t\t\t\"default\": frappe.defaults.get_user_default(\"fiscal_year\"),\n+\t\t\t\"reqd\": 1\n+\t\t},\n+\t\t{\n+\t\t\t\"fieldname\": \"periodicity\",\n+\t\t\t\"label\": __(\"Periodicity\"),\n+\t\t\t\"fieldtype\": \"Select\",\n+\t\t\t\"options\": [\n+\t\t\t\t{ \"value\": \"Monthly\", \"label\": __(\"Monthly\") },\n+\t\t\t\t{ \"value\": \"Quarterly\", \"label\": __(\"Quarterly\") },\n+\t\t\t\t{ \"value\": \"Half-Yearly\", \"label\": __(\"Half-Yearly\") },\n+\t\t\t\t{ \"value\": \"Yearly\", \"label\": __(\"Yearly\") }\n+\t\t\t],\n+\t\t\t\"default\": \"Monthly\",\n+\t\t\t\"reqd\": 1\n+\t\t},\n+\t\t{\n+\t\t\t\"fieldname\": \"type\",\n+\t\t\t\"label\": __(\"Invoice Type\"),\n+\t\t\t\"fieldtype\": \"Select\",\n+\t\t\t\"options\": [\n+\t\t\t\t{ \"value\": \"Revenue\", \"label\": __(\"Revenue\") },\n+\t\t\t\t{ \"value\": \"Expense\", \"label\": __(\"Expense\") 
}\n+\t\t\t],\n+\t\t\t\"default\": \"Revenue\",\n+\t\t\t\"reqd\": 1\n+\t\t},\n+\t\t{\n+\t\t\t\"fieldname\" : \"with_upcoming_postings\",\n+\t\t\t\"label\": __(\"Show with upcoming revenue/expense\"),\n+\t\t\t\"fieldtype\": \"Check\",\n+\t\t\t\"default\": 1\n+\t\t}\n+\t]\n+\n+\treturn filters;\n+}\n+\n+frappe.query_reports[\"Deferred Revenue and Expense\"] = {\n+\t\"filters\": get_filters(),\n+\t\"formatter\": function(value, row, column, data, default_formatter){\n+\t\treturn default_formatter(value, row, column, data);\n+\t},\n+\tonload: function(report){\n+\t\tlet fiscal_year = frappe.defaults.get_user_default(\"fiscal_year\");\n+\n+\t\tfrappe.model.with_doc(\"Fiscal Year\", fiscal_year, function(r) {\n+\t\t\tvar fy = frappe.model.get_doc(\"Fiscal Year\", fiscal_year);\n+\t\t\tfrappe.query_report.set_filter_value({\n+\t\t\t\tperiod_start_date: fy.year_start_date,\n+\t\t\t\tperiod_end_date: fy.year_end_date\n+\t\t\t});\n+\t\t});\n+\t}\n+};\n+\ndiff --git a/erpnext/accounts/report/deferred_revenue_and_expense/deferred_revenue_and_expense.json b/erpnext/accounts/report/deferred_revenue_and_expense/deferred_revenue_and_expense.json\nnew file mode 100644\nindex 000000000000..c7dfb3b7142a\n--- /dev/null\n+++ b/erpnext/accounts/report/deferred_revenue_and_expense/deferred_revenue_and_expense.json\n@@ -0,0 +1,32 @@\n+{\n+ \"add_total_row\": 0,\n+ \"columns\": [],\n+ \"creation\": \"2021-12-10 19:27:14.654220\",\n+ \"disable_prepared_report\": 0,\n+ \"disabled\": 0,\n+ \"docstatus\": 0,\n+ \"doctype\": \"Report\",\n+ \"filters\": [],\n+ \"idx\": 0,\n+ \"is_standard\": \"Yes\",\n+ \"modified\": \"2021-12-10 19:27:14.654220\",\n+ \"modified_by\": \"Administrator\",\n+ \"module\": \"Accounts\",\n+ \"name\": \"Deferred Revenue and Expense\",\n+ \"owner\": \"Administrator\",\n+ \"prepared_report\": 0,\n+ \"ref_doctype\": \"GL Entry\",\n+ \"report_name\": \"Deferred Revenue and Expense\",\n+ \"report_type\": \"Script Report\",\n+ \"roles\": [\n+ {\n+ \"role\": \"Accounts User\"\n+ },\n+ {\n+ \"role\": \"Accounts Manager\"\n+ },\n+ {\n+ \"role\": \"Auditor\"\n+ }\n+ ]\n+}\n\\ No newline at end of file\ndiff --git a/erpnext/accounts/report/deferred_revenue_and_expense/deferred_revenue_and_expense.py b/erpnext/accounts/report/deferred_revenue_and_expense/deferred_revenue_and_expense.py\nnew file mode 100644\nindex 000000000000..91c5bb4965ac\n--- /dev/null\n+++ b/erpnext/accounts/report/deferred_revenue_and_expense/deferred_revenue_and_expense.py\n@@ -0,0 +1,436 @@\n+# Copyright (c) 2013, Frappe Technologies Pvt. Ltd. and contributors\n+# License: MIT. 
See LICENSE\n+\n+import frappe\n+from frappe import qb\n+from frappe.query_builder import Column, functions\n+from frappe.utils import add_days, date_diff, flt, get_first_day, get_last_day, rounded\n+\n+from erpnext.accounts.report.financial_statements import get_period_list\n+\n+\n+class Deferred_Item(object):\n+\t\"\"\"\n+\tHelper class for processing items with deferred revenue/expense\n+\t\"\"\"\n+\n+\tdef __init__(self, item, inv, gle_entries):\n+\t\tself.name = item\n+\t\tself.parent = inv.name\n+\t\tself.item_name = gle_entries[0].item_name\n+\t\tself.service_start_date = gle_entries[0].service_start_date\n+\t\tself.service_end_date = gle_entries[0].service_end_date\n+\t\tself.base_net_amount = gle_entries[0].base_net_amount\n+\t\tself.filters = inv.filters\n+\t\tself.period_list = inv.period_list\n+\n+\t\tif gle_entries[0].deferred_revenue_account:\n+\t\t\tself.type = \"Deferred Sale Item\"\n+\t\t\tself.deferred_account = gle_entries[0].deferred_revenue_account\n+\t\telif gle_entries[0].deferred_expense_account:\n+\t\t\tself.type = \"Deferred Purchase Item\"\n+\t\t\tself.deferred_account = gle_entries[0].deferred_expense_account\n+\n+\t\tself.gle_entries = []\n+\t\t# holds period wise total for item\n+\t\tself.period_total = []\n+\t\tself.last_entry_date = self.service_start_date\n+\n+\t\tif gle_entries:\n+\t\t\tself.gle_entries = gle_entries\n+\t\t\tfor x in self.gle_entries:\n+\t\t\t\tif self.get_amount(x):\n+\t\t\t\t\tself.last_entry_date = x.gle_posting_date\n+\n+\tdef report_data(self):\n+\t\t\"\"\"\n+\t\tGenerate report data for output\n+\t\t\"\"\"\n+\t\tret_data = frappe._dict({\"name\": self.item_name})\n+\t\tfor period in self.period_total:\n+\t\t\tret_data[period.key] = period.total\n+\t\t\tret_data.indent = 1\n+\t\treturn ret_data\n+\n+\tdef get_amount(self, entry):\n+\t\t\"\"\"\n+\t\tFor a given GL/Journal posting, get balance based on item type\n+\t\t\"\"\"\n+\t\tif self.type == \"Deferred Sale Item\":\n+\t\t\treturn entry.debit - entry.credit\n+\t\telif self.type == \"Deferred Purchase Item\":\n+\t\t\treturn -(entry.credit - entry.debit)\n+\t\treturn 0\n+\n+\tdef get_item_total(self):\n+\t\t\"\"\"\n+\t\tHelper method - calculate booked amount. 
Includes simulated postings as well\n+\t\t\"\"\"\n+\t\ttotal = 0\n+\t\tfor gle_posting in self.gle_entries:\n+\t\t\ttotal += self.get_amount(gle_posting)\n+\n+\t\treturn total\n+\n+\tdef calculate_amount(self, start_date, end_date):\n+\t\t\"\"\"\n+\t\tstart_date, end_date - datetime.datetime.date\n+\t\treturn - estimated amount to post for given period\n+\t\tCalculated based on already booked amount and item service period\n+\t\t\"\"\"\n+\t\ttotal_months = (\n+\t\t\t(self.service_end_date.year - self.service_start_date.year) * 12\n+\t\t\t+ (self.service_end_date.month - self.service_start_date.month)\n+\t\t\t+ 1\n+\t\t)\n+\n+\t\tprorate = date_diff(self.service_end_date, self.service_start_date) / date_diff(\n+\t\t\tget_last_day(self.service_end_date), get_first_day(self.service_start_date)\n+\t\t)\n+\n+\t\tactual_months = rounded(total_months * prorate, 1)\n+\n+\t\talready_booked_amount = self.get_item_total()\n+\t\tbase_amount = self.base_net_amount / actual_months\n+\n+\t\tif base_amount + already_booked_amount > self.base_net_amount:\n+\t\t\tbase_amount = self.base_net_amount - already_booked_amount\n+\n+\t\tif not (get_first_day(start_date) == start_date and get_last_day(end_date) == end_date):\n+\t\t\tpartial_month = flt(date_diff(end_date, start_date)) / flt(\n+\t\t\t\tdate_diff(get_last_day(end_date), get_first_day(start_date))\n+\t\t\t)\n+\t\t\tbase_amount *= rounded(partial_month, 1)\n+\n+\t\treturn base_amount\n+\n+\tdef make_dummy_gle(self, name, date, amount):\n+\t\t\"\"\"\n+\t\treturn - frappe._dict() of a dummy gle entry\n+\t\t\"\"\"\n+\t\tentry = frappe._dict(\n+\t\t\t{\"name\": name, \"gle_posting_date\": date, \"debit\": 0, \"credit\": 0, \"posted\": \"not\"}\n+\t\t)\n+\t\tif self.type == \"Deferred Sale Item\":\n+\t\t\tentry.debit = amount\n+\t\telif self.type == \"Deferred Purchase Item\":\n+\t\t\tentry.credit = amount\n+\t\treturn entry\n+\n+\tdef simulate_future_posting(self):\n+\t\t\"\"\"\n+\t\tsimulate future posting by creating dummy gl entries. 
starts from the last posting date.\n+\t\t\"\"\"\n+\t\tif add_days(self.last_entry_date, 1) < self.period_list[-1].to_date:\n+\t\t\tself.estimate_for_period_list = get_period_list(\n+\t\t\t\tself.filters.from_fiscal_year,\n+\t\t\t\tself.filters.to_fiscal_year,\n+\t\t\t\tadd_days(self.last_entry_date, 1),\n+\t\t\t\tself.period_list[-1].to_date,\n+\t\t\t\t\"Date Range\",\n+\t\t\t\t\"Monthly\",\n+\t\t\t\tcompany=self.filters.company,\n+\t\t\t)\n+\t\t\tfor period in self.estimate_for_period_list:\n+\t\t\t\tamount = self.calculate_amount(period.from_date, period.to_date)\n+\t\t\t\tgle = self.make_dummy_gle(period.key, period.to_date, amount)\n+\t\t\t\tself.gle_entries.append(gle)\n+\n+\tdef calculate_item_revenue_expense_for_period(self):\n+\t\t\"\"\"\n+\t\tcalculate item postings for each period and update period_total list\n+\t\t\"\"\"\n+\t\tfor period in self.period_list:\n+\t\t\tperiod_sum = 0\n+\t\t\tactual = 0\n+\t\t\tfor posting in self.gle_entries:\n+\t\t\t\t# if period.from_date <= posting.posting_date <= period.to_date:\n+\t\t\t\tif period.from_date <= posting.gle_posting_date <= period.to_date:\n+\t\t\t\t\tperiod_sum += self.get_amount(posting)\n+\t\t\t\t\tif posting.posted == \"posted\":\n+\t\t\t\t\t\tactual += self.get_amount(posting)\n+\n+\t\t\tself.period_total.append(\n+\t\t\t\tfrappe._dict({\"key\": period.key, \"total\": period_sum, \"actual\": actual})\n+\t\t\t)\n+\t\treturn self.period_total\n+\n+\n+class Deferred_Invoice(object):\n+\tdef __init__(self, invoice, items, filters, period_list):\n+\t\t\"\"\"\n+\t\tHelper class for processing invoices with deferred revenue/expense items\n+\t\tinvoice - string : invoice name\n+\t\titems - list : frappe._dict() with item details. Refer Deferred_Item for required fields\n+\t\t\"\"\"\n+\t\tself.name = invoice\n+\t\tself.posting_date = items[0].posting_date\n+\t\tself.filters = filters\n+\t\tself.period_list = period_list\n+\t\t# holds period wise total for invoice\n+\t\tself.period_total = []\n+\n+\t\tif items[0].deferred_revenue_account:\n+\t\t\tself.type = \"Sales\"\n+\t\telif items[0].deferred_expense_account:\n+\t\t\tself.type = \"Purchase\"\n+\n+\t\tself.items = []\n+\t\t# for each uniq items\n+\t\tself.uniq_items = set([x.item for x in items])\n+\t\tfor item in self.uniq_items:\n+\t\t\tself.items.append(Deferred_Item(item, self, [x for x in items if x.item == item]))\n+\n+\tdef get_postings(self):\n+\t\t\"\"\"\n+\t\tget GL/Journal postings for deferred items in invoice\n+\t\t\"\"\"\n+\t\t[item.get_gl_and_journal_postings() for item in self.items]\n+\n+\tdef calculate_invoice_revenue_expense_for_period(self):\n+\t\t\"\"\"\n+\t\tcalculate deferred revenue/expense for all items in invoice\n+\t\t\"\"\"\n+\t\t# initialize period_total list for invoice\n+\t\tfor period in self.period_list:\n+\t\t\tself.period_total.append(frappe._dict({\"key\": period.key, \"total\": 0, \"actual\": 0}))\n+\n+\t\tfor item in self.items:\n+\t\t\titem_total = item.calculate_item_revenue_expense_for_period()\n+\t\t\t# update invoice total\n+\t\t\tfor idx, period in enumerate(self.period_list, 0):\n+\t\t\t\tself.period_total[idx].total += item_total[idx].total\n+\t\t\t\tself.period_total[idx].actual += item_total[idx].actual\n+\t\treturn self.period_total\n+\n+\tdef estimate_future(self):\n+\t\t\"\"\"\n+\t\tcreate dummy GL entries for upcoming months for all items in invoice\n+\t\t\"\"\"\n+\t\t[item.simulate_future_posting() for item in self.items]\n+\n+\tdef report_data(self):\n+\t\t\"\"\"\n+\t\tgenerate report data for invoice, includes invoice 
total\n+\t\t\"\"\"\n+\t\tret_data = []\n+\t\tinv_total = frappe._dict({\"name\": self.name})\n+\t\tfor x in self.period_total:\n+\t\t\tinv_total[x.key] = x.total\n+\t\t\tinv_total.indent = 0\n+\t\tret_data.append(inv_total)\n+\t\tlist(map(lambda item: ret_data.append(item.report_data()), self.items))\n+\t\treturn ret_data\n+\n+\n+class Deferred_Revenue_and_Expense_Report(object):\n+\tdef __init__(self, filters=None):\n+\t\t\"\"\"\n+\t\tInitialize deferred revenue/expense report with user provided filters or system defaults, if none is provided\n+\t\t\"\"\"\n+\n+\t\t# If no filters are provided, get user defaults\n+\t\tif not filters:\n+\t\t\tfiscal_year = frappe.get_doc(\"Fiscal Year\", frappe.defaults.get_user_default(\"fiscal_year\"))\n+\t\t\tself.filters = frappe._dict(\n+\t\t\t\t{\n+\t\t\t\t\t\"company\": frappe.defaults.get_user_default(\"Company\"),\n+\t\t\t\t\t\"filter_based_on\": \"Fiscal Year\",\n+\t\t\t\t\t\"period_start_date\": fiscal_year.year_start_date,\n+\t\t\t\t\t\"period_end_date\": fiscal_year.year_end_date,\n+\t\t\t\t\t\"from_fiscal_year\": fiscal_year.year,\n+\t\t\t\t\t\"to_fiscal_year\": fiscal_year.year,\n+\t\t\t\t\t\"periodicity\": \"Monthly\",\n+\t\t\t\t\t\"type\": \"Revenue\",\n+\t\t\t\t\t\"with_upcoming_postings\": True,\n+\t\t\t\t}\n+\t\t\t)\n+\t\telse:\n+\t\t\tself.filters = frappe._dict(filters)\n+\n+\t\tself.period_list = None\n+\t\tself.deferred_invoices = []\n+\t\t# holds period wise total for report\n+\t\tself.period_total = []\n+\n+\tdef get_period_list(self):\n+\t\t\"\"\"\n+\t\tFigure out selected period based on filters\n+\t\t\"\"\"\n+\t\tself.period_list = get_period_list(\n+\t\t\tself.filters.from_fiscal_year,\n+\t\t\tself.filters.to_fiscal_year,\n+\t\t\tself.filters.period_start_date,\n+\t\t\tself.filters.period_end_date,\n+\t\t\tself.filters.filter_based_on,\n+\t\t\tself.filters.periodicity,\n+\t\t\tcompany=self.filters.company,\n+\t\t)\n+\n+\tdef get_invoices(self):\n+\t\t\"\"\"\n+\t\tGet all sales and purchase invoices which has deferred revenue/expense items\n+\t\t\"\"\"\n+\t\tgle = qb.DocType(\"GL Entry\")\n+\t\t# column doesn't have an alias option\n+\t\tposted = Column(\"posted\")\n+\n+\t\tif self.filters.type == \"Revenue\":\n+\t\t\tinv = qb.DocType(\"Sales Invoice\")\n+\t\t\tinv_item = qb.DocType(\"Sales Invoice Item\")\n+\t\t\tdeferred_flag_field = inv_item[\"enable_deferred_revenue\"]\n+\t\t\tdeferred_account_field = inv_item[\"deferred_revenue_account\"]\n+\n+\t\telif self.filters.type == \"Expense\":\n+\t\t\tinv = qb.DocType(\"Purchase Invoice\")\n+\t\t\tinv_item = qb.DocType(\"Purchase Invoice Item\")\n+\t\t\tdeferred_flag_field = inv_item[\"enable_deferred_expense\"]\n+\t\t\tdeferred_account_field = inv_item[\"deferred_expense_account\"]\n+\n+\t\tquery = (\n+\t\t\tqb.from_(inv_item)\n+\t\t\t.join(inv)\n+\t\t\t.on(inv.name == inv_item.parent)\n+\t\t\t.join(gle)\n+\t\t\t.on((inv_item.name == gle.voucher_detail_no) & (deferred_account_field == gle.account))\n+\t\t\t.select(\n+\t\t\t\tinv.name.as_(\"doc\"),\n+\t\t\t\tinv.posting_date,\n+\t\t\t\tinv_item.name.as_(\"item\"),\n+\t\t\t\tinv_item.item_name,\n+\t\t\t\tinv_item.service_start_date,\n+\t\t\t\tinv_item.service_end_date,\n+\t\t\t\tinv_item.base_net_amount,\n+\t\t\t\tdeferred_account_field,\n+\t\t\t\tgle.posting_date.as_(\"gle_posting_date\"),\n+\t\t\t\tfunctions.Sum(gle.debit).as_(\"debit\"),\n+\t\t\t\tfunctions.Sum(gle.credit).as_(\"credit\"),\n+\t\t\t\tposted,\n+\t\t\t)\n+\t\t\t.where(\n+\t\t\t\t(inv.docstatus == 1)\n+\t\t\t\t& (deferred_flag_field == 1)\n+\t\t\t\t& 
(\n+\t\t\t\t\t(\n+\t\t\t\t\t\t(self.period_list[0].from_date >= inv_item.service_start_date)\n+\t\t\t\t\t\t& (inv_item.service_end_date >= self.period_list[0].from_date)\n+\t\t\t\t\t)\n+\t\t\t\t\t| (\n+\t\t\t\t\t\t(inv_item.service_start_date >= self.period_list[0].from_date)\n+\t\t\t\t\t\t& (inv_item.service_start_date <= self.period_list[-1].to_date)\n+\t\t\t\t\t)\n+\t\t\t\t)\n+\t\t\t)\n+\t\t\t.groupby(inv.name, inv_item.name, gle.posting_date)\n+\t\t\t.orderby(gle.posting_date)\n+\t\t)\n+\t\tself.invoices = query.run(as_dict=True)\n+\n+\t\tuniq_invoice = set([x.doc for x in self.invoices])\n+\t\tfor inv in uniq_invoice:\n+\t\t\tself.deferred_invoices.append(\n+\t\t\t\tDeferred_Invoice(\n+\t\t\t\t\tinv, [x for x in self.invoices if x.doc == inv], self.filters, self.period_list\n+\t\t\t\t)\n+\t\t\t)\n+\n+\tdef estimate_future(self):\n+\t\t\"\"\"\n+\t\tFor all Invoices estimate upcoming postings\n+\t\t\"\"\"\n+\t\tfor x in self.deferred_invoices:\n+\t\t\tx.estimate_future()\n+\n+\tdef calculate_revenue_and_expense(self):\n+\t\t\"\"\"\n+\t\tcalculate the deferred revenue/expense for all invoices\n+\t\t\"\"\"\n+\t\t# initialize period_total list for report\n+\t\tfor period in self.period_list:\n+\t\t\tself.period_total.append(frappe._dict({\"key\": period.key, \"total\": 0, \"actual\": 0}))\n+\n+\t\tfor inv in self.deferred_invoices:\n+\t\t\tinv_total = inv.calculate_invoice_revenue_expense_for_period()\n+\t\t\t# calculate total for whole report\n+\t\t\tfor idx, period in enumerate(self.period_list, 0):\n+\t\t\t\tself.period_total[idx].total += inv_total[idx].total\n+\t\t\t\tself.period_total[idx].actual += inv_total[idx].actual\n+\n+\tdef get_columns(self):\n+\t\tcolumns = []\n+\t\tcolumns.append({\"label\": \"Name\", \"fieldname\": \"name\", \"fieldtype\": \"Data\", \"read_only\": 1})\n+\t\tfor period in self.period_list:\n+\t\t\tcolumns.append(\n+\t\t\t\t{\"label\": period.label, \"fieldname\": period.key, \"fieldtype\": \"Currency\", \"read_only\": 1,}\n+\t\t\t)\n+\t\treturn columns\n+\n+\tdef generate_report_data(self):\n+\t\t\"\"\"\n+\t\tGenerate report data for all invoices. 
Adds total rows for revenue and expense\n+\t\t\"\"\"\n+\t\tret = []\n+\n+\t\tfor inv in self.deferred_invoices:\n+\t\t\tret += inv.report_data()\n+\n+\t\t# empty row for padding\n+\t\tret += [{}]\n+\n+\t\t# add total row\n+\t\tif ret is not []:\n+\t\t\tif self.filters.type == \"Revenue\":\n+\t\t\t\ttotal_row = frappe._dict({\"name\": \"Total Deferred Income\"})\n+\t\t\telif self.filters.type == \"Expense\":\n+\t\t\t\ttotal_row = frappe._dict({\"name\": \"Total Deferred Expense\"})\n+\n+\t\t\tfor idx, period in enumerate(self.period_list, 0):\n+\t\t\t\ttotal_row[period.key] = self.period_total[idx].total\n+\t\t\tret.append(total_row)\n+\n+\t\treturn ret\n+\n+\tdef prepare_chart(self):\n+\t\tchart = {\n+\t\t\t\"data\": {\n+\t\t\t\t\"labels\": [period.label for period in self.period_list],\n+\t\t\t\t\"datasets\": [\n+\t\t\t\t\t{\n+\t\t\t\t\t\t\"name\": \"Actual Posting\",\n+\t\t\t\t\t\t\"chartType\": \"bar\",\n+\t\t\t\t\t\t\"values\": [x.actual for x in self.period_total],\n+\t\t\t\t\t},\n+\t\t\t\t\t{\"name\": \"Expected\", \"chartType\": \"line\", \"values\": [x.total for x in self.period_total],},\n+\t\t\t\t],\n+\t\t\t},\n+\t\t\t\"type\": \"axis-mixed\",\n+\t\t\t\"height\": 500,\n+\t\t\t\"axisOptions\": {\"xAxisMode\": \"Tick\", \"xIsSeries\": True},\n+\t\t\t\"barOptions\": {\"stacked\": False, \"spaceRatio\": 0.5},\n+\t\t}\n+\n+\t\treturn chart\n+\n+\tdef run(self, *args, **kwargs):\n+\t\t\"\"\"\n+\t\tRun report and generate data\n+\t\t\"\"\"\n+\t\tself.deferred_invoices.clear()\n+\t\tself.get_period_list()\n+\t\tself.get_invoices()\n+\n+\t\tif self.filters.with_upcoming_postings:\n+\t\t\tself.estimate_future()\n+\t\tself.calculate_revenue_and_expense()\n+\n+\n+def execute(filters=None):\n+\treport = Deferred_Revenue_and_Expense_Report(filters=filters)\n+\treport.run()\n+\n+\tcolumns = report.get_columns()\n+\tdata = report.generate_report_data()\n+\tmessage = []\n+\tchart = report.prepare_chart()\n+\n+\treturn columns, data, message, chart\ndiff --git a/erpnext/accounts/report/deferred_revenue_and_expense/test_deferred_revenue_and_expense.py b/erpnext/accounts/report/deferred_revenue_and_expense/test_deferred_revenue_and_expense.py\nnew file mode 100644\nindex 000000000000..379bf0d2724f\n--- /dev/null\n+++ b/erpnext/accounts/report/deferred_revenue_and_expense/test_deferred_revenue_and_expense.py\n@@ -0,0 +1,253 @@\n+import datetime\n+import unittest\n+\n+import frappe\n+from frappe import qb\n+from frappe.utils import add_months, nowdate\n+\n+from erpnext.accounts.doctype.account.test_account import create_account\n+from erpnext.accounts.doctype.purchase_invoice.test_purchase_invoice import make_purchase_invoice\n+from erpnext.accounts.doctype.sales_invoice.test_sales_invoice import create_sales_invoice\n+from erpnext.accounts.report.deferred_revenue_and_expense.deferred_revenue_and_expense import (\n+\tDeferred_Revenue_and_Expense_Report,\n+)\n+from erpnext.buying.doctype.supplier.test_supplier import create_supplier\n+from erpnext.stock.doctype.item.test_item import create_item\n+\n+\n+class TestDeferredRevenueAndExpense(unittest.TestCase):\n+\t@classmethod\n+\tdef setUpClass(self):\n+\t\tclear_old_entries()\n+\t\tcreate_company()\n+\n+\tdef test_deferred_revenue(self):\n+\t\t# created deferred expense accounts, if not found\n+\t\tdeferred_revenue_account = create_account(\n+\t\t\taccount_name=\"Deferred Revenue\",\n+\t\t\tparent_account=\"Current Liabilities - _CD\",\n+\t\t\tcompany=\"_Test Company DR\",\n+\t\t)\n+\n+\t\tacc_settings = frappe.get_doc(\"Accounts Settings\", \"Accounts 
Settings\")\n+\t\tacc_settings.book_deferred_entries_based_on = \"Months\"\n+\t\tacc_settings.save()\n+\n+\t\tcustomer = frappe.new_doc(\"Customer\")\n+\t\tcustomer.customer_name = \"_Test Customer DR\"\n+\t\tcustomer.type = \"Individual\"\n+\t\tcustomer.insert()\n+\n+\t\titem = create_item(\n+\t\t\t\"_Test Internet Subscription\",\n+\t\t\tis_stock_item=0,\n+\t\t\twarehouse=\"All Warehouses - _CD\",\n+\t\t\tcompany=\"_Test Company DR\",\n+\t\t)\n+\t\titem.enable_deferred_revenue = 1\n+\t\titem.deferred_revenue_account = deferred_revenue_account\n+\t\titem.no_of_months = 3\n+\t\titem.save()\n+\n+\t\tsi = create_sales_invoice(\n+\t\t\titem=item.name,\n+\t\t\tcompany=\"_Test Company DR\",\n+\t\t\tcustomer=\"_Test Customer DR\",\n+\t\t\tdebit_to=\"Debtors - _CD\",\n+\t\t\tposting_date=\"2021-05-01\",\n+\t\t\tparent_cost_center=\"Main - _CD\",\n+\t\t\tcost_center=\"Main - _CD\",\n+\t\t\tdo_not_submit=True,\n+\t\t\trate=300,\n+\t\t\tprice_list_rate=300,\n+\t\t)\n+\t\tsi.items[0].enable_deferred_revenue = 1\n+\t\tsi.items[0].service_start_date = \"2021-05-01\"\n+\t\tsi.items[0].service_end_date = \"2021-08-01\"\n+\t\tsi.items[0].deferred_revenue_account = deferred_revenue_account\n+\t\tsi.items[0].income_account = \"Sales - _CD\"\n+\t\tsi.save()\n+\t\tsi.submit()\n+\n+\t\tpda = frappe.get_doc(\n+\t\t\tdict(\n+\t\t\t\tdoctype=\"Process Deferred Accounting\",\n+\t\t\t\tposting_date=nowdate(),\n+\t\t\t\tstart_date=\"2021-05-01\",\n+\t\t\t\tend_date=\"2021-08-01\",\n+\t\t\t\ttype=\"Income\",\n+\t\t\t\tcompany=\"_Test Company DR\",\n+\t\t\t)\n+\t\t)\n+\t\tpda.insert()\n+\t\tpda.submit()\n+\n+\t\t# execute report\n+\t\tfiscal_year = frappe.get_doc(\"Fiscal Year\", frappe.defaults.get_user_default(\"fiscal_year\"))\n+\t\tself.filters = frappe._dict(\n+\t\t\t{\n+\t\t\t\t\"company\": frappe.defaults.get_user_default(\"Company\"),\n+\t\t\t\t\"filter_based_on\": \"Date Range\",\n+\t\t\t\t\"period_start_date\": \"2021-05-01\",\n+\t\t\t\t\"period_end_date\": \"2021-08-01\",\n+\t\t\t\t\"from_fiscal_year\": fiscal_year.year,\n+\t\t\t\t\"to_fiscal_year\": fiscal_year.year,\n+\t\t\t\t\"periodicity\": \"Monthly\",\n+\t\t\t\t\"type\": \"Revenue\",\n+\t\t\t\t\"with_upcoming_postings\": False,\n+\t\t\t}\n+\t\t)\n+\n+\t\treport = Deferred_Revenue_and_Expense_Report(filters=self.filters)\n+\t\treport.run()\n+\t\texpected = [\n+\t\t\t{\"key\": \"may_2021\", \"total\": 100.0, \"actual\": 100.0},\n+\t\t\t{\"key\": \"jun_2021\", \"total\": 100.0, \"actual\": 100.0},\n+\t\t\t{\"key\": \"jul_2021\", \"total\": 100.0, \"actual\": 100.0},\n+\t\t\t{\"key\": \"aug_2021\", \"total\": 0, \"actual\": 0},\n+\t\t]\n+\t\tself.assertEqual(report.period_total, expected)\n+\n+\tdef test_deferred_expense(self):\n+\t\t# created deferred expense accounts, if not found\n+\t\tdeferred_expense_account = create_account(\n+\t\t\taccount_name=\"Deferred Expense\",\n+\t\t\tparent_account=\"Current Assets - _CD\",\n+\t\t\tcompany=\"_Test Company DR\",\n+\t\t)\n+\n+\t\tacc_settings = frappe.get_doc(\"Accounts Settings\", \"Accounts Settings\")\n+\t\tacc_settings.book_deferred_entries_based_on = \"Months\"\n+\t\tacc_settings.save()\n+\n+\t\tsupplier = create_supplier(\n+\t\t\tsupplier_name=\"_Test Furniture Supplier\", supplier_group=\"Local\", supplier_type=\"Company\"\n+\t\t)\n+\n+\t\titem = create_item(\n+\t\t\t\"_Test Office Desk\",\n+\t\t\tis_stock_item=0,\n+\t\t\twarehouse=\"All Warehouses - _CD\",\n+\t\t\tcompany=\"_Test Company DR\",\n+\t\t)\n+\t\titem.enable_deferred_expense = 1\n+\t\titem.deferred_expense_account = 
deferred_expense_account\n+\t\titem.no_of_months_exp = 3\n+\t\titem.save()\n+\n+\t\tpi = make_purchase_invoice(\n+\t\t\titem=item.name,\n+\t\t\tcompany=\"_Test Company DR\",\n+\t\t\tsupplier=\"_Test Furniture Supplier\",\n+\t\t\tis_return=False,\n+\t\t\tupdate_stock=False,\n+\t\t\tposting_date=frappe.utils.datetime.date(2021, 5, 1),\n+\t\t\tparent_cost_center=\"Main - _CD\",\n+\t\t\tcost_center=\"Main - _CD\",\n+\t\t\tdo_not_save=True,\n+\t\t\trate=300,\n+\t\t\tprice_list_rate=300,\n+\t\t\twarehouse=\"All Warehouses - _CD\",\n+\t\t\tqty=1,\n+\t\t)\n+\t\tpi.set_posting_time = True\n+\t\tpi.items[0].enable_deferred_expense = 1\n+\t\tpi.items[0].service_start_date = \"2021-05-01\"\n+\t\tpi.items[0].service_end_date = \"2021-08-01\"\n+\t\tpi.items[0].deferred_expense_account = deferred_expense_account\n+\t\tpi.items[0].expense_account = \"Office Maintenance Expenses - _CD\"\n+\t\tpi.save()\n+\t\tpi.submit()\n+\n+\t\tpda = frappe.get_doc(\n+\t\t\tdict(\n+\t\t\t\tdoctype=\"Process Deferred Accounting\",\n+\t\t\t\tposting_date=nowdate(),\n+\t\t\t\tstart_date=\"2021-05-01\",\n+\t\t\t\tend_date=\"2021-08-01\",\n+\t\t\t\ttype=\"Expense\",\n+\t\t\t\tcompany=\"_Test Company DR\",\n+\t\t\t)\n+\t\t)\n+\t\tpda.insert()\n+\t\tpda.submit()\n+\n+\t\t# execute report\n+\t\tfiscal_year = frappe.get_doc(\"Fiscal Year\", frappe.defaults.get_user_default(\"fiscal_year\"))\n+\t\tself.filters = frappe._dict(\n+\t\t\t{\n+\t\t\t\t\"company\": frappe.defaults.get_user_default(\"Company\"),\n+\t\t\t\t\"filter_based_on\": \"Date Range\",\n+\t\t\t\t\"period_start_date\": \"2021-05-01\",\n+\t\t\t\t\"period_end_date\": \"2021-08-01\",\n+\t\t\t\t\"from_fiscal_year\": fiscal_year.year,\n+\t\t\t\t\"to_fiscal_year\": fiscal_year.year,\n+\t\t\t\t\"periodicity\": \"Monthly\",\n+\t\t\t\t\"type\": \"Expense\",\n+\t\t\t\t\"with_upcoming_postings\": False,\n+\t\t\t}\n+\t\t)\n+\n+\t\treport = Deferred_Revenue_and_Expense_Report(filters=self.filters)\n+\t\treport.run()\n+\t\texpected = [\n+\t\t\t{\"key\": \"may_2021\", \"total\": -100.0, \"actual\": -100.0},\n+\t\t\t{\"key\": \"jun_2021\", \"total\": -100.0, \"actual\": -100.0},\n+\t\t\t{\"key\": \"jul_2021\", \"total\": -100.0, \"actual\": -100.0},\n+\t\t\t{\"key\": \"aug_2021\", \"total\": 0, \"actual\": 0},\n+\t\t]\n+\t\tself.assertEqual(report.period_total, expected)\n+\n+\n+def create_company():\n+\tcompany = frappe.db.exists(\"Company\", \"_Test Company DR\")\n+\tif not company:\n+\t\tcompany = frappe.new_doc(\"Company\")\n+\t\tcompany.company_name = \"_Test Company DR\"\n+\t\tcompany.default_currency = \"INR\"\n+\t\tcompany.chart_of_accounts = \"Standard\"\n+\t\tcompany.insert()\n+\n+\n+def clear_old_entries():\n+\titem = qb.DocType(\"Item\")\n+\taccount = qb.DocType(\"Account\")\n+\tcustomer = qb.DocType(\"Customer\")\n+\tsupplier = qb.DocType(\"Supplier\")\n+\tsinv = qb.DocType(\"Sales Invoice\")\n+\tsinv_item = qb.DocType(\"Sales Invoice Item\")\n+\tpinv = qb.DocType(\"Purchase Invoice\")\n+\tpinv_item = qb.DocType(\"Purchase Invoice Item\")\n+\n+\tqb.from_(account).delete().where(\n+\t\t(account.account_name == \"Deferred Revenue\")\n+\t\t| (account.account_name == \"Deferred Expense\") & (account.company == \"_Test Company DR\")\n+\t).run()\n+\tqb.from_(item).delete().where(\n+\t\t(item.item_code == \"_Test Internet Subscription\") | (item.item_code == \"_Test Office Rent\")\n+\t).run()\n+\tqb.from_(customer).delete().where(customer.customer_name == \"_Test Customer DR\").run()\n+\tqb.from_(supplier).delete().where(supplier.supplier_name == \"_Test Furniture 
Supplier\").run()\n+\n+\t# delete existing invoices with deferred items\n+\tdeferred_invoices = (\n+\t\tqb.from_(sinv)\n+\t\t.join(sinv_item)\n+\t\t.on(sinv.name == sinv_item.parent)\n+\t\t.select(sinv.name)\n+\t\t.where(sinv_item.enable_deferred_revenue == 1)\n+\t\t.run()\n+\t)\n+\tif deferred_invoices:\n+\t\tqb.from_(sinv).delete().where(sinv.name.isin(deferred_invoices)).run()\n+\n+\tdeferred_invoices = (\n+\t\tqb.from_(pinv)\n+\t\t.join(pinv_item)\n+\t\t.on(pinv.name == pinv_item.parent)\n+\t\t.select(pinv.name)\n+\t\t.where(pinv_item.enable_deferred_expense == 1)\n+\t\t.run()\n+\t)\n+\tif deferred_invoices:\n+\t\tqb.from_(pinv).delete().where(pinv.name.isin(deferred_invoices)).run()\n"
}
|
[
{
"diff_hunk": "@@ -0,0 +1,436 @@\n+# Copyright (c) 2013, Frappe Technologies Pvt. Ltd. and contributors\n+# License: MIT. See LICENSE\n+\n+import frappe\n+from frappe import qb\n+from frappe.query_builder import Column, functions\n+from frappe.utils import add_days, date_diff, flt, get_first_day, get_last_day, rounded\n+\n+from erpnext.accounts.report.financial_statements import get_period_list\n+\n+\n+class Deferred_Item(object):\n+\t\"\"\"\n+\tHelper class for processing items with deferred revenue/expense\n+\t\"\"\"\n+\n+\tdef __init__(self, item, inv, gle_entries):\n+\t\tself.name = item\n+\t\tself.parent = inv.name\n+\t\tself.item_name = gle_entries[0].item_name\n+\t\tself.service_start_date = gle_entries[0].service_start_date\n+\t\tself.service_end_date = gle_entries[0].service_end_date\n+\t\tself.base_net_amount = gle_entries[0].base_net_amount\n+\t\tself.filters = inv.filters\n+\t\tself.period_list = inv.period_list\n+\n+\t\tif gle_entries[0].deferred_revenue_account:\n+\t\t\tself.type = \"Deferred Sale Item\"\n+\t\t\tself.deferred_account = gle_entries[0].deferred_revenue_account\n+\t\telif gle_entries[0].deferred_expense_account:\n+\t\t\tself.type = \"Deferred Purchase Item\"\n+\t\t\tself.deferred_account = gle_entries[0].deferred_expense_account\n+\n+\t\tself.gle_entries = []\n+\t\t# holds period wise total for item\n+\t\tself.period_total = []\n+\t\tself.last_entry_date = self.service_start_date\n+\n+\t\tif gle_entries:\n+\t\t\tself.gle_entries = gle_entries\n+\t\t\tfor x in self.gle_entries:\n+\t\t\t\tif self.get_amount(x):\n+\t\t\t\t\tself.last_entry_date = x.gle_posting_date\n+\n+\tdef report_data(self):\n+\t\t\"\"\"\n+\t\tGenerate report data for output\n+\t\t\"\"\"\n+\t\tret_data = frappe._dict({\"name\": self.item_name})\n+\t\tfor period in self.period_total:\n+\t\t\tret_data[period.key] = period.total\n+\t\t\tret_data.indent = 1\n+\t\treturn ret_data\n+\n+\tdef get_amount(self, entry):\n+\t\t\"\"\"\n+\t\tFor a given GL/Journal posting, get balance based on item type\n+\t\t\"\"\"\n+\t\tif self.type == \"Deferred Sale Item\":\n+\t\t\treturn entry.debit - entry.credit\n+\t\telif self.type == \"Deferred Purchase Item\":\n+\t\t\treturn -(entry.credit - entry.debit)\n+\t\treturn 0\n+\n+\tdef get_item_total(self):\n+\t\t\"\"\"\n+\t\tHelper method - calculate booked amount. 
Includes simulated postings as well\n+\t\t\"\"\"\n+\t\ttotal = 0\n+\t\tfor gle_posting in self.gle_entries:\n+\t\t\ttotal += self.get_amount(gle_posting)\n+\n+\t\treturn total\n+\n+\tdef calculate_amount(self, start_date, end_date):\n+\t\t\"\"\"\n+\t\tstart_date, end_date - datetime.datetime.date\n+\t\treturn - estimated amount to post for given period\n+\t\tCalculated based on already booked amount and item service period\n+\t\t\"\"\"\n+\t\ttotal_months = (\n+\t\t\t(self.service_end_date.year - self.service_start_date.year) * 12\n+\t\t\t+ (self.service_end_date.month - self.service_start_date.month)\n+\t\t\t+ 1\n+\t\t)\n+\n+\t\tprorate = date_diff(self.service_end_date, self.service_start_date) / date_diff(\n+\t\t\tget_last_day(self.service_end_date), get_first_day(self.service_start_date)\n+\t\t)\n+\n+\t\tactual_months = rounded(total_months * prorate, 1)\n+\n+\t\talready_booked_amount = self.get_item_total()\n+\t\tbase_amount = self.base_net_amount / actual_months\n+\n+\t\tif base_amount + already_booked_amount > self.base_net_amount:\n+\t\t\tbase_amount = self.base_net_amount - already_booked_amount\n+\n+\t\tif not (get_first_day(start_date) == start_date and get_last_day(end_date) == end_date):\n+\t\t\tpartial_month = flt(date_diff(end_date, start_date)) / flt(\n+\t\t\t\tdate_diff(get_last_day(end_date), get_first_day(start_date))\n+\t\t\t)\n+\t\t\tbase_amount *= rounded(partial_month, 1)\n+\n+\t\treturn base_amount\n+\n+\tdef make_dummy_gle(self, name, date, amount):\n+\t\t\"\"\"\n+\t\treturn - frappe._dict() of a dummy gle entry\n+\t\t\"\"\"\n+\t\tentry = frappe._dict(\n+\t\t\t{\"name\": name, \"gle_posting_date\": date, \"debit\": 0, \"credit\": 0, \"posted\": \"not\"}\n+\t\t)\n+\t\tif self.type == \"Deferred Sale Item\":\n+\t\t\tentry.debit = amount\n+\t\telif self.type == \"Deferred Purchase Item\":\n+\t\t\tentry.credit = amount\n+\t\treturn entry\n+\n+\tdef simulate_future_posting(self):\n+\t\t\"\"\"\n+\t\tsimulate future posting by creating dummy gl entries. 
starts from the last posting date.\n+\t\t\"\"\"\n+\t\tif add_days(self.last_entry_date, 1) < self.period_list[-1].to_date:\n+\t\t\tself.estimate_for_period_list = get_period_list(\n+\t\t\t\tself.filters.from_fiscal_year,\n+\t\t\t\tself.filters.to_fiscal_year,\n+\t\t\t\tadd_days(self.last_entry_date, 1),\n+\t\t\t\tself.period_list[-1].to_date,\n+\t\t\t\t\"Date Range\",\n+\t\t\t\t\"Monthly\",\n+\t\t\t\tcompany=self.filters.company,\n+\t\t\t)\n+\t\t\tfor period in self.estimate_for_period_list:\n+\t\t\t\tamount = self.calculate_amount(period.from_date, period.to_date)\n+\t\t\t\tgle = self.make_dummy_gle(period.key, period.to_date, amount)\n+\t\t\t\tself.gle_entries.append(gle)\n+\n+\tdef calculate_item_revenue_expense_for_period(self):\n+\t\t\"\"\"\n+\t\tcalculate item postings for each period and update period_total list\n+\t\t\"\"\"\n+\t\tfor period in self.period_list:\n+\t\t\tperiod_sum = 0\n+\t\t\tactual = 0\n+\t\t\tfor posting in self.gle_entries:\n+\t\t\t\t# if period.from_date <= posting.posting_date <= period.to_date:\n+\t\t\t\tif period.from_date <= posting.gle_posting_date <= period.to_date:\n+\t\t\t\t\tperiod_sum += self.get_amount(posting)\n+\t\t\t\t\tif posting.posted == \"posted\":\n+\t\t\t\t\t\tactual += self.get_amount(posting)\n+\n+\t\t\tself.period_total.append(\n+\t\t\t\tfrappe._dict({\"key\": period.key, \"total\": period_sum, \"actual\": actual})\n+\t\t\t)\n+\t\treturn self.period_total\n+\n+\n+class Deferred_Invoice(object):\n+\tdef __init__(self, invoice, items, filters, period_list):\n+\t\t\"\"\"\n+\t\tHelper class for processing invoices with deferred revenue/expense items\n+\t\tinvoice - string : invoice name\n+\t\titems - list : frappe._dict() with item details. Refer Deferred_Item for required fields\n+\t\t\"\"\"\n+\t\tself.name = invoice\n+\t\tself.posting_date = items[0].posting_date\n+\t\tself.filters = filters\n+\t\tself.period_list = period_list\n+\t\t# holds period wise total for invoice\n+\t\tself.period_total = []\n+\n+\t\tif items[0].deferred_revenue_account:\n+\t\t\tself.type = \"Sales\"\n+\t\telif items[0].deferred_expense_account:\n+\t\t\tself.type = \"Purchase\"\n+\n+\t\tself.items = []\n+\t\t# for each uniq items\n+\t\tself.uniq_items = set([x.item for x in items])\n+\t\tfor item in self.uniq_items:\n+\t\t\tself.items.append(Deferred_Item(item, self, [x for x in items if x.item == item]))\n+\n+\tdef get_postings(self):",
"line": null,
"original_line": 184,
"original_start_line": null,
"path": "erpnext/accounts/report/deferred_revenue_and_expense/deferred_revenue_and_expense.py",
"start_line": null,
"text": "@user1:\nSeems like a dead function, not called from anywhere\n\n@author:\nremoved"
}
] |
64f68d5e94e70a6c457bee2fab95bda928d722c1
|
diff --git a/erpnext/accounts/report/deferred_revenue_and_expense/__init__.py b/erpnext/accounts/report/deferred_revenue_and_expense/__init__.py
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/erpnext/accounts/report/deferred_revenue_and_expense/deferred_revenue_and_expense.js b/erpnext/accounts/report/deferred_revenue_and_expense/deferred_revenue_and_expense.js
new file mode 100644
index 000000000000..0056b9e8f564
--- /dev/null
+++ b/erpnext/accounts/report/deferred_revenue_and_expense/deferred_revenue_and_expense.js
@@ -0,0 +1,114 @@
+// Copyright (c) 2016, Frappe Technologies Pvt. Ltd. and contributors
+// For license information, please see license.txt
+/* eslint-disable */
+
+function get_filters() {
+ let filters = [
+ {
+ "fieldname":"company",
+ "label": __("Company"),
+ "fieldtype": "Link",
+ "options": "Company",
+ "default": frappe.defaults.get_user_default("Company"),
+ "reqd": 1
+ },
+ {
+ "fieldname":"filter_based_on",
+ "label": __("Filter Based On"),
+ "fieldtype": "Select",
+ "options": ["Fiscal Year", "Date Range"],
+ "default": ["Fiscal Year"],
+ "reqd": 1,
+ on_change: function() {
+ let filter_based_on = frappe.query_report.get_filter_value('filter_based_on');
+ frappe.query_report.toggle_filter_display('from_fiscal_year', filter_based_on === 'Date Range');
+ frappe.query_report.toggle_filter_display('to_fiscal_year', filter_based_on === 'Date Range');
+ frappe.query_report.toggle_filter_display('period_start_date', filter_based_on === 'Fiscal Year');
+ frappe.query_report.toggle_filter_display('period_end_date', filter_based_on === 'Fiscal Year');
+
+ frappe.query_report.refresh();
+ }
+ },
+ {
+ "fieldname":"period_start_date",
+ "label": __("Start Date"),
+ "fieldtype": "Date",
+ "hidden": 1,
+ "reqd": 1
+ },
+ {
+ "fieldname":"period_end_date",
+ "label": __("End Date"),
+ "fieldtype": "Date",
+ "hidden": 1,
+ "reqd": 1
+ },
+ {
+ "fieldname":"from_fiscal_year",
+ "label": __("Start Year"),
+ "fieldtype": "Link",
+ "options": "Fiscal Year",
+ "default": frappe.defaults.get_user_default("fiscal_year"),
+ "reqd": 1
+ },
+ {
+ "fieldname":"to_fiscal_year",
+ "label": __("End Year"),
+ "fieldtype": "Link",
+ "options": "Fiscal Year",
+ "default": frappe.defaults.get_user_default("fiscal_year"),
+ "reqd": 1
+ },
+ {
+ "fieldname": "periodicity",
+ "label": __("Periodicity"),
+ "fieldtype": "Select",
+ "options": [
+ { "value": "Monthly", "label": __("Monthly") },
+ { "value": "Quarterly", "label": __("Quarterly") },
+ { "value": "Half-Yearly", "label": __("Half-Yearly") },
+ { "value": "Yearly", "label": __("Yearly") }
+ ],
+ "default": "Monthly",
+ "reqd": 1
+ },
+ {
+ "fieldname": "type",
+ "label": __("Invoice Type"),
+ "fieldtype": "Select",
+ "options": [
+ { "value": "Revenue", "label": __("Revenue") },
+ { "value": "Expense", "label": __("Expense") }
+ ],
+ "default": "Revenue",
+ "reqd": 1
+ },
+ {
+ "fieldname" : "with_upcoming_postings",
+ "label": __("Show with upcoming revenue/expense"),
+ "fieldtype": "Check",
+ "default": 1
+ }
+ ]
+
+ return filters;
+}
+
+frappe.query_reports["Deferred Revenue and Expense"] = {
+ "filters": get_filters(),
+ "formatter": function(value, row, column, data, default_formatter){
+ return default_formatter(value, row, column, data);
+ },
+ onload: function(report){
+ let fiscal_year = frappe.defaults.get_user_default("fiscal_year");
+
+ frappe.model.with_doc("Fiscal Year", fiscal_year, function(r) {
+ var fy = frappe.model.get_doc("Fiscal Year", fiscal_year);
+ frappe.query_report.set_filter_value({
+ period_start_date: fy.year_start_date,
+ period_end_date: fy.year_end_date
+ });
+ });
+ }
+};
+
diff --git a/erpnext/accounts/report/deferred_revenue_and_expense/deferred_revenue_and_expense.json b/erpnext/accounts/report/deferred_revenue_and_expense/deferred_revenue_and_expense.json
new file mode 100644
index 000000000000..c7dfb3b7142a
--- /dev/null
+++ b/erpnext/accounts/report/deferred_revenue_and_expense/deferred_revenue_and_expense.json
@@ -0,0 +1,32 @@
+{
+ "add_total_row": 0,
+ "columns": [],
+ "creation": "2021-12-10 19:27:14.654220",
+ "disable_prepared_report": 0,
+ "disabled": 0,
+ "docstatus": 0,
+ "doctype": "Report",
+ "filters": [],
+ "idx": 0,
+ "is_standard": "Yes",
+ "modified": "2021-12-10 19:27:14.654220",
+ "modified_by": "Administrator",
+ "module": "Accounts",
+ "name": "Deferred Revenue and Expense",
+ "owner": "Administrator",
+ "prepared_report": 0,
+ "ref_doctype": "GL Entry",
+ "report_name": "Deferred Revenue and Expense",
+ "report_type": "Script Report",
+ "roles": [
+ {
+ "role": "Accounts User"
+ },
+ {
+ "role": "Accounts Manager"
+ },
+ {
+ "role": "Auditor"
+ }
+ ]
+}
\ No newline at end of file
diff --git a/erpnext/accounts/report/deferred_revenue_and_expense/deferred_revenue_and_expense.py b/erpnext/accounts/report/deferred_revenue_and_expense/deferred_revenue_and_expense.py
new file mode 100644
index 000000000000..a4842c1844f0
--- /dev/null
+++ b/erpnext/accounts/report/deferred_revenue_and_expense/deferred_revenue_and_expense.py
@@ -0,0 +1,440 @@
+# Copyright (c) 2013, Frappe Technologies Pvt. Ltd. and contributors
+# License: MIT. See LICENSE
+
+import frappe
+from frappe import _, qb
+from frappe.query_builder import Column, functions
+from frappe.utils import add_days, date_diff, flt, get_first_day, get_last_day, rounded
+
+from erpnext.accounts.report.financial_statements import get_period_list
+
+
+class Deferred_Item(object):
+ """
+ Helper class for processing items with deferred revenue/expense
+ """
+
+ def __init__(self, item, inv, gle_entries):
+ self.name = item
+ self.parent = inv.name
+ self.item_name = gle_entries[0].item_name
+ self.service_start_date = gle_entries[0].service_start_date
+ self.service_end_date = gle_entries[0].service_end_date
+ self.base_net_amount = gle_entries[0].base_net_amount
+ self.filters = inv.filters
+ self.period_list = inv.period_list
+
+ if gle_entries[0].deferred_revenue_account:
+ self.type = "Deferred Sale Item"
+ self.deferred_account = gle_entries[0].deferred_revenue_account
+ elif gle_entries[0].deferred_expense_account:
+ self.type = "Deferred Purchase Item"
+ self.deferred_account = gle_entries[0].deferred_expense_account
+
+ self.gle_entries = []
+ # holds period wise total for item
+ self.period_total = []
+ self.last_entry_date = self.service_start_date
+
+ if gle_entries:
+ self.gle_entries = gle_entries
+ for x in self.gle_entries:
+ if self.get_amount(x):
+ self.last_entry_date = x.gle_posting_date
+
+ def report_data(self):
+ """
+ Generate report data for output
+ """
+ ret_data = frappe._dict({"name": self.item_name})
+ for period in self.period_total:
+ ret_data[period.key] = period.total
+ ret_data.indent = 1
+ return ret_data
+
+ def get_amount(self, entry):
+ """
+ For a given GL/Journal posting, get balance based on item type
+ """
+ if self.type == "Deferred Sale Item":
+ return entry.debit - entry.credit
+ elif self.type == "Deferred Purchase Item":
+ return -(entry.credit - entry.debit)
+ return 0
+
+ def get_item_total(self):
+ """
+ Helper method - calculate booked amount. Includes simulated postings as well
+ """
+ total = 0
+ for gle_posting in self.gle_entries:
+ total += self.get_amount(gle_posting)
+
+ return total
+
+ def calculate_amount(self, start_date, end_date):
+ """
+ start_date, end_date - datetime.datetime.date
+ return - estimated amount to post for given period
+ Calculated based on already booked amount and item service period
+ """
+ total_months = (
+ (self.service_end_date.year - self.service_start_date.year) * 12
+ + (self.service_end_date.month - self.service_start_date.month)
+ + 1
+ )
+
+ prorate = date_diff(self.service_end_date, self.service_start_date) / date_diff(
+ get_last_day(self.service_end_date), get_first_day(self.service_start_date)
+ )
+
+ actual_months = rounded(total_months * prorate, 1)
+
+ already_booked_amount = self.get_item_total()
+ base_amount = self.base_net_amount / actual_months
+
+ if base_amount + already_booked_amount > self.base_net_amount:
+ base_amount = self.base_net_amount - already_booked_amount
+
+ if not (get_first_day(start_date) == start_date and get_last_day(end_date) == end_date):
+ partial_month = flt(date_diff(end_date, start_date)) / flt(
+ date_diff(get_last_day(end_date), get_first_day(start_date))
+ )
+ base_amount *= rounded(partial_month, 1)
+
+ return base_amount
+
+ def make_dummy_gle(self, name, date, amount):
+ """
+ return - frappe._dict() of a dummy gle entry
+ """
+ entry = frappe._dict(
+ {"name": name, "gle_posting_date": date, "debit": 0, "credit": 0, "posted": "not"}
+ )
+ if self.type == "Deferred Sale Item":
+ entry.debit = amount
+ elif self.type == "Deferred Purchase Item":
+ entry.credit = amount
+ return entry
+
+ def simulate_future_posting(self):
+ """
+ simulate future posting by creating dummy gl entries. starts from the last posting date.
+ """
+ if add_days(self.last_entry_date, 1) < self.period_list[-1].to_date:
+ self.estimate_for_period_list = get_period_list(
+ self.filters.from_fiscal_year,
+ self.filters.to_fiscal_year,
+ add_days(self.last_entry_date, 1),
+ self.period_list[-1].to_date,
+ "Date Range",
+ "Monthly",
+ company=self.filters.company,
+ )
+ for period in self.estimate_for_period_list:
+ amount = self.calculate_amount(period.from_date, period.to_date)
+ gle = self.make_dummy_gle(period.key, period.to_date, amount)
+ self.gle_entries.append(gle)
+
+ def calculate_item_revenue_expense_for_period(self):
+ """
+ calculate item postings for each period and update period_total list
+ """
+ for period in self.period_list:
+ period_sum = 0
+ actual = 0
+ for posting in self.gle_entries:
+ # if period.from_date <= posting.posting_date <= period.to_date:
+ if period.from_date <= posting.gle_posting_date <= period.to_date:
+ period_sum += self.get_amount(posting)
+ if posting.posted == "posted":
+ actual += self.get_amount(posting)
+
+ self.period_total.append(
+ frappe._dict({"key": period.key, "total": period_sum, "actual": actual})
+ )
+ return self.period_total
+
+
+class Deferred_Invoice(object):
+ def __init__(self, invoice, items, filters, period_list):
+ """
+ Helper class for processing invoices with deferred revenue/expense items
+ invoice - string : invoice name
+ items - list : frappe._dict() with item details. Refer Deferred_Item for required fields
+ """
+ self.name = invoice
+ self.posting_date = items[0].posting_date
+ self.filters = filters
+ self.period_list = period_list
+ # holds period wise total for invoice
+ self.period_total = []
+
+ if items[0].deferred_revenue_account:
+ self.type = "Sales"
+ elif items[0].deferred_expense_account:
+ self.type = "Purchase"
+
+ self.items = []
+ # for each uniq items
+ self.uniq_items = set([x.item for x in items])
+ for item in self.uniq_items:
+ self.items.append(Deferred_Item(item, self, [x for x in items if x.item == item]))
+
+ def calculate_invoice_revenue_expense_for_period(self):
+ """
+ calculate deferred revenue/expense for all items in invoice
+ """
+ # initialize period_total list for invoice
+ for period in self.period_list:
+ self.period_total.append(frappe._dict({"key": period.key, "total": 0, "actual": 0}))
+
+ for item in self.items:
+ item_total = item.calculate_item_revenue_expense_for_period()
+ # update invoice total
+ for idx, period in enumerate(self.period_list, 0):
+ self.period_total[idx].total += item_total[idx].total
+ self.period_total[idx].actual += item_total[idx].actual
+ return self.period_total
+
+ def estimate_future(self):
+ """
+ create dummy GL entries for upcoming months for all items in invoice
+ """
+ [item.simulate_future_posting() for item in self.items]
+
+ def report_data(self):
+ """
+ generate report data for invoice, includes invoice total
+ """
+ ret_data = []
+ inv_total = frappe._dict({"name": self.name})
+ for x in self.period_total:
+ inv_total[x.key] = x.total
+ inv_total.indent = 0
+ ret_data.append(inv_total)
+ list(map(lambda item: ret_data.append(item.report_data()), self.items))
+ return ret_data
+
+
+class Deferred_Revenue_and_Expense_Report(object):
+ def __init__(self, filters=None):
+ """
+ Initialize deferred revenue/expense report with user provided filters or system defaults, if none is provided
+ """
+
+ # If no filters are provided, get user defaults
+ if not filters:
+ fiscal_year = frappe.get_doc("Fiscal Year", frappe.defaults.get_user_default("fiscal_year"))
+ self.filters = frappe._dict(
+ {
+ "company": frappe.defaults.get_user_default("Company"),
+ "filter_based_on": "Fiscal Year",
+ "period_start_date": fiscal_year.year_start_date,
+ "period_end_date": fiscal_year.year_end_date,
+ "from_fiscal_year": fiscal_year.year,
+ "to_fiscal_year": fiscal_year.year,
+ "periodicity": "Monthly",
+ "type": "Revenue",
+ "with_upcoming_postings": True,
+ }
+ )
+ else:
+ self.filters = frappe._dict(filters)
+
+ self.period_list = None
+ self.deferred_invoices = []
+ # holds period wise total for report
+ self.period_total = []
+
+ def get_period_list(self):
+ """
+ Figure out selected period based on filters
+ """
+ self.period_list = get_period_list(
+ self.filters.from_fiscal_year,
+ self.filters.to_fiscal_year,
+ self.filters.period_start_date,
+ self.filters.period_end_date,
+ self.filters.filter_based_on,
+ self.filters.periodicity,
+ company=self.filters.company,
+ )
+
+ def get_invoices(self):
+ """
+ Get all sales and purchase invoices which has deferred revenue/expense items
+ """
+ gle = qb.DocType("GL Entry")
+ # column doesn't have an alias option
+ posted = Column("posted")
+
+ if self.filters.type == "Revenue":
+ inv = qb.DocType("Sales Invoice")
+ inv_item = qb.DocType("Sales Invoice Item")
+ deferred_flag_field = inv_item["enable_deferred_revenue"]
+ deferred_account_field = inv_item["deferred_revenue_account"]
+
+ elif self.filters.type == "Expense":
+ inv = qb.DocType("Purchase Invoice")
+ inv_item = qb.DocType("Purchase Invoice Item")
+ deferred_flag_field = inv_item["enable_deferred_expense"]
+ deferred_account_field = inv_item["deferred_expense_account"]
+
+ query = (
+ qb.from_(inv_item)
+ .join(inv)
+ .on(inv.name == inv_item.parent)
+ .join(gle)
+ .on((inv_item.name == gle.voucher_detail_no) & (deferred_account_field == gle.account))
+ .select(
+ inv.name.as_("doc"),
+ inv.posting_date,
+ inv_item.name.as_("item"),
+ inv_item.item_name,
+ inv_item.service_start_date,
+ inv_item.service_end_date,
+ inv_item.base_net_amount,
+ deferred_account_field,
+ gle.posting_date.as_("gle_posting_date"),
+ functions.Sum(gle.debit).as_("debit"),
+ functions.Sum(gle.credit).as_("credit"),
+ posted,
+ )
+ .where(
+ (inv.docstatus == 1)
+ & (deferred_flag_field == 1)
+ & (
+ (
+ (self.period_list[0].from_date >= inv_item.service_start_date)
+ & (inv_item.service_end_date >= self.period_list[0].from_date)
+ )
+ | (
+ (inv_item.service_start_date >= self.period_list[0].from_date)
+ & (inv_item.service_start_date <= self.period_list[-1].to_date)
+ )
+ )
+ )
+ .groupby(inv.name, inv_item.name, gle.posting_date)
+ .orderby(gle.posting_date)
+ )
+ self.invoices = query.run(as_dict=True)
+
+ uniq_invoice = set([x.doc for x in self.invoices])
+ for inv in uniq_invoice:
+ self.deferred_invoices.append(
+ Deferred_Invoice(
+ inv, [x for x in self.invoices if x.doc == inv], self.filters, self.period_list
+ )
+ )
+
+ def estimate_future(self):
+ """
+ For all Invoices estimate upcoming postings
+ """
+ for x in self.deferred_invoices:
+ x.estimate_future()
+
+ def calculate_revenue_and_expense(self):
+ """
+ calculate the deferred revenue/expense for all invoices
+ """
+ # initialize period_total list for report
+ for period in self.period_list:
+ self.period_total.append(frappe._dict({"key": period.key, "total": 0, "actual": 0}))
+
+ for inv in self.deferred_invoices:
+ inv_total = inv.calculate_invoice_revenue_expense_for_period()
+ # calculate total for whole report
+ for idx, period in enumerate(self.period_list, 0):
+ self.period_total[idx].total += inv_total[idx].total
+ self.period_total[idx].actual += inv_total[idx].actual
+
+ def get_columns(self):
+ columns = []
+ columns.append({"label": _("Name"), "fieldname": "name", "fieldtype": "Data", "read_only": 1})
+ for period in self.period_list:
+ columns.append(
+ {
+ "label": _(period.label),
+ "fieldname": period.key,
+ "fieldtype": "Currency",
+ "read_only": 1,
+ })
+ return columns
+
+ def generate_report_data(self):
+ """
+ Generate report data for all invoices. Adds total rows for revenue and expense
+ """
+ ret = []
+
+ for inv in self.deferred_invoices:
+ ret += inv.report_data()
+
+ # empty row for padding
+ ret += [{}]
+
+ # add total row
+ if ret is not []:
+ if self.filters.type == "Revenue":
+ total_row = frappe._dict({"name": "Total Deferred Income"})
+ elif self.filters.type == "Expense":
+ total_row = frappe._dict({"name": "Total Deferred Expense"})
+
+ for idx, period in enumerate(self.period_list, 0):
+ total_row[period.key] = self.period_total[idx].total
+ ret.append(total_row)
+
+ return ret
+
+ def prepare_chart(self):
+ chart = {
+ "data": {
+ "labels": [period.label for period in self.period_list],
+ "datasets": [
+ {
+ "name": "Actual Posting",
+ "chartType": "bar",
+ "values": [x.actual for x in self.period_total],
+ }
+ ],
+ },
+ "type": "axis-mixed",
+ "height": 500,
+ "axisOptions": {"xAxisMode": "Tick", "xIsSeries": True},
+ "barOptions": {"stacked": False, "spaceRatio": 0.5},
+ }
+
+ if self.filters.with_upcoming_postings:
+ chart["data"]["datasets"].append({
+ "name": "Expected",
+ "chartType": "line",
+ "values": [x.total for x in self.period_total]
+ })
+
+ return chart
+
+ def run(self, *args, **kwargs):
+ """
+ Run report and generate data
+ """
+ self.deferred_invoices.clear()
+ self.get_period_list()
+ self.get_invoices()
+
+ if self.filters.with_upcoming_postings:
+ self.estimate_future()
+ self.calculate_revenue_and_expense()
+
+
+def execute(filters=None):
+ report = Deferred_Revenue_and_Expense_Report(filters=filters)
+ report.run()
+
+ columns = report.get_columns()
+ data = report.generate_report_data()
+ message = []
+ chart = report.prepare_chart()
+
+ return columns, data, message, chart
diff --git a/erpnext/accounts/report/deferred_revenue_and_expense/test_deferred_revenue_and_expense.py b/erpnext/accounts/report/deferred_revenue_and_expense/test_deferred_revenue_and_expense.py
new file mode 100644
index 000000000000..1de6fb68241a
--- /dev/null
+++ b/erpnext/accounts/report/deferred_revenue_and_expense/test_deferred_revenue_and_expense.py
@@ -0,0 +1,253 @@
+import unittest
+
+import frappe
+from frappe import qb
+from frappe.utils import nowdate
+
+from erpnext.accounts.doctype.account.test_account import create_account
+from erpnext.accounts.doctype.purchase_invoice.test_purchase_invoice import make_purchase_invoice
+from erpnext.accounts.doctype.sales_invoice.test_sales_invoice import create_sales_invoice
+from erpnext.accounts.report.deferred_revenue_and_expense.deferred_revenue_and_expense import (
+ Deferred_Revenue_and_Expense_Report,
+)
+from erpnext.buying.doctype.supplier.test_supplier import create_supplier
+from erpnext.stock.doctype.item.test_item import create_item
+
+
+class TestDeferredRevenueAndExpense(unittest.TestCase):
+ @classmethod
+ def setUpClass(self):
+ clear_old_entries()
+ create_company()
+
+ def test_deferred_revenue(self):
+ # created deferred expense accounts, if not found
+ deferred_revenue_account = create_account(
+ account_name="Deferred Revenue",
+ parent_account="Current Liabilities - _CD",
+ company="_Test Company DR",
+ )
+
+ acc_settings = frappe.get_doc("Accounts Settings", "Accounts Settings")
+ acc_settings.book_deferred_entries_based_on = "Months"
+ acc_settings.save()
+
+ customer = frappe.new_doc("Customer")
+ customer.customer_name = "_Test Customer DR"
+ customer.type = "Individual"
+ customer.insert()
+
+ item = create_item(
+ "_Test Internet Subscription",
+ is_stock_item=0,
+ warehouse="All Warehouses - _CD",
+ company="_Test Company DR",
+ )
+ item.enable_deferred_revenue = 1
+ item.deferred_revenue_account = deferred_revenue_account
+ item.no_of_months = 3
+ item.save()
+
+ si = create_sales_invoice(
+ item=item.name,
+ company="_Test Company DR",
+ customer="_Test Customer DR",
+ debit_to="Debtors - _CD",
+ posting_date="2021-05-01",
+ parent_cost_center="Main - _CD",
+ cost_center="Main - _CD",
+ do_not_submit=True,
+ rate=300,
+ price_list_rate=300,
+ )
+ si.items[0].enable_deferred_revenue = 1
+ si.items[0].service_start_date = "2021-05-01"
+ si.items[0].service_end_date = "2021-08-01"
+ si.items[0].deferred_revenue_account = deferred_revenue_account
+ si.items[0].income_account = "Sales - _CD"
+ si.save()
+ si.submit()
+
+ pda = frappe.get_doc(
+ dict(
+ doctype="Process Deferred Accounting",
+ posting_date=nowdate(),
+ start_date="2021-05-01",
+ end_date="2021-08-01",
+ type="Income",
+ company="_Test Company DR",
+ )
+ )
+ pda.insert()
+ pda.submit()
+
+ # execute report
+ fiscal_year = frappe.get_doc("Fiscal Year", frappe.defaults.get_user_default("fiscal_year"))
+ self.filters = frappe._dict(
+ {
+ "company": frappe.defaults.get_user_default("Company"),
+ "filter_based_on": "Date Range",
+ "period_start_date": "2021-05-01",
+ "period_end_date": "2021-08-01",
+ "from_fiscal_year": fiscal_year.year,
+ "to_fiscal_year": fiscal_year.year,
+ "periodicity": "Monthly",
+ "type": "Revenue",
+ "with_upcoming_postings": False,
+ }
+ )
+
+ report = Deferred_Revenue_and_Expense_Report(filters=self.filters)
+ report.run()
+ expected = [
+ {"key": "may_2021", "total": 100.0, "actual": 100.0},
+ {"key": "jun_2021", "total": 100.0, "actual": 100.0},
+ {"key": "jul_2021", "total": 100.0, "actual": 100.0},
+ {"key": "aug_2021", "total": 0, "actual": 0},
+ ]
+ self.assertEqual(report.period_total, expected)
+
+ def test_deferred_expense(self):
+ # created deferred expense accounts, if not found
+ deferred_expense_account = create_account(
+ account_name="Deferred Expense",
+ parent_account="Current Assets - _CD",
+ company="_Test Company DR",
+ )
+
+ acc_settings = frappe.get_doc("Accounts Settings", "Accounts Settings")
+ acc_settings.book_deferred_entries_based_on = "Months"
+ acc_settings.save()
+
+ supplier = create_supplier(
+ supplier_name="_Test Furniture Supplier", supplier_group="Local", supplier_type="Company"
+ )
+ supplier.save()
+
+ item = create_item(
+ "_Test Office Desk",
+ is_stock_item=0,
+ warehouse="All Warehouses - _CD",
+ company="_Test Company DR",
+ )
+ item.enable_deferred_expense = 1
+ item.deferred_expense_account = deferred_expense_account
+ item.no_of_months_exp = 3
+ item.save()
+
+ pi = make_purchase_invoice(
+ item=item.name,
+ company="_Test Company DR",
+ supplier="_Test Furniture Supplier",
+ is_return=False,
+ update_stock=False,
+ posting_date=frappe.utils.datetime.date(2021, 5, 1),
+ parent_cost_center="Main - _CD",
+ cost_center="Main - _CD",
+ do_not_save=True,
+ rate=300,
+ price_list_rate=300,
+ warehouse="All Warehouses - _CD",
+ qty=1,
+ )
+ pi.set_posting_time = True
+ pi.items[0].enable_deferred_expense = 1
+ pi.items[0].service_start_date = "2021-05-01"
+ pi.items[0].service_end_date = "2021-08-01"
+ pi.items[0].deferred_expense_account = deferred_expense_account
+ pi.items[0].expense_account = "Office Maintenance Expenses - _CD"
+ pi.save()
+ pi.submit()
+
+ pda = frappe.get_doc(
+ dict(
+ doctype="Process Deferred Accounting",
+ posting_date=nowdate(),
+ start_date="2021-05-01",
+ end_date="2021-08-01",
+ type="Expense",
+ company="_Test Company DR",
+ )
+ )
+ pda.insert()
+ pda.submit()
+
+ # execute report
+ fiscal_year = frappe.get_doc("Fiscal Year", frappe.defaults.get_user_default("fiscal_year"))
+ self.filters = frappe._dict(
+ {
+ "company": frappe.defaults.get_user_default("Company"),
+ "filter_based_on": "Date Range",
+ "period_start_date": "2021-05-01",
+ "period_end_date": "2021-08-01",
+ "from_fiscal_year": fiscal_year.year,
+ "to_fiscal_year": fiscal_year.year,
+ "periodicity": "Monthly",
+ "type": "Expense",
+ "with_upcoming_postings": False,
+ }
+ )
+
+ report = Deferred_Revenue_and_Expense_Report(filters=self.filters)
+ report.run()
+ expected = [
+ {"key": "may_2021", "total": -100.0, "actual": -100.0},
+ {"key": "jun_2021", "total": -100.0, "actual": -100.0},
+ {"key": "jul_2021", "total": -100.0, "actual": -100.0},
+ {"key": "aug_2021", "total": 0, "actual": 0},
+ ]
+ self.assertEqual(report.period_total, expected)
+
+
+def create_company():
+ company = frappe.db.exists("Company", "_Test Company DR")
+ if not company:
+ company = frappe.new_doc("Company")
+ company.company_name = "_Test Company DR"
+ company.default_currency = "INR"
+ company.chart_of_accounts = "Standard"
+ company.insert()
+
+
+def clear_old_entries():
+ item = qb.DocType("Item")
+ account = qb.DocType("Account")
+ customer = qb.DocType("Customer")
+ supplier = qb.DocType("Supplier")
+ sinv = qb.DocType("Sales Invoice")
+ sinv_item = qb.DocType("Sales Invoice Item")
+ pinv = qb.DocType("Purchase Invoice")
+ pinv_item = qb.DocType("Purchase Invoice Item")
+
+ qb.from_(account).delete().where(
+ (account.account_name == "Deferred Revenue")
+ | (account.account_name == "Deferred Expense") & (account.company == "_Test Company DR")
+ ).run()
+ qb.from_(item).delete().where(
+ (item.item_code == "_Test Internet Subscription") | (item.item_code == "_Test Office Rent")
+ ).run()
+ qb.from_(customer).delete().where(customer.customer_name == "_Test Customer DR").run()
+ qb.from_(supplier).delete().where(supplier.supplier_name == "_Test Furniture Supplier").run()
+
+ # delete existing invoices with deferred items
+ deferred_invoices = (
+ qb.from_(sinv)
+ .join(sinv_item)
+ .on(sinv.name == sinv_item.parent)
+ .select(sinv.name)
+ .where(sinv_item.enable_deferred_revenue == 1)
+ .run()
+ )
+ if deferred_invoices:
+ qb.from_(sinv).delete().where(sinv.name.isin(deferred_invoices)).run()
+
+ deferred_invoices = (
+ qb.from_(pinv)
+ .join(pinv_item)
+ .on(pinv.name == pinv_item.parent)
+ .select(pinv.name)
+ .where(pinv_item.enable_deferred_expense == 1)
+ .run()
+ )
+ if deferred_invoices:
+ qb.from_(pinv).delete().where(pinv.name.isin(deferred_invoices)).run()
|
{
"difficulty": "high",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
}
|
|
frappe__erpnext-26949@6e6be15
|
frappe/erpnext
|
Python
| 26,949
|
fix: Pricing Rule on Transaction Based on Coupon
|
> A Pricing Rule created as coupon-based gets applied to all transactions
Fixes #26948
|
2021-08-13T20:54:51Z
|
Transaction Pricing Rule gets applied even if it is coupon-based
## Description of the issue
- A Pricing Rule created for a coupon gets applied to all transactions even when no coupon is applied (see the sketch after the steps below)
## Steps to reproduce the issue
1. Create a coupon-based Pricing Rule, selecting Apply On as Transaction and Price or Product Discount as Price
2. In any document (Quotation, etc.) to which the pricing rule applies, the discount is applied by default even without the coupon.
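
A minimal sketch of the guard the fix is expected to add (illustrative only; the helper name is hypothetical, while the field and doctype names `coupon_code_based`, `coupon_code`, `Coupon Code` and `pricing_rule` are taken from the merged patch further down):

```python
import frappe


def should_apply_transaction_rule(doc, rule):
	"""Hypothetical helper mirroring the fix: a coupon-based pricing rule
	must only apply when the document's coupon code is linked to that rule."""
	if not rule.coupon_code_based:
		# ordinary transaction rules keep applying as before
		return True
	coupon = doc.get("coupon_code")
	if not coupon:
		# coupon-based rule but no coupon selected on the document
		return False
	# the Coupon Code doctype stores the pricing rule it belongs to
	return frappe.db.get_value("Coupon Code", coupon, "pricing_rule") == rule.name
```

In the merged patch this check is inlined in `apply_pricing_rule_on_transaction`, which resets the discount field to 0 whenever the rule should not apply.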
|
[
{
"body": "\r\n## Description of the issue\r\n\r\n- Pricing Rule created for coupon gets applied to all transactions without coupon is applied\r\n\r\n## Steps to reproduce the issue\r\n\r\n1. Create Pricing Rule for Coupon Based select Apply on Transaction, Price or Product Discount - Price\r\n2. In Any Document [Quotation,..] which applies the pricing rule, by default discount is applied to all without the Coupon.\r\n",
"number": 26948,
"title": "Transaction Pricing Rule get applied even if it a based-on coupon"
}
] |
a9852a54830a68f388956b972933dfac8aec78f9
|
{
"head_commit": "6e6be156d028f571134223699ce281d134421b10",
"head_commit_message": "Merge branch 'develop' into develop",
"patch_to_review": "diff --git a/erpnext/accounts/doctype/pricing_rule/utils.py b/erpnext/accounts/doctype/pricing_rule/utils.py\nindex 94abf3b3c06f..96a7672c495a 100644\n--- a/erpnext/accounts/doctype/pricing_rule/utils.py\n+++ b/erpnext/accounts/doctype/pricing_rule/utils.py\n@@ -475,6 +475,10 @@ def apply_pricing_rule_on_transaction(doc):\n \t\t\t\t\t\tfrappe.msgprint(_(\"User has not applied rule on the invoice {0}\")\n \t\t\t\t\t\t\t.format(doc.name))\n \t\t\t\t\telse:\n+\t\t\t\t\t\tif d.coupon_code_based:\n+\t\t\t\t\t\t\tif not doc.get('coupon_code') or doc.get('coupon_code') is None:\n+\t\t\t\t\t\t\t\tdoc.set(field, 0)\n+\t\t\t\t\t\t\t\tcontinue\n \t\t\t\t\t\tdoc.set(field, d.get(pr_field))\n \n \t\t\t\tdoc.calculate_taxes_and_totals()\n"
}
|
[
{
"diff_hunk": "@@ -475,6 +475,10 @@ def apply_pricing_rule_on_transaction(doc):\n \t\t\t\t\t\tfrappe.msgprint(_(\"User has not applied rule on the invoice {0}\")\n \t\t\t\t\t\t\t.format(doc.name))\n \t\t\t\t\telse:\n+\t\t\t\t\t\tif d.coupon_code_based:\n+\t\t\t\t\t\t\tif not doc.get('coupon_code') or doc.get('coupon_code') is None:\n+\t\t\t\t\t\t\t\tdoc.set(field, 0)\n+\t\t\t\t\t\t\t\tcontinue\n \t\t\t\t\t\tdoc.set(field, d.get(pr_field))",
"line": null,
"original_line": 482,
"original_start_line": 478,
"path": "erpnext/accounts/doctype/pricing_rule/utils.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n\t\t\t\t\t\tif not d.coupon_code_based:\r\n\t\t\t\t\t\t\tdoc.set(field, d.get(pr_field))\r\n\t\t\t\t\t\telif doc.get('coupon_code'):\r\n\t\t\t\t\t\t\t# coupon code based pricing rule\r\n\t\t\t\t\t\t\tcoupon_code_pricing_rule = frappe.db.get_value('Coupon Code', doc.get('coupon_code'), 'pricing_rule')\r\n\t\t\t\t\t\t\tif coupon_code_pricing_rule == d.name:\r\n\t\t\t\t\t\t\t\t# if selected coupon code is linked with pricing rule\r\n\t\t\t\t\t\t\t\tdoc.set(field, d.get(pr_field))\r\n\t\t\t\t\t\t\telse:\r\n\t\t\t\t\t\t\t\t# reset discount if not linked\r\n\t\t\t\t\t\t\t\tdoc.set(field, 0)\r\n\t\t\t\t\t\telse:\r\n\t\t\t\t\t\t\t# if coupon code based but no coupon code selected\r\n\t\t\t\t\t\t\tdoc.set(field, 0)\r\n```"
}
] |
bd71999d25687b062fcfec892a54dc57458a1ba5
|
diff --git a/erpnext/accounts/doctype/pricing_rule/utils.py b/erpnext/accounts/doctype/pricing_rule/utils.py
index 94abf3b3c06f..5467cb0bc5b0 100644
--- a/erpnext/accounts/doctype/pricing_rule/utils.py
+++ b/erpnext/accounts/doctype/pricing_rule/utils.py
@@ -475,7 +475,20 @@ def apply_pricing_rule_on_transaction(doc):
frappe.msgprint(_("User has not applied rule on the invoice {0}")
.format(doc.name))
else:
- doc.set(field, d.get(pr_field))
+ if not d.coupon_code_based:
+ doc.set(field, d.get(pr_field))
+ elif doc.get('coupon_code'):
+ # coupon code based pricing rule
+ coupon_code_pricing_rule = frappe.db.get_value('Coupon Code', doc.get('coupon_code'), 'pricing_rule')
+ if coupon_code_pricing_rule == d.name:
+ # if selected coupon code is linked with pricing rule
+ doc.set(field, d.get(pr_field))
+ else:
+ # reset discount if not linked
+ doc.set(field, 0)
+ else:
+ # if coupon code based but no coupon code selected
+ doc.set(field, 0)
doc.calculate_taxes_and_totals()
elif d.price_or_product_discount == 'Product':
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
|
frappe__erpnext-36668@fdf6e1b
|
frappe/erpnext
|
Python
| 36,668
|
fix: accounting dimensions required while creating POS Profile
|
**Issue:**
<img width="957" alt="image" src="https://github.com/frappe/erpnext/assets/65544983/e2c20632-3b56-4668-8d97-a53fd4ba4e29">
When closing a POS while an Accounting Dimension is mandatory for either P&L or BS, we get the above error.
<img width="1366" alt="image" src="https://github.com/frappe/erpnext/assets/65544983/0ce763bd-2734-424d-92cd-e8053210a327">
**Solution:**
The accounting dimensions that are enabled and mandatory for either P&L or BS are now required while creating a POS Profile.
These values from the POS Profile are then fetched while closing the POS, hence removing the error.
**Result:**
We are now able to close the POS without any issue.
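
As a rough illustration of that mechanism (not the exact merged code), the server side can expose the fieldnames of every enabled dimension that is mandatory for Balance Sheet or Profit and Loss, and the POS Profile form then marks those fields as required. Doctype and field names (`Accounting Dimension`, `Accounting Dimension Detail`, `mandatory_for_bs`, `mandatory_for_pl`, `disabled`) follow the patch below; `frappe.get_all` is used here for brevity instead of the query builder.

```python
import frappe


@frappe.whitelist()
def required_accounting_dimensions():
	"""Illustrative sketch: lowercase fieldnames of enabled Accounting
	Dimensions that are mandatory for Balance Sheet or Profit and Loss."""
	mandatory = frappe.get_all(
		"Accounting Dimension Detail",
		filters={"mandatory_for_bs": 1},
		fields=["parent"],
	) + frappe.get_all(
		"Accounting Dimension Detail",
		filters={"mandatory_for_pl": 1},
		fields=["parent"],
	)
	enabled = {
		d.name for d in frappe.get_all("Accounting Dimension", filters={"disabled": 0})
	}
	# POS Profile stores each dimension in a field named after the dimension
	return sorted({row.parent.lower() for row in mandatory if row.parent in enabled})
```

On the client, the patch's `pos_profile.js` calls such a method and runs `frm.toggle_reqd(fieldname, true)` for each returned fieldname, so the dimension values are captured on the profile and later fetched while closing the POS, as described above.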
|
2023-08-16T06:22:10Z
|
POS Invoice merging doesn't pick the default accounting dimensions set in Accounting Dimension, or on the POS Profile
### Information about bug
1. Create an Accounting Dimension, e.g. Department
2. Tick Mandatory For Profit and Loss Account
3. Set Default Dimension value in the Dimension Defaults
4. In addition or as an alternative, set the Default Dimension value in the POS Profile, under Accounting Dimensions
5. Create a POS Invoice in POS, then close shift to merge the POS invoices into sales invoices
6. Alternatively create a sales invoice in POS Awesome, and try to submit it
- In steps 5 and 6 above, each scenario raises: Accounting Dimension is required for 'Profit and Loss Account' XXX
### Module
selling
### Version
ERPNext: v14.31.2 (HEAD)
Frappe Framework: v14.41.0 (HEAD)
### Installation method
FrappeCloud
### Relevant log output / Stack trace / Full Error Message.
```shell
request.js:457 Traceback (most recent call last):
File "apps/frappe/frappe/app.py", line 94, in application
response = frappe.api.handle()
File "apps/frappe/frappe/api.py", line 54, in handle
return frappe.handler.handle()
File "apps/frappe/frappe/handler.py", line 47, in handle
data = execute_cmd(cmd)
File "apps/frappe/frappe/handler.py", line 85, in execute_cmd
return frappe.call(method, **frappe.form_dict)
File "apps/frappe/frappe/__init__.py", line 1619, in call
return fn(*args, **newargs)
File "apps/frappe/frappe/desk/form/save.py", line 28, in savedocs
doc.save()
File "apps/frappe/frappe/model/document.py", line 305, in save
return self._save(*args, **kwargs)
File "apps/frappe/frappe/model/document.py", line 356, in _save
self.run_post_save_methods()
File "apps/frappe/frappe/model/document.py", line 1085, in run_post_save_methods
self.run_method("on_submit")
File "apps/frappe/frappe/model/document.py", line 914, in run_method
out = Document.hook(fn)(self, *args, **kwargs)
File "apps/frappe/frappe/model/document.py", line 1267, in composer
return composed(self, method, *args, **kwargs)
File "apps/frappe/frappe/model/document.py", line 1249, in runner
add_to_return_value(self, fn(self, *args, **kwargs))
File "apps/frappe/frappe/model/document.py", line 911, in fn
return method_object(*args, **kwargs)
File "apps/erpnext/erpnext/accounts/doctype/pos_closing_entry/pos_closing_entry.py", line 93, in on_submit
consolidate_pos_invoices(closing_entry=self)
File "apps/erpnext/erpnext/accounts/doctype/pos_invoice_merge_log/pos_invoice_merge_log.py", line 346, in consolidate_pos_invoices
create_merge_logs(invoice_by_customer, closing_entry)
File "apps/erpnext/erpnext/accounts/doctype/pos_invoice_merge_log/pos_invoice_merge_log.py", line 431, in create_merge_logs
merge_log.submit()
File "apps/frappe/frappe/model/document.py", line 1005, in submit
return self._submit()
File "apps/frappe/frappe/model/document.py", line 984, in _submit
return self.save()
File "apps/frappe/frappe/model/document.py", line 305, in save
return self._save(*args, **kwargs)
File "apps/frappe/frappe/model/document.py", line 356, in _save
self.run_post_save_methods()
File "apps/frappe/frappe/model/document.py", line 1085, in run_post_save_methods
self.run_method("on_submit")
File "apps/frappe/frappe/model/document.py", line 914, in run_method
out = Document.hook(fn)(self, *args, **kwargs)
File "apps/frappe/frappe/model/document.py", line 1267, in composer
return composed(self, method, *args, **kwargs)
File "apps/frappe/frappe/model/document.py", line 1249, in runner
add_to_return_value(self, fn(self, *args, **kwargs))
File "apps/frappe/frappe/model/document.py", line 911, in fn
return method_object(*args, **kwargs)
File "apps/erpnext/erpnext/accounts/doctype/pos_invoice_merge_log/pos_invoice_merge_log.py", line 96, in on_submit
sales_invoice = self.process_merging_into_sales_invoice(sales)
File "apps/erpnext/erpnext/accounts/doctype/pos_invoice_merge_log/pos_invoice_merge_log.py", line 120, in process_merging_into_sales_invoice
sales_invoice.submit()
File "apps/frappe/frappe/model/document.py", line 1005, in submit
return self._submit()
File "apps/frappe/frappe/model/document.py", line 984, in _submit
return self.save()
File "apps/frappe/frappe/model/document.py", line 305, in save
return self._save(*args, **kwargs)
File "apps/frappe/frappe/model/document.py", line 356, in _save
self.run_post_save_methods()
File "apps/frappe/frappe/model/document.py", line 1085, in run_post_save_methods
self.run_method("on_submit")
File "apps/frappe/frappe/model/document.py", line 914, in run_method
out = Document.hook(fn)(self, *args, **kwargs)
File "apps/frappe/frappe/model/document.py", line 1267, in composer
return composed(self, method, *args, **kwargs)
File "apps/frappe/frappe/model/document.py", line 1249, in runner
add_to_return_value(self, fn(self, *args, **kwargs))
File "apps/frappe/frappe/model/document.py", line 911, in fn
return method_object(*args, **kwargs)
File "apps/erpnext/erpnext/accounts/doctype/sales_invoice/sales_invoice.py", line 269, in on_submit
self.make_gl_entries()
File "apps/erpnext/erpnext/accounts/doctype/sales_invoice/sales_invoice.py", line 1043, in make_gl_entries
make_gl_entries(
File "apps/erpnext/erpnext/accounts/general_ledger.py", line 42, in make_gl_entries
save_entries(gl_map, adv_adj, update_outstanding, from_repost)
File "apps/erpnext/erpnext/accounts/general_ledger.py", line 305, in save_entries
make_entry(entry, adv_adj, update_outstanding, from_repost)
File "apps/erpnext/erpnext/accounts/general_ledger.py", line 316, in make_entry
gle.submit()
File "apps/frappe/frappe/model/document.py", line 1005, in submit
return self._submit()
File "apps/frappe/frappe/model/document.py", line 984, in _submit
return self.save()
File "apps/frappe/frappe/model/document.py", line 305, in save
return self._save(*args, **kwargs)
File "apps/frappe/frappe/model/document.py", line 327, in _save
return self.insert()
File "apps/frappe/frappe/model/document.py", line 285, in insert
self.run_post_save_methods()
File "apps/frappe/frappe/model/document.py", line 1084, in run_post_save_methods
self.run_method("on_update")
File "apps/frappe/frappe/model/document.py", line 914, in run_method
out = Document.hook(fn)(self, *args, **kwargs)
File "apps/frappe/frappe/model/document.py", line 1267, in composer
return composed(self, method, *args, **kwargs)
File "apps/frappe/frappe/model/document.py", line 1249, in runner
add_to_return_value(self, fn(self, *args, **kwargs))
File "apps/frappe/frappe/model/document.py", line 911, in fn
return method_object(*args, **kwargs)
File "apps/erpnext/erpnext/accounts/doctype/gl_entry/gl_entry.py", line 56, in on_update
self.validate_dimensions_for_pl_and_bs()
File "apps/erpnext/erpnext/accounts/doctype/gl_entry/gl_entry.py", line 141, in validate_dimensions_for_pl_and_bs
frappe.throw(
File "apps/frappe/frappe/__init__.py", line 533, in throw
msgprint(
File "apps/frappe/frappe/__init__.py", line 501, in msgprint
_raise_exception()
File "apps/frappe/frappe/__init__.py", line 450, in _raise_exception
raise raise_exception(msg)
frappe.exceptions.ValidationError: Accounting Dimension <b>Department</b> is required for 'Profit and Loss' account 4110 - Sales - VFL.
```
|
[
{
"body": "### Information about bug\n\n1. Create an Accounting Dimension, e.g. Department\r\n2. Tick Mandatory For Profit and Loss Account\r\n3. Set Default Dimension value in the Dimension Defaults\r\n4. In addition or as an alternative, set the Default Dimension value in the POS Profile, under Accounting Dimensions\r\n5. Create a POS Invoice in POS, then close shift to merge the POS invoices into sales invoices\r\n6. Alternatively create a sales invoice in POS Awesome, and try to submit it\r\n\r\n- In 5 and 6 above, each scenario raises, Accounting Dimension is required for 'Profit and Loss Account' XXX\n\n### Module\n\nselling\n\n### Version\n\nERPNext: v14.31.2 (HEAD)\r\nFrappe Framework: v14.41.0 (HEAD)\n\n### Installation method\n\nFrappeCloud\n\n### Relevant log output / Stack trace / Full Error Message.\n\n```shell\nrequest.js:457 Traceback (most recent call last):\r\n File \"apps/frappe/frappe/app.py\", line 94, in application\r\n response = frappe.api.handle()\r\n File \"apps/frappe/frappe/api.py\", line 54, in handle\r\n return frappe.handler.handle()\r\n File \"apps/frappe/frappe/handler.py\", line 47, in handle\r\n data = execute_cmd(cmd)\r\n File \"apps/frappe/frappe/handler.py\", line 85, in execute_cmd\r\n return frappe.call(method, **frappe.form_dict)\r\n File \"apps/frappe/frappe/__init__.py\", line 1619, in call\r\n return fn(*args, **newargs)\r\n File \"apps/frappe/frappe/desk/form/save.py\", line 28, in savedocs\r\n doc.save()\r\n File \"apps/frappe/frappe/model/document.py\", line 305, in save\r\n return self._save(*args, **kwargs)\r\n File \"apps/frappe/frappe/model/document.py\", line 356, in _save\r\n self.run_post_save_methods()\r\n File \"apps/frappe/frappe/model/document.py\", line 1085, in run_post_save_methods\r\n self.run_method(\"on_submit\")\r\n File \"apps/frappe/frappe/model/document.py\", line 914, in run_method\r\n out = Document.hook(fn)(self, *args, **kwargs)\r\n File \"apps/frappe/frappe/model/document.py\", line 1267, in composer\r\n return composed(self, method, *args, **kwargs)\r\n File \"apps/frappe/frappe/model/document.py\", line 1249, in runner\r\n add_to_return_value(self, fn(self, *args, **kwargs))\r\n File \"apps/frappe/frappe/model/document.py\", line 911, in fn\r\n return method_object(*args, **kwargs)\r\n File \"apps/erpnext/erpnext/accounts/doctype/pos_closing_entry/pos_closing_entry.py\", line 93, in on_submit\r\n consolidate_pos_invoices(closing_entry=self)\r\n File \"apps/erpnext/erpnext/accounts/doctype/pos_invoice_merge_log/pos_invoice_merge_log.py\", line 346, in consolidate_pos_invoices\r\n create_merge_logs(invoice_by_customer, closing_entry)\r\n File \"apps/erpnext/erpnext/accounts/doctype/pos_invoice_merge_log/pos_invoice_merge_log.py\", line 431, in create_merge_logs\r\n merge_log.submit()\r\n File \"apps/frappe/frappe/model/document.py\", line 1005, in submit\r\n return self._submit()\r\n File \"apps/frappe/frappe/model/document.py\", line 984, in _submit\r\n return self.save()\r\n File \"apps/frappe/frappe/model/document.py\", line 305, in save\r\n return self._save(*args, **kwargs)\r\n File \"apps/frappe/frappe/model/document.py\", line 356, in _save\r\n self.run_post_save_methods()\r\n File \"apps/frappe/frappe/model/document.py\", line 1085, in run_post_save_methods\r\n self.run_method(\"on_submit\")\r\n File \"apps/frappe/frappe/model/document.py\", line 914, in run_method\r\n out = Document.hook(fn)(self, *args, **kwargs)\r\n File \"apps/frappe/frappe/model/document.py\", line 1267, in composer\r\n return composed(self, 
method, *args, **kwargs)\r\n File \"apps/frappe/frappe/model/document.py\", line 1249, in runner\r\n add_to_return_value(self, fn(self, *args, **kwargs))\r\n File \"apps/frappe/frappe/model/document.py\", line 911, in fn\r\n return method_object(*args, **kwargs)\r\n File \"apps/erpnext/erpnext/accounts/doctype/pos_invoice_merge_log/pos_invoice_merge_log.py\", line 96, in on_submit\r\n sales_invoice = self.process_merging_into_sales_invoice(sales)\r\n File \"apps/erpnext/erpnext/accounts/doctype/pos_invoice_merge_log/pos_invoice_merge_log.py\", line 120, in process_merging_into_sales_invoice\r\n sales_invoice.submit()\r\n File \"apps/frappe/frappe/model/document.py\", line 1005, in submit\r\n return self._submit()\r\n File \"apps/frappe/frappe/model/document.py\", line 984, in _submit\r\n return self.save()\r\n File \"apps/frappe/frappe/model/document.py\", line 305, in save\r\n return self._save(*args, **kwargs)\r\n File \"apps/frappe/frappe/model/document.py\", line 356, in _save\r\n self.run_post_save_methods()\r\n File \"apps/frappe/frappe/model/document.py\", line 1085, in run_post_save_methods\r\n self.run_method(\"on_submit\")\r\n File \"apps/frappe/frappe/model/document.py\", line 914, in run_method\r\n out = Document.hook(fn)(self, *args, **kwargs)\r\n File \"apps/frappe/frappe/model/document.py\", line 1267, in composer\r\n return composed(self, method, *args, **kwargs)\r\n File \"apps/frappe/frappe/model/document.py\", line 1249, in runner\r\n add_to_return_value(self, fn(self, *args, **kwargs))\r\n File \"apps/frappe/frappe/model/document.py\", line 911, in fn\r\n return method_object(*args, **kwargs)\r\n File \"apps/erpnext/erpnext/accounts/doctype/sales_invoice/sales_invoice.py\", line 269, in on_submit\r\n self.make_gl_entries()\r\n File \"apps/erpnext/erpnext/accounts/doctype/sales_invoice/sales_invoice.py\", line 1043, in make_gl_entries\r\n make_gl_entries(\r\n File \"apps/erpnext/erpnext/accounts/general_ledger.py\", line 42, in make_gl_entries\r\n save_entries(gl_map, adv_adj, update_outstanding, from_repost)\r\n File \"apps/erpnext/erpnext/accounts/general_ledger.py\", line 305, in save_entries\r\n make_entry(entry, adv_adj, update_outstanding, from_repost)\r\n File \"apps/erpnext/erpnext/accounts/general_ledger.py\", line 316, in make_entry\r\n gle.submit()\r\n File \"apps/frappe/frappe/model/document.py\", line 1005, in submit\r\n return self._submit()\r\n File \"apps/frappe/frappe/model/document.py\", line 984, in _submit\r\n return self.save()\r\n File \"apps/frappe/frappe/model/document.py\", line 305, in save\r\n return self._save(*args, **kwargs)\r\n File \"apps/frappe/frappe/model/document.py\", line 327, in _save\r\n return self.insert()\r\n File \"apps/frappe/frappe/model/document.py\", line 285, in insert\r\n self.run_post_save_methods()\r\n File \"apps/frappe/frappe/model/document.py\", line 1084, in run_post_save_methods\r\n self.run_method(\"on_update\")\r\n File \"apps/frappe/frappe/model/document.py\", line 914, in run_method\r\n out = Document.hook(fn)(self, *args, **kwargs)\r\n File \"apps/frappe/frappe/model/document.py\", line 1267, in composer\r\n return composed(self, method, *args, **kwargs)\r\n File \"apps/frappe/frappe/model/document.py\", line 1249, in runner\r\n add_to_return_value(self, fn(self, *args, **kwargs))\r\n File \"apps/frappe/frappe/model/document.py\", line 911, in fn\r\n return method_object(*args, **kwargs)\r\n File \"apps/erpnext/erpnext/accounts/doctype/gl_entry/gl_entry.py\", line 56, in on_update\r\n 
self.validate_dimensions_for_pl_and_bs()\r\n File \"apps/erpnext/erpnext/accounts/doctype/gl_entry/gl_entry.py\", line 141, in validate_dimensions_for_pl_and_bs\r\n frappe.throw(\r\n File \"apps/frappe/frappe/__init__.py\", line 533, in throw\r\n msgprint(\r\n File \"apps/frappe/frappe/__init__.py\", line 501, in msgprint\r\n _raise_exception()\r\n File \"apps/frappe/frappe/__init__.py\", line 450, in _raise_exception\r\n raise raise_exception(msg)\r\nfrappe.exceptions.ValidationError: Accounting Dimension <b>Department</b> is required for 'Profit and Loss' account 4110 - Sales - VFL.\n```\n",
"number": 36210,
"title": "POS Invoice merging doesn't pick the default accounting dimensions set in Accounting Dimension, or on the POS Profile"
}
] |
2f7b3bbfad76948bd00e7087854502d61d8eab07
|
{
"head_commit": "fdf6e1bd74f13ea8c7869a17c1fc2782c82a710c",
"head_commit_message": "chore: code cleanup",
"patch_to_review": "diff --git a/erpnext/accounts/doctype/accounting_dimension/accounting_dimension.py b/erpnext/accounts/doctype/accounting_dimension/accounting_dimension.py\nindex cfe5e6e80092..3a2c3cbeeb10 100644\n--- a/erpnext/accounts/doctype/accounting_dimension/accounting_dimension.py\n+++ b/erpnext/accounts/doctype/accounting_dimension/accounting_dimension.py\n@@ -265,20 +265,21 @@ def get_dimension_with_children(doctype, dimensions):\n \n @frappe.whitelist()\n def get_dimensions(with_cost_center_and_project=False):\n-\tdimension_filters = frappe.db.sql(\n-\t\t\"\"\"\n-\t\tSELECT label, fieldname, document_type\n-\t\tFROM `tabAccounting Dimension`\n-\t\tWHERE disabled = 0\n-\t\"\"\",\n-\t\tas_dict=1,\n-\t)\n \n-\tdefault_dimensions = frappe.db.sql(\n-\t\t\"\"\"SELECT p.fieldname, c.company, c.default_dimension\n-\t\tFROM `tabAccounting Dimension Detail` c, `tabAccounting Dimension` p\n-\t\tWHERE c.parent = p.name\"\"\",\n-\t\tas_dict=1,\n+\tc = frappe.qb.DocType(\"Accounting Dimension Detail\")\n+\tp = frappe.qb.DocType(\"Accounting Dimension\")\n+\tdimension_filters = (\n+\t\tfrappe.qb.from_(p)\n+\t\t.select(p.label, p.fieldname, p.document_type)\n+\t\t.where(p.disabled == 0)\n+\t\t.run(as_dict=1)\n+\t)\n+\tdefault_dimensions = (\n+\t\tfrappe.qb.from_(c)\n+\t\t.inner_join(p)\n+\t\t.on(c.parent == p.name)\n+\t\t.select(p.fieldname, c.company, c.default_dimension)\n+\t\t.run(as_dict=1)\n \t)\n \n \tif isinstance(with_cost_center_and_project, str):\ndiff --git a/erpnext/accounts/doctype/pos_invoice_merge_log/pos_invoice_merge_log.py b/erpnext/accounts/doctype/pos_invoice_merge_log/pos_invoice_merge_log.py\nindex d8cbcc141bda..e3a4e2be16d6 100644\n--- a/erpnext/accounts/doctype/pos_invoice_merge_log/pos_invoice_merge_log.py\n+++ b/erpnext/accounts/doctype/pos_invoice_merge_log/pos_invoice_merge_log.py\n@@ -12,6 +12,10 @@\n from frappe.utils.background_jobs import enqueue, is_job_enqueued\n from frappe.utils.scheduler import is_scheduler_inactive\n \n+from erpnext.accounts.doctype.accounting_dimension.accounting_dimension import (\n+\tget_accounting_dimensions,\n+)\n+\n \n class POSInvoiceMergeLog(Document):\n \tdef validate(self):\n@@ -83,6 +87,11 @@ def on_submit(self):\n \t\tpos_invoice_docs = [\n \t\t\tfrappe.get_cached_doc(\"POS Invoice\", d.pos_invoice) for d in self.pos_invoices\n \t\t]\n+\t\taccounting_dimensions = get_accounting_dimensions()\n+\t\tfor d in pos_invoice_docs:\n+\t\t\tfor dimension in accounting_dimensions:\n+\t\t\t\tdimension_value = frappe.db.get_value(\"POS Profile\", d.pos_profile, dimension)\n+\t\t\t\td.set(dimension, dimension_value)\n \n \t\treturns = [d for d in pos_invoice_docs if d.get(\"is_return\") == 1]\n \t\tsales = [d for d in pos_invoice_docs if d.get(\"is_return\") == 0]\n@@ -426,11 +435,9 @@ def create_merge_logs(invoice_by_customer, closing_entry=None):\n \t\t\t\t)\n \t\t\t\tmerge_log.customer = customer\n \t\t\t\tmerge_log.pos_closing_entry = closing_entry.get(\"name\") if closing_entry else None\n-\n \t\t\t\tmerge_log.set(\"pos_invoices\", _invoices)\n \t\t\t\tmerge_log.save(ignore_permissions=True)\n \t\t\t\tmerge_log.submit()\n-\n \t\tif closing_entry:\n \t\t\tclosing_entry.set_status(update=True, status=\"Submitted\")\n \t\t\tclosing_entry.db_set(\"error_message\", \"\")\ndiff --git a/erpnext/accounts/doctype/pos_profile/pos_profile.js b/erpnext/accounts/doctype/pos_profile/pos_profile.js\nindex 0a89aee8e9c7..ff27dc1a9d96 100755\n--- a/erpnext/accounts/doctype/pos_profile/pos_profile.js\n+++ 
b/erpnext/accounts/doctype/pos_profile/pos_profile.js\n@@ -1,5 +1,7 @@\n // Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors\n // License: GNU General Public License v3. See license.txt\n+frappe.provide(\"erpnext.accounts\");\n+\n \n frappe.ui.form.on('POS Profile', {\n \tsetup: function(frm) {\n@@ -135,11 +137,20 @@ frappe.ui.form.on('POS Profile', {\n \t\tif (frm.doc.company) {\n \t\t\tfrm.trigger(\"toggle_display_account_head\");\n \t\t}\n+\t\tfrappe.call({\n+\t\t\tmethod: 'erpnext.accounts.doctype.pos_profile.pos_profile.required_accounting_dimensions',\n+\t\t\tcallback: function(r) {\n+\t\t\t\tr.message.forEach((acc_dim) => {\n+\t\t\t\t\tfrm.toggle_reqd(acc_dim, true);\n+\t\t\t\t})\n+\t\t\t}\n+\t\t});\n \t},\n \n \tcompany: function(frm) {\n \t\tfrm.trigger(\"toggle_display_account_head\");\n \t\terpnext.accounts.dimensions.update_dimension(frm, frm.doctype);\n+\n \t},\n \n \ttoggle_display_account_head: function(frm) {\ndiff --git a/erpnext/accounts/doctype/pos_profile/pos_profile.py b/erpnext/accounts/doctype/pos_profile/pos_profile.py\nindex e8aee737f29d..c30617af547b 100644\n--- a/erpnext/accounts/doctype/pos_profile/pos_profile.py\n+++ b/erpnext/accounts/doctype/pos_profile/pos_profile.py\n@@ -197,6 +197,25 @@ def pos_profile_query(doctype, txt, searchfield, start, page_len, filters):\n \treturn pos_profile\n \n \[email protected]()\n+def required_accounting_dimensions():\n+\n+\tp = frappe.qb.DocType(\"Accounting Dimension\")\n+\tc = frappe.qb.DocType(\"Accounting Dimension Detail\")\n+\n+\tacc_dim_doc = (\n+\t\tfrappe.qb.from_(p)\n+\t\t.inner_join(c)\n+\t\t.on(p.name == c.parent)\n+\t\t.select(c.parent)\n+\t\t.where(c.mandatory_for_bs == 1 or c.mandatory_for_pl == 1)\n+\t\t.where(p.disabled == 0)\n+\t).run(as_dict=1)\n+\n+\tacc_dim_names = [d.parent.lower() for d in acc_dim_doc]\n+\treturn acc_dim_names\n+\n+\n @frappe.whitelist()\n def set_default_profile(pos_profile, company):\n \tmodified = now()\ndiff --git a/erpnext/accounts/report/balance_sheet/balance_sheet.js b/erpnext/accounts/report/balance_sheet/balance_sheet.js\nindex c65b9e8ccc74..ecc13d7dc8aa 100644\n--- a/erpnext/accounts/report/balance_sheet/balance_sheet.js\n+++ b/erpnext/accounts/report/balance_sheet/balance_sheet.js\n@@ -15,7 +15,6 @@ frappe.require(\"assets/erpnext/js/financial_statements.js\", function () {\n \t\tfieldtype: \"Check\",\n \t\tdefault: 1,\n \t});\n-\tconsole.log(frappe.query_reports[\"Balance Sheet\"][\"filters\"]);\n \n \tfrappe.query_reports[\"Balance Sheet\"][\"filters\"].push({\n \t\tfieldname: \"include_default_book_entries\",\n"
}
|
[
{
"diff_hunk": "@@ -83,6 +87,11 @@ def on_submit(self):\n \t\tpos_invoice_docs = [\n \t\t\tfrappe.get_cached_doc(\"POS Invoice\", d.pos_invoice) for d in self.pos_invoices\n \t\t]\n+\t\taccounting_dimensions = get_accounting_dimensions()",
"line": null,
"original_line": 90,
"original_start_line": null,
"path": "erpnext/accounts/doctype/pos_invoice_merge_log/pos_invoice_merge_log.py",
"start_line": null,
"text": "@user1:\nWhy set in every POS invoice, this can be set just in the final consolidated invoice\n\n@author:\nfixed it"
}
] |
9d8036ff8871fa41af2725aaed7f05ff3048dc0b
|
diff --git a/erpnext/accounts/doctype/accounting_dimension/accounting_dimension.py b/erpnext/accounts/doctype/accounting_dimension/accounting_dimension.py
index cfe5e6e80092..3a2c3cbeeb10 100644
--- a/erpnext/accounts/doctype/accounting_dimension/accounting_dimension.py
+++ b/erpnext/accounts/doctype/accounting_dimension/accounting_dimension.py
@@ -265,20 +265,21 @@ def get_dimension_with_children(doctype, dimensions):
@frappe.whitelist()
def get_dimensions(with_cost_center_and_project=False):
- dimension_filters = frappe.db.sql(
- """
- SELECT label, fieldname, document_type
- FROM `tabAccounting Dimension`
- WHERE disabled = 0
- """,
- as_dict=1,
- )
- default_dimensions = frappe.db.sql(
- """SELECT p.fieldname, c.company, c.default_dimension
- FROM `tabAccounting Dimension Detail` c, `tabAccounting Dimension` p
- WHERE c.parent = p.name""",
- as_dict=1,
+ c = frappe.qb.DocType("Accounting Dimension Detail")
+ p = frappe.qb.DocType("Accounting Dimension")
+ dimension_filters = (
+ frappe.qb.from_(p)
+ .select(p.label, p.fieldname, p.document_type)
+ .where(p.disabled == 0)
+ .run(as_dict=1)
+ )
+ default_dimensions = (
+ frappe.qb.from_(c)
+ .inner_join(p)
+ .on(c.parent == p.name)
+ .select(p.fieldname, c.company, c.default_dimension)
+ .run(as_dict=1)
)
if isinstance(with_cost_center_and_project, str):
diff --git a/erpnext/accounts/doctype/accounting_dimension/test_accounting_dimension.py b/erpnext/accounts/doctype/accounting_dimension/test_accounting_dimension.py
index 25ef2ea5c2c2..cb7f5f5da789 100644
--- a/erpnext/accounts/doctype/accounting_dimension/test_accounting_dimension.py
+++ b/erpnext/accounts/doctype/accounting_dimension/test_accounting_dimension.py
@@ -84,12 +84,22 @@ def create_dimension():
frappe.set_user("Administrator")
if not frappe.db.exists("Accounting Dimension", {"document_type": "Department"}):
- frappe.get_doc(
+ dimension = frappe.get_doc(
{
"doctype": "Accounting Dimension",
"document_type": "Department",
}
- ).insert()
+ )
+ dimension.append(
+ "dimension_defaults",
+ {
+ "company": "_Test Company",
+ "reference_document": "Department",
+ "default_dimension": "_Test Department - _TC",
+ },
+ )
+ dimension.insert()
+ dimension.save()
else:
dimension = frappe.get_doc("Accounting Dimension", "Department")
dimension.disabled = 0
diff --git a/erpnext/accounts/doctype/pos_closing_entry/test_pos_closing_entry.py b/erpnext/accounts/doctype/pos_closing_entry/test_pos_closing_entry.py
index 93ba90ad9f99..62b342a3d207 100644
--- a/erpnext/accounts/doctype/pos_closing_entry/test_pos_closing_entry.py
+++ b/erpnext/accounts/doctype/pos_closing_entry/test_pos_closing_entry.py
@@ -5,6 +5,10 @@
import frappe
+from erpnext.accounts.doctype.accounting_dimension.test_accounting_dimension import (
+ create_dimension,
+ disable_dimension,
+)
from erpnext.accounts.doctype.pos_closing_entry.pos_closing_entry import (
make_closing_entry_from_opening,
)
@@ -140,6 +144,43 @@ def test_cancelling_of_pos_closing_entry(self):
pos_inv1.load_from_db()
self.assertEqual(pos_inv1.status, "Paid")
+ def test_pos_closing_for_required_accounting_dimension_in_pos_profile(self):
+ """
+ test case to check whether we can create POS Closing Entry without mandatory accounting dimension
+ """
+
+ create_dimension()
+ pos_profile = make_pos_profile(do_not_insert=1, do_not_set_accounting_dimension=1)
+
+ self.assertRaises(frappe.ValidationError, pos_profile.insert)
+
+ pos_profile.location = "Block 1"
+ pos_profile.insert()
+ self.assertTrue(frappe.db.exists("POS Profile", pos_profile.name))
+
+ test_user = init_user_and_profile(do_not_create_pos_profile=1)
+
+ opening_entry = create_opening_entry(pos_profile, test_user.name)
+ pos_inv1 = create_pos_invoice(rate=350, do_not_submit=1, pos_profile=pos_profile.name)
+ pos_inv1.append("payments", {"mode_of_payment": "Cash", "account": "Cash - _TC", "amount": 3500})
+ pos_inv1.submit()
+
+ # if in between a mandatory accounting dimension is added to the POS Profile then
+ accounting_dimension_department = frappe.get_doc("Accounting Dimension", {"name": "Department"})
+ accounting_dimension_department.dimension_defaults[0].mandatory_for_bs = 1
+ accounting_dimension_department.save()
+
+ pcv_doc = make_closing_entry_from_opening(opening_entry)
+ # will assert coz the new mandatory accounting dimension bank is not set in POS Profile
+ self.assertRaises(frappe.ValidationError, pcv_doc.submit)
+
+ accounting_dimension_department = frappe.get_doc(
+ "Accounting Dimension Detail", {"parent": "Department"}
+ )
+ accounting_dimension_department.mandatory_for_bs = 0
+ accounting_dimension_department.save()
+ disable_dimension()
+
def init_user_and_profile(**args):
user = "[email protected]"
@@ -149,6 +190,9 @@ def init_user_and_profile(**args):
test_user.add_roles(*roles)
frappe.set_user(user)
+ if args.get("do_not_create_pos_profile"):
+ return test_user
+
pos_profile = make_pos_profile(**args)
pos_profile.append("applicable_for_users", {"default": 1, "user": user})
diff --git a/erpnext/accounts/doctype/pos_invoice_merge_log/pos_invoice_merge_log.py b/erpnext/accounts/doctype/pos_invoice_merge_log/pos_invoice_merge_log.py
index b587ce603f42..d42b1e4cd1d3 100644
--- a/erpnext/accounts/doctype/pos_invoice_merge_log/pos_invoice_merge_log.py
+++ b/erpnext/accounts/doctype/pos_invoice_merge_log/pos_invoice_merge_log.py
@@ -12,6 +12,8 @@
from frappe.utils.background_jobs import enqueue, is_job_enqueued
from frappe.utils.scheduler import is_scheduler_inactive
+from erpnext.accounts.doctype.pos_profile.pos_profile import required_accounting_dimensions
+
class POSInvoiceMergeLog(Document):
def validate(self):
@@ -163,7 +165,8 @@ def merge_pos_invoice_into(self, invoice, data):
for i in items:
if (
i.item_code == item.item_code
- and not i.serial_and_batch_bundle
+ and not i.serial_no
+ and not i.batch_no
and i.uom == item.uom
and i.net_rate == item.net_rate
and i.warehouse == item.warehouse
@@ -238,6 +241,22 @@ def merge_pos_invoice_into(self, invoice, data):
invoice.disable_rounded_total = cint(
frappe.db.get_value("POS Profile", invoice.pos_profile, "disable_rounded_total")
)
+ accounting_dimensions = required_accounting_dimensions()
+ dimension_values = frappe.db.get_value(
+ "POS Profile", {"name": invoice.pos_profile}, accounting_dimensions, as_dict=1
+ )
+ for dimension in accounting_dimensions:
+ dimension_value = dimension_values.get(dimension)
+
+ if not dimension_value:
+ frappe.throw(
+ _("Please set Accounting Dimension {} in {}").format(
+ frappe.bold(frappe.unscrub(dimension)),
+ frappe.get_desk_link("POS Profile", invoice.pos_profile),
+ )
+ )
+
+ invoice.set(dimension, dimension_value)
if self.merge_invoices_based_on == "Customer Group":
invoice.flags.ignore_pos_profile = True
@@ -424,11 +443,9 @@ def create_merge_logs(invoice_by_customer, closing_entry=None):
)
merge_log.customer = customer
merge_log.pos_closing_entry = closing_entry.get("name") if closing_entry else None
-
merge_log.set("pos_invoices", _invoices)
merge_log.save(ignore_permissions=True)
merge_log.submit()
-
if closing_entry:
closing_entry.set_status(update=True, status="Submitted")
closing_entry.db_set("error_message", "")
diff --git a/erpnext/accounts/doctype/pos_profile/pos_profile.js b/erpnext/accounts/doctype/pos_profile/pos_profile.js
index 0a89aee8e9c7..ceaafaa3b12a 100755
--- a/erpnext/accounts/doctype/pos_profile/pos_profile.js
+++ b/erpnext/accounts/doctype/pos_profile/pos_profile.js
@@ -1,6 +1,5 @@
// Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors
// License: GNU General Public License v3. See license.txt
-
frappe.ui.form.on('POS Profile', {
setup: function(frm) {
frm.set_query("selling_price_list", function() {
@@ -140,6 +139,7 @@ frappe.ui.form.on('POS Profile', {
company: function(frm) {
frm.trigger("toggle_display_account_head");
erpnext.accounts.dimensions.update_dimension(frm, frm.doctype);
+
},
toggle_display_account_head: function(frm) {
diff --git a/erpnext/accounts/doctype/pos_profile/pos_profile.py b/erpnext/accounts/doctype/pos_profile/pos_profile.py
index e8aee737f29d..58be2d3e5c0d 100644
--- a/erpnext/accounts/doctype/pos_profile/pos_profile.py
+++ b/erpnext/accounts/doctype/pos_profile/pos_profile.py
@@ -3,7 +3,7 @@
import frappe
-from frappe import _, msgprint
+from frappe import _, msgprint, scrub, unscrub
from frappe.model.document import Document
from frappe.utils import get_link_to_form, now
@@ -14,6 +14,21 @@ def validate(self):
self.validate_all_link_fields()
self.validate_duplicate_groups()
self.validate_payment_methods()
+ self.validate_accounting_dimensions()
+
+ def validate_accounting_dimensions(self):
+ acc_dim_names = required_accounting_dimensions()
+ for acc_dim in acc_dim_names:
+ if not self.get(acc_dim):
+ frappe.throw(
+ _(
+ "{0} is a mandatory Accounting Dimension. <br>"
+ "Please set a value for {0} in Accounting Dimensions section."
+ ).format(
+ unscrub(frappe.bold(acc_dim)),
+ ),
+ title=_("Mandatory Accounting Dimension"),
+ )
def validate_default_profile(self):
for row in self.applicable_for_users:
@@ -152,6 +167,24 @@ def get_child_nodes(group_type, root):
)
+def required_accounting_dimensions():
+
+ p = frappe.qb.DocType("Accounting Dimension")
+ c = frappe.qb.DocType("Accounting Dimension Detail")
+
+ acc_dim_doc = (
+ frappe.qb.from_(p)
+ .inner_join(c)
+ .on(p.name == c.parent)
+ .select(c.parent)
+ .where((c.mandatory_for_bs == 1) | (c.mandatory_for_pl == 1))
+ .where(p.disabled == 0)
+ ).run(as_dict=1)
+
+ acc_dim_names = [scrub(d.parent) for d in acc_dim_doc]
+ return acc_dim_names
+
+
@frappe.whitelist()
@frappe.validate_and_sanitize_search_inputs
def pos_profile_query(doctype, txt, searchfield, start, page_len, filters):
diff --git a/erpnext/accounts/doctype/pos_profile/test_pos_profile.py b/erpnext/accounts/doctype/pos_profile/test_pos_profile.py
index 788aa62701d6..b468ad3fe9bf 100644
--- a/erpnext/accounts/doctype/pos_profile/test_pos_profile.py
+++ b/erpnext/accounts/doctype/pos_profile/test_pos_profile.py
@@ -5,7 +5,10 @@
import frappe
-from erpnext.accounts.doctype.pos_profile.pos_profile import get_child_nodes
+from erpnext.accounts.doctype.pos_profile.pos_profile import (
+ get_child_nodes,
+ required_accounting_dimensions,
+)
from erpnext.stock.get_item_details import get_pos_profile
test_dependencies = ["Item"]
@@ -118,6 +121,7 @@ def make_pos_profile(**args):
"warehouse": args.warehouse or "_Test Warehouse - _TC",
"write_off_account": args.write_off_account or "_Test Write Off - _TC",
"write_off_cost_center": args.write_off_cost_center or "_Test Write Off Cost Center - _TC",
+ "location": "Block 1" if not args.do_not_set_accounting_dimension else None,
}
)
@@ -132,6 +136,7 @@ def make_pos_profile(**args):
pos_profile.append("payments", {"mode_of_payment": "Cash", "default": 1})
if not frappe.db.exists("POS Profile", args.name or "_Test POS Profile"):
- pos_profile.insert()
+ if not args.get("do_not_insert"):
+ pos_profile.insert()
return pos_profile
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
}
|
|
frappe__erpnext-29137@4535a7a
|
frappe/erpnext
|
Python
| 29,137
|
feat: Payment Terms Status report
|
## Payment Terms Status Report
This report provides the current status of payment terms based on the invoices associated with the Sales Order (SO).
All invoices associated with the Sales Order are fetched and assigned to the terms using a FIFO method (a minimal sketch of this allocation appears at the end of this description). Statuses shown in the report are calculated at runtime and do not affect the state in the database.
<img width="1552" alt="payment terms status" src="https://user-images.githubusercontent.com/3272205/148046071-e75363c6-8541-4144-a15e-ec63268dc7b2.png">
When an individual SO is selected, a chart is used to show the payment amount and paid amount for each term.
<img width="1552" alt="charts for Payment terms" src="https://user-images.githubusercontent.com/3272205/148046791-8a3d6e39-ebbc-498b-ac1a-a9f1f7b1f567.png">
Documentation can be found [**here**](https://docs.erpnext.com/docs/v13/user/manual/en/accounts/payment_terms_status_report)
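
As a minimal sketch of the FIFO allocation mentioned above (not the report code itself; the dictionary keys and figures are illustrative, loosely mirroring a 50-50 terms template with a single 600,000 invoice):

```python
def allocate_invoices_fifo(terms, invoices):
    """Allocate invoice amounts to payment terms in order (FIFO). Key names are illustrative."""
    for term in terms:
        for inv in invoices:
            outstanding = term["payment_amount"] - term["paid_amount"]
            if outstanding <= 0:
                break  # this term is fully covered, move to the next term
            if inv["amount"] <= 0:
                continue  # this invoice is already consumed
            applied = min(outstanding, inv["amount"])
            inv["amount"] -= applied
            term["paid_amount"] += applied
            term["invoices"].append(inv["name"])
        if term["paid_amount"] >= term["payment_amount"]:
            term["status"] = "Completed"
        elif term["paid_amount"] > 0:
            term["status"] = "Partly Paid"
    return terms

# Example: two 50% terms of 500,000 each, one invoice of 600,000
terms = [
    {"payment_amount": 500000, "paid_amount": 0, "invoices": []},
    {"payment_amount": 500000, "paid_amount": 0, "invoices": []},
]
invoices = [{"name": "SINV-0001", "amount": 600000}]
allocate_invoices_fifo(terms, invoices)
# -> first term Completed (500,000 paid), second term Partly Paid (100,000 paid)
```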
|
2022-01-04T13:13:00Z
|
Generate Invoice based on Payment Terms
Here are the points to be considered, and the challenges involved, when creating a payment against a Payment Term:
**Challenge:**
Since the Amount in the Sales Invoice is determined from Rate and Qty, in order to do 50% billing for an invoice (for example) we would have to set either Qty to 50% or Rate to 50%. Resetting Qty may not be ideal, as it can lead to an issue if “Update Stock” is also checked: we cannot deliver 0.50 TV. Even now, we change the item’s Qty to get the pending invoice amount into the invoice.
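
To make the arithmetic concrete, a small hypothetical illustration (the figures are invented) of why partial billing has to distort either Qty or Rate when the Amount is derived from them:

```python
rate, qty = 100000, 1        # one TV at 100,000 (hypothetical figures)
amount = rate * qty          # the invoice Amount is always Rate x Qty

# billing only 50% of the order means distorting one of the two factors:
half_by_qty = rate * (qty * 0.5)    # 0.5 TV -- problematic when "Update Stock" is checked
half_by_rate = (rate * 0.5) * qty   # the rate no longer reflects the agreed item price
```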
**Payment Terms:**
If an invoice is made against a Payment Term, then there should be no other Payment Term fetched from Sales Order. Currently, we fetch all the payment terms of the sales order into the invoice.
**Benefits**
This design will help us extract reports like:
**Sales Order pending to be billed for a specific term**
**Sales Order outstanding as per the payment terms**
|
[
{
"body": "Here are the points to be considered or challenges when creating a payment against a Payment Term:\r\n**Challenge:** \r\nSince the Amount in the Sales Invoice is determined based on Rate and Qty, in order to do 50% billing for an invoice (for eg.) we will have to set either qty to 50% of rate to 50%. Resetting Qty may not be ideal, as it can lead to an issue if “Update Stock” is also checked. We cannot deliver 0.50 TV. Even now, we change the item’s Qty to get the pending invoice amount in the invoice.\r\n\r\n**Payment Terms:**\r\nIf an invoice is made against a Payment Term, then there should be no other Payment Term fetched from Sales Order. Currently, we fetch all the payment terms of the sales order into the invoice.\r\n\r\n**Benefits**\r\nThis design will help us extra a reports like:\r\n**Sales Order pending to be billed for a specific term**\r\n**Sales Order outstanding as per the payment terms**\r\n",
"number": 28758,
"title": "Generate Invoice based on Payment Terms"
}
] |
f89a64db486b46ac756d5ba62faee87f28baf889
|
{
"head_commit": "4535a7a301f76fa3b867902f19e806dcb01bdb75",
"head_commit_message": "test: qty and rate changed to remove need for fractional Nos",
"patch_to_review": "diff --git a/erpnext/selling/report/payment_terms_status_for_sales_order/__init__.py b/erpnext/selling/report/payment_terms_status_for_sales_order/__init__.py\nnew file mode 100644\nindex 000000000000..e69de29bb2d1\ndiff --git a/erpnext/selling/report/payment_terms_status_for_sales_order/payment_terms_status_for_sales_order.js b/erpnext/selling/report/payment_terms_status_for_sales_order/payment_terms_status_for_sales_order.js\nnew file mode 100644\nindex 000000000000..0450631a3be4\n--- /dev/null\n+++ b/erpnext/selling/report/payment_terms_status_for_sales_order/payment_terms_status_for_sales_order.js\n@@ -0,0 +1,84 @@\n+// Copyright (c) 2016, Frappe Technologies Pvt. Ltd. and contributors\n+// For license information, please see license.txt\n+/* eslint-disable */\n+\n+function get_filters() {\n+\tlet filters = [\n+\t\t{\n+\t\t\t\"fieldname\":\"company\",\n+\t\t\t\"label\": __(\"Company\"),\n+\t\t\t\"fieldtype\": \"Link\",\n+\t\t\t\"options\": \"Company\",\n+\t\t\t\"default\": frappe.defaults.get_user_default(\"Company\"),\n+\t\t\t\"reqd\": 1\n+\t\t},\n+\t\t{\n+\t\t\t\"fieldname\":\"period_start_date\",\n+\t\t\t\"label\": __(\"Start Date\"),\n+\t\t\t\"fieldtype\": \"Date\",\n+\t\t\t\"reqd\": 1,\n+\t\t\t\"default\": frappe.datetime.add_months(frappe.datetime.get_today(), -1)\n+\t\t},\n+\t\t{\n+\t\t\t\"fieldname\":\"period_end_date\",\n+\t\t\t\"label\": __(\"End Date\"),\n+\t\t\t\"fieldtype\": \"Date\",\n+\t\t\t\"reqd\": 1,\n+\t\t\t\"default\": frappe.datetime.get_today()\n+\t\t},\n+\t\t{\n+\t\t\t\"fieldname\":\"sales_order\",\n+\t\t\t\"label\": __(\"Sales Order\"),\n+\t\t\t\"fieldtype\": \"MultiSelectList\",\n+\t\t\t\"width\": 100,\n+\t\t\t\"options\": \"Sales Order\",\n+\t\t\t\"get_data\": function(txt) {\n+\t\t\t\treturn frappe.db.get_link_options(\"Sales Order\", txt, this.filters());\n+\t\t\t},\n+\t\t\t\"filters\": () => {\n+\t\t\t\treturn {\n+\t\t\t\t\tdocstatus: 1,\n+\t\t\t\t\tpayment_terms_template: ['not in', ['']],\n+\t\t\t\t\tcompany: frappe.query_report.get_filter_value(\"company\"),\n+\t\t\t\t\ttransaction_date: ['between', [frappe.query_report.get_filter_value(\"period_start_date\"), frappe.query_report.get_filter_value(\"period_end_date\")]]\n+\t\t\t\t}\n+\t\t\t},\n+\t\t\ton_change: function(){\n+\t\t\t\tfrappe.query_report.refresh();\n+\t\t\t}\n+\t\t}\n+\t]\n+\n+\treturn filters;\n+}\n+\n+frappe.query_reports[\"Payment Terms Status for Sales Order\"] = {\n+\t\"filters\": get_filters(),\n+\t\"formatter\": function(value, row, column, data, default_formatter){\n+\t\tif(column.fieldname == 'invoices' && value) {\n+\t\t\tinvoices = value.split(',');\n+\t\t\tconst invoice_formatter = (prev_value, curr_value) => {\n+\t\t\t\tif(prev_value != \"\") {\n+\t\t\t\t\treturn prev_value + \", \" + default_formatter(curr_value, row, column, data);\n+\t\t\t\t}\n+\t\t\t\telse {\n+\t\t\t\t\treturn default_formatter(curr_value, row, column, data);\n+\t\t\t\t}\n+\t\t\t}\n+\t\t\treturn invoices.reduce(invoice_formatter, \"\")\n+\t\t}\n+\t\telse if (column.fieldname == 'paid_amount' && value){\n+\t\t\tformatted_value = default_formatter(value, row, column, data);\n+\t\t\tif(value > 0) {\n+\t\t\t\tformatted_value = \"<span style='color:green;'>\" + formatted_value + \"</span>\"\n+\t\t\t}\n+\t\t\treturn formatted_value;\n+\t\t}\n+\t\telse if (column.fieldname == 'status' && value == 'Completed'){\n+\t\t\treturn \"<span style='color:green;'>\" + default_formatter(value, row, column, data) + \"</span>\";\n+\t\t}\n+\n+\t\treturn default_formatter(value, row, column, 
data);\n+\t},\n+\n+};\ndiff --git a/erpnext/selling/report/payment_terms_status_for_sales_order/payment_terms_status_for_sales_order.json b/erpnext/selling/report/payment_terms_status_for_sales_order/payment_terms_status_for_sales_order.json\nnew file mode 100644\nindex 000000000000..850fa4dc47a4\n--- /dev/null\n+++ b/erpnext/selling/report/payment_terms_status_for_sales_order/payment_terms_status_for_sales_order.json\n@@ -0,0 +1,38 @@\n+{\n+ \"add_total_row\": 1,\n+ \"columns\": [],\n+ \"creation\": \"2021-12-28 10:39:34.533964\",\n+ \"disable_prepared_report\": 0,\n+ \"disabled\": 0,\n+ \"docstatus\": 0,\n+ \"doctype\": \"Report\",\n+ \"filters\": [],\n+ \"idx\": 0,\n+ \"is_standard\": \"Yes\",\n+ \"modified\": \"2021-12-30 10:42:06.058457\",\n+ \"modified_by\": \"Administrator\",\n+ \"module\": \"Selling\",\n+ \"name\": \"Payment Terms Status for Sales Order\",\n+ \"owner\": \"Administrator\",\n+ \"prepared_report\": 0,\n+ \"ref_doctype\": \"Sales Order\",\n+ \"report_name\": \"Payment Terms Status for Sales Order\",\n+ \"report_type\": \"Script Report\",\n+ \"roles\": [\n+ {\n+ \"role\": \"Sales User\"\n+ },\n+ {\n+ \"role\": \"Sales Manager\"\n+ },\n+ {\n+ \"role\": \"Maintenance User\"\n+ },\n+ {\n+ \"role\": \"Accounts User\"\n+ },\n+ {\n+ \"role\": \"Stock User\"\n+ }\n+ ]\n+}\n\\ No newline at end of file\ndiff --git a/erpnext/selling/report/payment_terms_status_for_sales_order/payment_terms_status_for_sales_order.py b/erpnext/selling/report/payment_terms_status_for_sales_order/payment_terms_status_for_sales_order.py\nnew file mode 100644\nindex 000000000000..aa2f757218e6\n--- /dev/null\n+++ b/erpnext/selling/report/payment_terms_status_for_sales_order/payment_terms_status_for_sales_order.py\n@@ -0,0 +1,211 @@\n+# Copyright (c) 2013, Frappe Technologies Pvt. Ltd. and contributors\n+# License: MIT. 
See LICENSE\n+\n+import frappe\n+from frappe import _, qb, query_builder\n+from frappe.query_builder import functions\n+\n+\n+def get_columns():\n+\tcolumns = [\n+\t\t{\n+\t\t\t\"label\": _(\"Sales Order\"),\n+\t\t\t\"fieldname\": \"name\",\n+\t\t\t\"fieldtype\": \"Link\",\n+\t\t\t\"options\": \"Sales Order\",\n+\t\t\t\"read_only\": 1,\n+\t\t},\n+\t\t{\n+\t\t\t\"label\": _(\"Submitted\"),\n+\t\t\t\"fieldname\": \"submitted\",\n+\t\t\t\"fieldtype\": \"Date\",\n+\t\t\t\"read_only\": 1\n+\t\t},\n+\t\t{\n+\t\t\t\"label\": _(\"Payment Term\"),\n+\t\t\t\"fieldname\": \"payment_term\",\n+\t\t\t\"fieldtype\": \"Data\",\n+\t\t\t\"read_only\": 1\n+\t\t},\n+\t\t{\n+\t\t\t\"label\": _(\"Description\"),\n+\t\t\t\"fieldname\": \"description\",\n+\t\t\t\"fieldtype\": \"Data\",\n+\t\t\t\"read_only\": 1\n+\t\t},\n+\t\t{\n+\t\t\t\"label\": _(\"Due Date\"),\n+\t\t\t\"fieldname\": \"due_date\",\n+\t\t\t\"fieldtype\": \"Date\",\n+\t\t\t\"read_only\": 1\n+\t\t},\n+\t\t{\n+\t\t\t\"label\": _(\"Invoice Portion\"),\n+\t\t\t\"fieldname\": \"invoice_portion\",\n+\t\t\t\"fieldtype\": \"Percent\",\n+\t\t\t\"read_only\": 1,\n+\t\t},\n+\t\t{\n+\t\t\t\"label\": _(\"Payment Amount\"),\n+\t\t\t\"fieldname\": \"payment_amount\",\n+\t\t\t\"fieldtype\": \"Currency\",\n+\t\t\t\"read_only\": 1,\n+\t\t},\n+\t\t{\n+\t\t\t\"label\": _(\"Paid Amount\"),\n+\t\t\t\"fieldname\": \"paid_amount\",\n+\t\t\t\"fieldtype\": \"Currency\",\n+\t\t\t\"read_only\": 1\n+\t\t},\n+\t\t{\n+\t\t\t\"label\": _(\"Invoices\"),\n+\t\t\t\"fieldname\": \"invoices\",\n+\t\t\t\"fieldtype\": \"Link\",\n+\t\t\t\"options\": \"Sales Invoice\",\n+\t\t\t\"read_only\": 1,\n+\t\t},\n+\t\t{\n+\t\t\t\"label\": _(\"Status\"),\n+\t\t\t\"fieldname\": \"status\",\n+\t\t\t\"fieldtype\": \"Data\",\n+\t\t\t\"read_only\": 1\n+\t\t}\n+\t]\n+\treturn columns\n+\n+\n+def get_conditions(filters):\n+\t\"\"\"\n+\tConvert filter options to conditions used in query\n+\t\"\"\"\n+\tfilters = frappe._dict(filters) if filters else frappe._dict({})\n+\tconditions = frappe._dict({})\n+\n+\tconditions.company = filters.company or frappe.defaults.get_user_default(\"company\")\n+\tconditions.end_date = filters.period_end_date or frappe.utils.today()\n+\tconditions.start_date = filters.period_start_date or frappe.utils.add_months(\n+\t\tconditions.end_date, -1\n+\t)\n+\tconditions.sales_order = filters.sales_order or []\n+\n+\treturn conditions\n+\n+\n+def get_so_with_invoices(filters):\n+\t\"\"\"\n+\tGet Sales Order with payment terms template with their associated Invoices\n+\t\"\"\"\n+\tsorders = []\n+\n+\tso = qb.DocType(\"Sales Order\")\n+\tps = qb.DocType(\"Payment Schedule\")\n+\tdatediff = query_builder.CustomFunction(\"DATEDIFF\", [\"cur_date\", \"due_date\"])\n+\tifelse = query_builder.CustomFunction(\"IF\", [\"condition\", \"then\", \"else\"])\n+\n+\tconditions = get_conditions(filters)\n+\tquery_so = (\n+\t\tqb.from_(so)\n+\t\t.join(ps)\n+\t\t.on(ps.parent == so.name)\n+\t\t.select(\n+\t\t\tso.name,\n+\t\t\tso.transaction_date.as_(\"submitted\"),\n+\t\t\tifelse(datediff(ps.due_date, functions.CurDate()) < 0, \"Overdue\", \"Unpaid\").as_(\"status\"),\n+\t\t\tps.payment_term,\n+\t\t\tps.description,\n+\t\t\tps.due_date,\n+\t\t\tps.invoice_portion,\n+\t\t\tps.payment_amount,\n+\t\t\tps.paid_amount,\n+\t\t)\n+\t\t.where(\n+\t\t\t(so.docstatus == 1)\n+\t\t\t& (so.payment_terms_template != \"NULL\")\n+\t\t\t& (so.company == conditions.company)\n+\t\t\t& (so.transaction_date[conditions.start_date : conditions.end_date])\n+\t\t)\n+\t\t.orderby(so.name, so.transaction_date, 
ps.due_date)\n+\t)\n+\n+\tif conditions.sales_order != []:\n+\t\tquery_so = query_so.where(so.name.isin(conditions.sales_order))\n+\n+\tsorders = query_so.run(as_dict=True)\n+\n+\tinvoices = []\n+\tif sorders != []:\n+\t\tsoi = qb.DocType(\"Sales Order Item\")\n+\t\tsi = qb.DocType(\"Sales Invoice\")\n+\t\tsii = qb.DocType(\"Sales Invoice Item\")\n+\t\tquery_inv = (\n+\t\t\tqb.from_(sii)\n+\t\t\t.right_join(si)\n+\t\t\t.on(si.name == sii.parent)\n+\t\t\t.inner_join(soi)\n+\t\t\t.on(soi.name == sii.so_detail)\n+\t\t\t.select(sii.sales_order, sii.parent.as_(\"invoice\"), si.base_net_total.as_(\"invoice_amount\"))\n+\t\t\t.where((sii.sales_order.isin([x.name for x in sorders])) & (si.docstatus == 1))\n+\t\t\t.groupby(sii.parent)\n+\t\t)\n+\t\tinvoices = query_inv.run(as_dict=True)\n+\n+\treturn sorders, invoices\n+\n+\n+def set_payment_terms_statuses(sales_orders, invoices):\n+\t\"\"\"\n+\tcompute status for payment terms with associated sales invoice using FIFO\n+\t\"\"\"\n+\n+\tfor so in sales_orders:\n+\t\tfor inv in [x for x in invoices if x.sales_order == so.name and x.invoice_amount > 0]:\n+\t\t\tif so.payment_amount - so.paid_amount > 0:\n+\t\t\t\tamount = so.payment_amount - so.paid_amount\n+\t\t\t\tif inv.invoice_amount >= amount:\n+\t\t\t\t\tinv.invoice_amount -= amount\n+\t\t\t\t\tso.paid_amount += amount\n+\t\t\t\t\tif so.invoices:\n+\t\t\t\t\t\tso.invoices = so.invoices + \",\" + inv.invoice\n+\t\t\t\t\telse:\n+\t\t\t\t\t\tso.invoices = inv.invoice\n+\t\t\t\t\tso.status = \"Completed\"\n+\t\t\t\t\tbreak\n+\t\t\t\telse:\n+\t\t\t\t\tso.paid_amount += inv.invoice_amount\n+\t\t\t\t\tinv.invoice_amount = 0\n+\t\t\t\t\tif so.invoices:\n+\t\t\t\t\t\tso.invoices = so.invoices + \",\" + inv.invoice\n+\t\t\t\t\telse:\n+\t\t\t\t\t\tso.invoices = inv.invoice\n+\t\t\t\t\tso.status = \"Partly Paid\"\n+\n+\treturn sales_orders, invoices\n+\n+\n+def prepare_chart(s_orders):\n+\tif len(set([x.name for x in s_orders])) == 1:\n+\t\tchart = {\n+\t\t\t\"data\": {\n+\t\t\t\t\"labels\": [term.payment_term for term in s_orders],\n+\t\t\t\t\"datasets\": [\n+\t\t\t\t\t{\"name\": \"Payment Amount\", \"values\": [x.payment_amount for x in s_orders],},\n+\t\t\t\t\t{\"name\": \"Paid Amount\", \"values\": [x.paid_amount for x in s_orders],},\n+\t\t\t\t],\n+\t\t\t},\n+\t\t\t\"type\": \"bar\",\n+\t\t}\n+\t\treturn chart\n+\n+\n+def execute(filters=None):\n+\tcolumns = get_columns()\n+\tsales_orders, so_invoices = get_so_with_invoices(filters)\n+\tsales_orders, so_invoices = set_payment_terms_statuses(sales_orders, so_invoices)\n+\n+\tprepare_chart(sales_orders)\n+\n+\tdata = sales_orders\n+\tmessage = []\n+\tchart = prepare_chart(sales_orders)\n+\n+\treturn columns, data, message, chart\ndiff --git a/erpnext/selling/report/payment_terms_status_for_sales_order/test_payment_terms_status_for_sales_order.py b/erpnext/selling/report/payment_terms_status_for_sales_order/test_payment_terms_status_for_sales_order.py\nnew file mode 100644\nindex 000000000000..5d6e91e8a50c\n--- /dev/null\n+++ b/erpnext/selling/report/payment_terms_status_for_sales_order/test_payment_terms_status_for_sales_order.py\n@@ -0,0 +1,107 @@\n+import datetime\n+\n+import frappe\n+from frappe.utils import add_days\n+\n+from erpnext.selling.doctype.sales_order.sales_order import make_sales_invoice\n+from erpnext.selling.doctype.sales_order.test_sales_order import make_sales_order\n+from erpnext.selling.report.payment_terms_status_for_sales_order.payment_terms_status_for_sales_order import (\n+\texecute,\n+)\n+from 
erpnext.stock.doctype.item.test_item import create_item\n+from erpnext.tests.utils import ERPNextTestCase\n+\n+test_dependencies = [\"Sales Order\", \"Item\", \"Sales Invoice\", \"Payment Terms Template\"]\n+\n+\n+class TestPaymentTermsStatusForSalesOrder(ERPNextTestCase):\n+\tdef test_payment_terms_status(self):\n+\n+\t\ttemplate = None\n+\t\tif frappe.db.exists(\"Payment Terms Template\", \"_Test 50-50\"):\n+\t\t\ttemplate = frappe.get_doc(\"Payment Terms Template\", \"_Test 50-50\")\n+\t\telse:\n+\t\t\ttemplate = frappe.get_doc(\n+\t\t\t\t{\n+\t\t\t\t\t\"doctype\": \"Payment Terms Template\",\n+\t\t\t\t\t\"template_name\": \"_Test 50-50\",\n+\t\t\t\t\t\"terms\": [\n+\t\t\t\t\t\t{\n+\t\t\t\t\t\t\t\"doctype\": \"Payment Terms Template Detail\",\n+\t\t\t\t\t\t\t\"due_date_based_on\": \"Day(s) after invoice date\",\n+\t\t\t\t\t\t\t\"payment_term_name\": \"_Test 50% on 15 Days\",\n+\t\t\t\t\t\t\t\"description\": \"_Test 50-50\",\n+\t\t\t\t\t\t\t\"invoice_portion\": 50,\n+\t\t\t\t\t\t\t\"credit_days\": 15,\n+\t\t\t\t\t\t},\n+\t\t\t\t\t\t{\n+\t\t\t\t\t\t\t\"doctype\": \"Payment Terms Template Detail\",\n+\t\t\t\t\t\t\t\"due_date_based_on\": \"Day(s) after invoice date\",\n+\t\t\t\t\t\t\t\"payment_term_name\": \"_Test 50% on 30 Days\",\n+\t\t\t\t\t\t\t\"description\": \"_Test 50-50\",\n+\t\t\t\t\t\t\t\"invoice_portion\": 50,\n+\t\t\t\t\t\t\t\"credit_days\": 30,\n+\t\t\t\t\t\t},\n+\t\t\t\t\t],\n+\t\t\t\t}\n+\t\t\t)\n+\t\t\ttemplate.insert()\n+\n+\t\t# item = create_item(item_code=\"_Test Excavator\", is_stock_item=0, valuation_rate=1000000)\n+\t\titem = create_item(item_code=\"_Test Excavator\", is_stock_item=0)\n+\t\tso = make_sales_order(\n+\t\t\ttransaction_date=\"2021-06-15\",\n+\t\t\tdelivery_date=add_days(\"2021-06-15\", -30),\n+\t\t\titem=item.item_code,\n+\t\t\tqty=10,\n+\t\t\trate=100000,\n+\t\t\tdo_not_save=True,\n+\t\t)\n+\t\tso.po_no = \"\"\n+\t\tso.payment_terms_template = template.name\n+\t\tso.save()\n+\t\tso.submit()\n+\n+\t\t# make invoice with 60% of the total sales order value\n+\t\tsinv = make_sales_invoice(so.name)\n+\t\tsinv.items[0].qty = 6\n+\t\tsinv.insert()\n+\t\tsinv.submit()\n+\n+\t\tcolumns, data, message, chart = execute(\n+\t\t\t{\n+\t\t\t\t\"company\": \"_Test Company\",\n+\t\t\t\t\"period_start_date\": \"2021-06-01\",\n+\t\t\t\t\"period_end_date\": \"2021-06-30\",\n+\t\t\t\t\"sales_order\": [so.name],\n+\t\t\t}\n+\t\t)\n+\n+\t\texpected_value = [\n+\t\t\t{\n+\t\t\t\t\"name\": so.name,\n+\t\t\t\t\"submitted\": datetime.date(2021, 6, 15),\n+\t\t\t\t\"status\": \"Completed\",\n+\t\t\t\t\"payment_term\": None,\n+\t\t\t\t\"description\": \"_Test 50-50\",\n+\t\t\t\t\"due_date\": datetime.date(2021, 6, 30),\n+\t\t\t\t\"invoice_portion\": 50.0,\n+\t\t\t\t\"payment_amount\": 500000.0,\n+\t\t\t\t\"paid_amount\": 500000.0,\n+\t\t\t\t\"invoices\": sinv.name,\n+\t\t\t},\n+\t\t\t{\n+\t\t\t\t\"name\": so.name,\n+\t\t\t\t\"submitted\": datetime.date(2021, 6, 15),\n+\t\t\t\t\"status\": \"Partly Paid\",\n+\t\t\t\t\"payment_term\": None,\n+\t\t\t\t\"description\": \"_Test 50-50\",\n+\t\t\t\t\"due_date\": datetime.date(2021, 7, 15),\n+\t\t\t\t\"invoice_portion\": 50.0,\n+\t\t\t\t\"payment_amount\": 500000.0,\n+\t\t\t\t\"paid_amount\": 100000.0,\n+\t\t\t\t\"invoices\": sinv.name,\n+\t\t\t},\n+\t\t]\n+\n+\t\tself.assertEqual(data, expected_value)\n"
}
|
[
{
"diff_hunk": "@@ -0,0 +1,211 @@\n+# Copyright (c) 2013, Frappe Technologies Pvt. Ltd. and contributors\n+# License: MIT. See LICENSE\n+\n+import frappe\n+from frappe import _, qb, query_builder\n+from frappe.query_builder import functions\n+\n+\n+def get_columns():\n+\tcolumns = [\n+\t\t{\n+\t\t\t\"label\": _(\"Sales Order\"),\n+\t\t\t\"fieldname\": \"name\",\n+\t\t\t\"fieldtype\": \"Link\",\n+\t\t\t\"options\": \"Sales Order\",\n+\t\t\t\"read_only\": 1,\n+\t\t},\n+\t\t{\n+\t\t\t\"label\": _(\"Submitted\"),",
"line": null,
"original_line": 19,
"original_start_line": null,
"path": "erpnext/selling/report/payment_terms_status_for_sales_order/payment_terms_status_for_sales_order.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n\t\t\t\"label\": _(\"Posting Date\"),\r\n```"
},
{
"diff_hunk": "@@ -0,0 +1,211 @@\n+# Copyright (c) 2013, Frappe Technologies Pvt. Ltd. and contributors\n+# License: MIT. See LICENSE\n+\n+import frappe\n+from frappe import _, qb, query_builder\n+from frappe.query_builder import functions\n+\n+\n+def get_columns():\n+\tcolumns = [\n+\t\t{\n+\t\t\t\"label\": _(\"Sales Order\"),\n+\t\t\t\"fieldname\": \"name\",\n+\t\t\t\"fieldtype\": \"Link\",\n+\t\t\t\"options\": \"Sales Order\",\n+\t\t\t\"read_only\": 1,",
"line": null,
"original_line": 16,
"original_start_line": null,
"path": "erpnext/selling/report/payment_terms_status_for_sales_order/payment_terms_status_for_sales_order.py",
"start_line": null,
"text": "@user1:\nIs read-only applicable here? I mean here columns are by default read-only or some new feature has been introduced recently\n\n@author:\nremoved."
}
] |
2f5bfcc0550d60d3e1b5c138d937dedefed36bba
|
diff --git a/erpnext/selling/report/payment_terms_status_for_sales_order/__init__.py b/erpnext/selling/report/payment_terms_status_for_sales_order/__init__.py
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/erpnext/selling/report/payment_terms_status_for_sales_order/payment_terms_status_for_sales_order.js b/erpnext/selling/report/payment_terms_status_for_sales_order/payment_terms_status_for_sales_order.js
new file mode 100644
index 000000000000..0e36b3fe3d21
--- /dev/null
+++ b/erpnext/selling/report/payment_terms_status_for_sales_order/payment_terms_status_for_sales_order.js
@@ -0,0 +1,84 @@
+// Copyright (c) 2022, Frappe Technologies Pvt. Ltd. and contributors
+// For license information, please see license.txt
+/* eslint-disable */
+
+function get_filters() {
+ let filters = [
+ {
+ "fieldname":"company",
+ "label": __("Company"),
+ "fieldtype": "Link",
+ "options": "Company",
+ "default": frappe.defaults.get_user_default("Company"),
+ "reqd": 1
+ },
+ {
+ "fieldname":"period_start_date",
+ "label": __("Start Date"),
+ "fieldtype": "Date",
+ "reqd": 1,
+ "default": frappe.datetime.add_months(frappe.datetime.get_today(), -1)
+ },
+ {
+ "fieldname":"period_end_date",
+ "label": __("End Date"),
+ "fieldtype": "Date",
+ "reqd": 1,
+ "default": frappe.datetime.get_today()
+ },
+ {
+ "fieldname":"sales_order",
+ "label": __("Sales Order"),
+ "fieldtype": "MultiSelectList",
+ "width": 100,
+ "options": "Sales Order",
+ "get_data": function(txt) {
+ return frappe.db.get_link_options("Sales Order", txt, this.filters());
+ },
+ "filters": () => {
+ return {
+ docstatus: 1,
+ payment_terms_template: ['not in', ['']],
+ company: frappe.query_report.get_filter_value("company"),
+ transaction_date: ['between', [frappe.query_report.get_filter_value("period_start_date"), frappe.query_report.get_filter_value("period_end_date")]]
+ }
+ },
+ on_change: function(){
+ frappe.query_report.refresh();
+ }
+ }
+ ]
+
+ return filters;
+}
+
+frappe.query_reports["Payment Terms Status for Sales Order"] = {
+ "filters": get_filters(),
+ "formatter": function(value, row, column, data, default_formatter){
+ if(column.fieldname == 'invoices' && value) {
+ invoices = value.split(',');
+ const invoice_formatter = (prev_value, curr_value) => {
+ if(prev_value != "") {
+ return prev_value + ", " + default_formatter(curr_value, row, column, data);
+ }
+ else {
+ return default_formatter(curr_value, row, column, data);
+ }
+ }
+ return invoices.reduce(invoice_formatter, "")
+ }
+ else if (column.fieldname == 'paid_amount' && value){
+ formatted_value = default_formatter(value, row, column, data);
+ if(value > 0) {
+ formatted_value = "<span style='color:green;'>" + formatted_value + "</span>"
+ }
+ return formatted_value;
+ }
+ else if (column.fieldname == 'status' && value == 'Completed'){
+ return "<span style='color:green;'>" + default_formatter(value, row, column, data) + "</span>";
+ }
+
+ return default_formatter(value, row, column, data);
+ },
+
+};
diff --git a/erpnext/selling/report/payment_terms_status_for_sales_order/payment_terms_status_for_sales_order.json b/erpnext/selling/report/payment_terms_status_for_sales_order/payment_terms_status_for_sales_order.json
new file mode 100644
index 000000000000..850fa4dc47a4
--- /dev/null
+++ b/erpnext/selling/report/payment_terms_status_for_sales_order/payment_terms_status_for_sales_order.json
@@ -0,0 +1,38 @@
+{
+ "add_total_row": 1,
+ "columns": [],
+ "creation": "2021-12-28 10:39:34.533964",
+ "disable_prepared_report": 0,
+ "disabled": 0,
+ "docstatus": 0,
+ "doctype": "Report",
+ "filters": [],
+ "idx": 0,
+ "is_standard": "Yes",
+ "modified": "2021-12-30 10:42:06.058457",
+ "modified_by": "Administrator",
+ "module": "Selling",
+ "name": "Payment Terms Status for Sales Order",
+ "owner": "Administrator",
+ "prepared_report": 0,
+ "ref_doctype": "Sales Order",
+ "report_name": "Payment Terms Status for Sales Order",
+ "report_type": "Script Report",
+ "roles": [
+ {
+ "role": "Sales User"
+ },
+ {
+ "role": "Sales Manager"
+ },
+ {
+ "role": "Maintenance User"
+ },
+ {
+ "role": "Accounts User"
+ },
+ {
+ "role": "Stock User"
+ }
+ ]
+}
\ No newline at end of file
diff --git a/erpnext/selling/report/payment_terms_status_for_sales_order/payment_terms_status_for_sales_order.py b/erpnext/selling/report/payment_terms_status_for_sales_order/payment_terms_status_for_sales_order.py
new file mode 100644
index 000000000000..e6a56eea3101
--- /dev/null
+++ b/erpnext/selling/report/payment_terms_status_for_sales_order/payment_terms_status_for_sales_order.py
@@ -0,0 +1,205 @@
+# Copyright (c) 2013, Frappe Technologies Pvt. Ltd. and contributors
+# License: MIT. See LICENSE
+
+import frappe
+from frappe import _, qb, query_builder
+from frappe.query_builder import functions
+
+
+def get_columns():
+ columns = [
+ {
+ "label": _("Sales Order"),
+ "fieldname": "name",
+ "fieldtype": "Link",
+ "options": "Sales Order",
+ },
+ {
+ "label": _("Posting Date"),
+ "fieldname": "submitted",
+ "fieldtype": "Date",
+ },
+ {
+ "label": _("Payment Term"),
+ "fieldname": "payment_term",
+ "fieldtype": "Data",
+ },
+ {
+ "label": _("Description"),
+ "fieldname": "description",
+ "fieldtype": "Data",
+ },
+ {
+ "label": _("Due Date"),
+ "fieldname": "due_date",
+ "fieldtype": "Date",
+ },
+ {
+ "label": _("Invoice Portion"),
+ "fieldname": "invoice_portion",
+ "fieldtype": "Percent",
+ },
+ {
+ "label": _("Payment Amount"),
+ "fieldname": "base_payment_amount",
+ "fieldtype": "Currency",
+ "options": "currency",
+ },
+ {
+ "label": _("Paid Amount"),
+ "fieldname": "paid_amount",
+ "fieldtype": "Currency",
+ "options": "currency",
+ },
+ {
+ "label": _("Invoices"),
+ "fieldname": "invoices",
+ "fieldtype": "Link",
+ "options": "Sales Invoice",
+ },
+ {
+ "label": _("Status"),
+ "fieldname": "status",
+ "fieldtype": "Data",
+ },
+ {
+ "label": _("Currency"),
+ "fieldname": "currency",
+ "fieldtype": "Currency",
+ "hidden": 1
+ }
+ ]
+ return columns
+
+
+def get_conditions(filters):
+ """
+ Convert filter options to conditions used in query
+ """
+ filters = frappe._dict(filters) if filters else frappe._dict({})
+ conditions = frappe._dict({})
+
+ conditions.company = filters.company or frappe.defaults.get_user_default("company")
+ conditions.end_date = filters.period_end_date or frappe.utils.today()
+ conditions.start_date = filters.period_start_date or frappe.utils.add_months(
+ conditions.end_date, -1
+ )
+ conditions.sales_order = filters.sales_order or []
+
+ return conditions
+
+
+def get_so_with_invoices(filters):
+ """
+ Get Sales Order with payment terms template with their associated Invoices
+ """
+ sorders = []
+
+ so = qb.DocType("Sales Order")
+ ps = qb.DocType("Payment Schedule")
+ datediff = query_builder.CustomFunction("DATEDIFF", ["cur_date", "due_date"])
+ ifelse = query_builder.CustomFunction("IF", ["condition", "then", "else"])
+
+ conditions = get_conditions(filters)
+ query_so = (
+ qb.from_(so)
+ .join(ps)
+ .on(ps.parent == so.name)
+ .select(
+ so.name,
+ so.transaction_date.as_("submitted"),
+ ifelse(datediff(ps.due_date, functions.CurDate()) < 0, "Overdue", "Unpaid").as_("status"),
+ ps.payment_term,
+ ps.description,
+ ps.due_date,
+ ps.invoice_portion,
+ ps.base_payment_amount,
+ ps.paid_amount,
+ )
+ .where(
+ (so.docstatus == 1)
+ & (so.payment_terms_template != "NULL")
+ & (so.company == conditions.company)
+ & (so.transaction_date[conditions.start_date : conditions.end_date])
+ )
+ .orderby(so.name, so.transaction_date, ps.due_date)
+ )
+
+ if conditions.sales_order != []:
+ query_so = query_so.where(so.name.isin(conditions.sales_order))
+
+ sorders = query_so.run(as_dict=True)
+
+ invoices = []
+ if sorders != []:
+ soi = qb.DocType("Sales Order Item")
+ si = qb.DocType("Sales Invoice")
+ sii = qb.DocType("Sales Invoice Item")
+ query_inv = (
+ qb.from_(sii)
+ .right_join(si)
+ .on(si.name == sii.parent)
+ .inner_join(soi)
+ .on(soi.name == sii.so_detail)
+ .select(sii.sales_order, sii.parent.as_("invoice"), si.base_grand_total.as_("invoice_amount"))
+ .where((sii.sales_order.isin([x.name for x in sorders])) & (si.docstatus == 1))
+ .groupby(sii.parent)
+ )
+ invoices = query_inv.run(as_dict=True)
+
+ return sorders, invoices
+
+
+def set_payment_terms_statuses(sales_orders, invoices, filters):
+ """
+ compute status for payment terms with associated sales invoice using FIFO
+ """
+
+ for so in sales_orders:
+ so.currency = frappe.get_cached_value('Company', filters.get('company'), 'default_currency')
+ so.invoices = ""
+ for inv in [x for x in invoices if x.sales_order == so.name and x.invoice_amount > 0]:
+ if so.base_payment_amount - so.paid_amount > 0:
+ amount = so.base_payment_amount - so.paid_amount
+ if inv.invoice_amount >= amount:
+ inv.invoice_amount -= amount
+ so.paid_amount += amount
+ so.invoices += "," + inv.invoice
+ so.status = "Completed"
+ break
+ else:
+ so.paid_amount += inv.invoice_amount
+ inv.invoice_amount = 0
+ so.invoices += "," + inv.invoice
+ so.status = "Partly Paid"
+
+ return sales_orders, invoices
+
+
+def prepare_chart(s_orders):
+ if len(set([x.name for x in s_orders])) == 1:
+ chart = {
+ "data": {
+ "labels": [term.payment_term for term in s_orders],
+ "datasets": [
+ {"name": "Payment Amount", "values": [x.base_payment_amount for x in s_orders],},
+ {"name": "Paid Amount", "values": [x.paid_amount for x in s_orders],},
+ ],
+ },
+ "type": "bar",
+ }
+ return chart
+
+
+def execute(filters=None):
+ columns = get_columns()
+ sales_orders, so_invoices = get_so_with_invoices(filters)
+ sales_orders, so_invoices = set_payment_terms_statuses(sales_orders, so_invoices, filters)
+
+ prepare_chart(sales_orders)
+
+ data = sales_orders
+ message = []
+ chart = prepare_chart(sales_orders)
+
+ return columns, data, message, chart
diff --git a/erpnext/selling/report/payment_terms_status_for_sales_order/test_payment_terms_status_for_sales_order.py b/erpnext/selling/report/payment_terms_status_for_sales_order/test_payment_terms_status_for_sales_order.py
new file mode 100644
index 000000000000..cad41e1dc03a
--- /dev/null
+++ b/erpnext/selling/report/payment_terms_status_for_sales_order/test_payment_terms_status_for_sales_order.py
@@ -0,0 +1,198 @@
+import datetime
+
+import frappe
+from frappe.utils import add_days
+
+from erpnext.selling.doctype.sales_order.sales_order import make_sales_invoice
+from erpnext.selling.doctype.sales_order.test_sales_order import make_sales_order
+from erpnext.selling.report.payment_terms_status_for_sales_order.payment_terms_status_for_sales_order import (
+ execute,
+)
+from erpnext.stock.doctype.item.test_item import create_item
+from erpnext.tests.utils import ERPNextTestCase
+
+test_dependencies = ["Sales Order", "Item", "Sales Invoice", "Payment Terms Template"]
+
+
+class TestPaymentTermsStatusForSalesOrder(ERPNextTestCase):
+ def create_payment_terms_template(self):
+ # create template for 50-50 payments
+ template = None
+ if frappe.db.exists("Payment Terms Template", "_Test 50-50"):
+ template = frappe.get_doc("Payment Terms Template", "_Test 50-50")
+ else:
+ template = frappe.get_doc(
+ {
+ "doctype": "Payment Terms Template",
+ "template_name": "_Test 50-50",
+ "terms": [
+ {
+ "doctype": "Payment Terms Template Detail",
+ "due_date_based_on": "Day(s) after invoice date",
+ "payment_term_name": "_Test 50% on 15 Days",
+ "description": "_Test 50-50",
+ "invoice_portion": 50,
+ "credit_days": 15,
+ },
+ {
+ "doctype": "Payment Terms Template Detail",
+ "due_date_based_on": "Day(s) after invoice date",
+ "payment_term_name": "_Test 50% on 30 Days",
+ "description": "_Test 50-50",
+ "invoice_portion": 50,
+ "credit_days": 30,
+ },
+ ],
+ }
+ )
+ template.insert()
+ self.template = template
+
+ def test_payment_terms_status(self):
+ self.create_payment_terms_template()
+ item = create_item(item_code="_Test Excavator", is_stock_item=0)
+ so = make_sales_order(
+ transaction_date="2021-06-15",
+ delivery_date=add_days("2021-06-15", -30),
+ item=item.item_code,
+ qty=10,
+ rate=100000,
+ do_not_save=True,
+ )
+ so.po_no = ""
+ so.taxes_and_charges = ""
+ so.taxes = ""
+ so.payment_terms_template = self.template.name
+ so.save()
+ so.submit()
+
+ # make invoice with 60% of the total sales order value
+ sinv = make_sales_invoice(so.name)
+ sinv.taxes_and_charges = ""
+ sinv.taxes = ""
+ sinv.items[0].qty = 6
+ sinv.insert()
+ sinv.submit()
+ columns, data, message, chart = execute(
+ {
+ "company": "_Test Company",
+ "period_start_date": "2021-06-01",
+ "period_end_date": "2021-06-30",
+ "sales_order": [so.name],
+ }
+ )
+
+ expected_value = [
+ {
+ "name": so.name,
+ "submitted": datetime.date(2021, 6, 15),
+ "status": "Completed",
+ "payment_term": None,
+ "description": "_Test 50-50",
+ "due_date": datetime.date(2021, 6, 30),
+ "invoice_portion": 50.0,
+ "currency": "INR",
+ "base_payment_amount": 500000.0,
+ "paid_amount": 500000.0,
+ "invoices": ","+sinv.name,
+ },
+ {
+ "name": so.name,
+ "submitted": datetime.date(2021, 6, 15),
+ "status": "Partly Paid",
+ "payment_term": None,
+ "description": "_Test 50-50",
+ "due_date": datetime.date(2021, 7, 15),
+ "invoice_portion": 50.0,
+ "currency": "INR",
+ "base_payment_amount": 500000.0,
+ "paid_amount": 100000.0,
+ "invoices": ","+sinv.name,
+ },
+ ]
+ self.assertEqual(data, expected_value)
+
+ def create_exchange_rate(self, date):
+ # make an entry in Currency Exchange list. serves as a static exchange rate
+ if frappe.db.exists({'doctype': "Currency Exchange",'date': date,'from_currency': 'USD', 'to_currency':'INR'}):
+ return
+ else:
+ doc = frappe.get_doc({
+ 'doctype': "Currency Exchange",
+ 'date': date,
+ 'from_currency': 'USD',
+ 'to_currency': frappe.get_cached_value("Company", '_Test Company','default_currency'),
+ 'exchange_rate': 70,
+ 'for_buying': True,
+ 'for_selling': True
+ })
+ doc.insert()
+
+ def test_alternate_currency(self):
+ transaction_date = "2021-06-15"
+ self.create_payment_terms_template()
+ self.create_exchange_rate(transaction_date)
+ item = create_item(item_code="_Test Excavator", is_stock_item=0)
+ so = make_sales_order(
+ transaction_date=transaction_date,
+ currency="USD",
+ delivery_date=add_days(transaction_date, -30),
+ item=item.item_code,
+ qty=10,
+ rate=10000,
+ do_not_save=True,
+ )
+ so.po_no = ""
+ so.taxes_and_charges = ""
+ so.taxes = ""
+ so.payment_terms_template = self.template.name
+ so.save()
+ so.submit()
+
+ # make invoice with 60% of the total sales order value
+ sinv = make_sales_invoice(so.name)
+ sinv.currency = "USD"
+ sinv.taxes_and_charges = ""
+ sinv.taxes = ""
+ sinv.items[0].qty = 6
+ sinv.insert()
+ sinv.submit()
+ columns, data, message, chart = execute(
+ {
+ "company": "_Test Company",
+ "period_start_date": "2021-06-01",
+ "period_end_date": "2021-06-30",
+ "sales_order": [so.name],
+ }
+ )
+
+ # report defaults to company currency.
+ expected_value = [
+ {
+ "name": so.name,
+ "submitted": datetime.date(2021, 6, 15),
+ "status": "Completed",
+ "payment_term": None,
+ "description": "_Test 50-50",
+ "due_date": datetime.date(2021, 6, 30),
+ "invoice_portion": 50.0,
+ "currency": frappe.get_cached_value("Company", '_Test Company','default_currency'),
+ "base_payment_amount": 3500000.0,
+ "paid_amount": 3500000.0,
+ "invoices": ","+sinv.name,
+ },
+ {
+ "name": so.name,
+ "submitted": datetime.date(2021, 6, 15),
+ "status": "Partly Paid",
+ "payment_term": None,
+ "description": "_Test 50-50",
+ "due_date": datetime.date(2021, 7, 15),
+ "invoice_portion": 50.0,
+ "currency": frappe.get_cached_value("Company", '_Test Company','default_currency'),
+ "base_payment_amount": 3500000.0,
+ "paid_amount": 700000.0,
+ "invoices": ","+sinv.name,
+ },
+ ]
+ self.assertEqual(data, expected_value)
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
}
|
|
frappe__erpnext-22736@33bf057
|
frappe/erpnext
|
Python
| 22,736
|
fix: reset homepage to home after unchecking products page
|
Resets the homepage back to "home" when "home page is products" is unchecked.
fixes: #22733
|
2020-07-18T18:48:35Z
|
Turning on, then off "Homepage is Products" on the Products settings page does not revert back to Home Page
By default, ERPNext provides a landing page for the main index of the site. If you activate "Home Page is Products" under the Products Settings section, save it, and then deactivate the "Home Page is Products" value, the site still acts as if the feature were enabled. The expected behavior is to revert to the landing-page style for the index page.
|
I've made a fix for this. Please try and let me know if it works fine.
|
[
{
"body": "By default, ERPNext provides a landing page for the main index of the site. If you activate \"Home Page is Products\" under the Products Settings section, save it, then deactivate the \"Home Page is Products\" value, the site still acts as-if the feature is enabled. The expected behavior of this is to revert back to the landing page style for the index page.",
"number": 22733,
"title": "Turning on, then off \"Homepage is Products\" on the Products settings page does not revert back to Home Page"
}
] |
d81372a39cb527aa4efb5591df07f1029bdb6dfd
|
{
"head_commit": "33bf0574a3bcb241283838bfd10bbcae473a844b",
"head_commit_message": "chore: fix get single value call\n\nSigned-off-by: Chinmay D. Pai <[email protected]>",
"patch_to_review": "diff --git a/erpnext/portal/doctype/products_settings/products_settings.py b/erpnext/portal/doctype/products_settings/products_settings.py\nindex 82afebf2f1e5..cb0746277470 100644\n--- a/erpnext/portal/doctype/products_settings/products_settings.py\n+++ b/erpnext/portal/doctype/products_settings/products_settings.py\n@@ -11,9 +11,9 @@\n class ProductsSettings(Document):\n \tdef validate(self):\n \t\tif self.home_page_is_products:\n-\t\t\twebsite_settings = frappe.get_doc('Website Settings')\n-\t\t\twebsite_settings.home_page = 'products'\n-\t\t\twebsite_settings.save()\n+\t\t\tfrappe.db.set_value(\"Website Settings\", \"home_page\", \"products\")\n+\t\telif frappe.db.get_single_value(\"Website Settings\", \"home_page\") == 'products':\n+\t\t\tfrappe.db.set_value(\"Website Settings\", \"home_page\", \"home\")\n \n \t\tself.validate_field_filters()\n \t\tself.validate_attribute_filters()\n@@ -40,4 +40,5 @@ def home_page_is_products(doc, method):\n \thome_page_is_products = cint(frappe.db.get_single_value('Products Settings', 'home_page_is_products'))\n \tif home_page_is_products:\n \t\tdoc.home_page = 'products'\n-\n+\telif doc.home_page == \"products\":\n+\t\tdoc.home_page = 'home'\n"
}
|
[
{
"diff_hunk": "@@ -40,4 +40,5 @@ def home_page_is_products(doc, method):\n \thome_page_is_products = cint(frappe.db.get_single_value('Products Settings', 'home_page_is_products'))\n \tif home_page_is_products:\n \t\tdoc.home_page = 'products'\n-\n+\telif doc.home_page == \"products\":",
"line": null,
"original_line": 43,
"original_start_line": null,
"path": "erpnext/portal/doctype/products_settings/products_settings.py",
"start_line": null,
"text": "@user1:\nI am not sure if this block should be added. What if someone is unaware of Product Settings and is trying to set the homepage as \"products\" from Website Settings itself it will keep on changing the homepage to home and the user will get confused from where this is coming. Also currently it throws the following error if you try to set the homepage as \"products\" for this PR\r\n\r\n\r\n\n\n@author:\nSo according to this logic, whenever homepage is set to products and then later \"Home Page is Products\" is unchecked, it should reset back to home. It won't get triggered any other time. I do not see what the problem is with the logic?\r\n\r\nFor the error, as you see, I haven't changed anything other than adding a condition that checks whether the current homepage is set to products in this pull request.\n\n@author:\nthere is a problem with the logic, apologies. ill fix it."
}
] |
3e503e44040e397fc16b7260830bd8279b7043f6
|
diff --git a/erpnext/portal/doctype/products_settings/products_settings.py b/erpnext/portal/doctype/products_settings/products_settings.py
index 82afebf2f1e5..b984aeb67dfe 100644
--- a/erpnext/portal/doctype/products_settings/products_settings.py
+++ b/erpnext/portal/doctype/products_settings/products_settings.py
@@ -11,9 +11,9 @@
class ProductsSettings(Document):
def validate(self):
if self.home_page_is_products:
- website_settings = frappe.get_doc('Website Settings')
- website_settings.home_page = 'products'
- website_settings.save()
+ frappe.db.set_value("Website Settings", "home_page", "products")
+ elif frappe.db.get_single_value("Website Settings", "home_page") == 'products':
+ frappe.db.set_value("Website Settings", "home_page", "home")
self.validate_field_filters()
self.validate_attribute_filters()
@@ -40,4 +40,3 @@ def home_page_is_products(doc, method):
home_page_is_products = cint(frappe.db.get_single_value('Products Settings', 'home_page_is_products'))
if home_page_is_products:
doc.home_page = 'products'
-
|
{
"difficulty": "low",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
frappe__erpnext-29082@8d10322
|
frappe/erpnext
|
Python
| 29,082
|
fix: wrong payment days in salary slip for employees joining/leaving during mid payroll dates
|
If an employee joins or leaves part-way through the payroll dates, their payment days are calculated incorrectly.
This PR aims to consider joining and relieving dates while calculating payment days.
Closes #27943
Salary slip of an employee who joined 3 days after payroll start date and was present for the rest of the month:
<details>
<summary>Payroll Settings:</summary>

</details>
<details>
<summary>Before:</summary>
Absent days: 3

</details>
<details>
<summary>After:</summary>
Absent days: 0

</details>
Total working days remain unaffected as they are based on the payroll processing dates, and many components are calculated on that basis.
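A minimal sketch of the date handling described above (ignoring holidays and leave without pay; the function and argument names are illustrative, not the actual salary slip API):

```python
from datetime import date

def payment_days(payroll_start: date, payroll_end: date,
                 joining: date = None, relieving: date = None) -> int:
    # Clamp the payroll window to the employee's service period and count the days.
    start = max(payroll_start, joining) if joining else payroll_start
    end = min(payroll_end, relieving) if relieving else payroll_end
    return max((end - start).days + 1, 0)

# An employee joining 3 days after the payroll start date, as in the screenshots above:
print(payment_days(date(2021, 12, 1), date(2021, 12, 31), joining=date(2021, 12, 4)))  # 28
```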
|
2021-12-30T11:17:53Z
|
In Salary Slip, negative payment days issue for Left Employees.
**Description Of the Issue:**
In version 13.11.1, when we create a salary slip for employees who have left, it shows negative payment days and the Earnings & Deductions table is not fetched for those employees.
**Version:**
version 13.11.1
|

DESCRIPTION:
1) The employee was relieved on 09-07-2021; payment days for that employee should be two days only, but instead it shows -19 (a negative payment days value).

2) The Earnings and Deductions table is also not fetched.
|
[
{
"body": "**Description Of the Issue:**\r\nIn version 13.11.1 , When we created the salary slip against the left employees it shows negative payment days and Earning & Deduction table is not fetched for respective left employees.\r\n**Version:**\r\nversion 13.11.1\r\n",
"number": 27943,
"title": "In Salary Slip, negative payment days issue for Left Employees."
}
] |
385830bb51997fb2411a9ac0945a812b2bb47683
|
{
"head_commit": "8d103224f1b777054356ceada3c86ea287ef5eee",
"head_commit_message": "chore: optimize comparission logic",
"patch_to_review": "diff --git a/erpnext/payroll/doctype/salary_slip/salary_slip.py b/erpnext/payroll/doctype/salary_slip/salary_slip.py\nindex b035292c0b6a..542d5c809833 100644\n--- a/erpnext/payroll/doctype/salary_slip/salary_slip.py\n+++ b/erpnext/payroll/doctype/salary_slip/salary_slip.py\n@@ -321,13 +321,25 @@ def get_working_days_details(self, joining_date=None, relieving_date=None, lwp=N\n \t\t\tself.payment_days = 0\n \n \tdef get_unmarked_days(self):\n-\t\tmarked_days = frappe.get_all(\"Attendance\", filters = {\n-\t\t\t\t\t\"attendance_date\": [\"between\", [self.start_date, self.end_date]],\n-\t\t\t\t\t\"employee\": self.employee,\n-\t\t\t\t\t\"docstatus\": 1\n-\t\t\t\t}, fields = [\"COUNT(*) as marked_days\"])[0].marked_days\n+\t\tunmarked_days = self.total_working_days\n+\t\tjoining_date, relieving_date = frappe.get_cached_value(\"Employee\", self.employee,\n+\t\t\t[\"date_of_joining\", \"relieving_date\"])\n+\n+\t\tif joining_date:\n+\t\t\tif getdate(self.start_date) < joining_date <= getdate(self.end_date):\n+\t\t\t\tunmarked_days -= date_diff(joining_date, self.start_date)\n+\n+\t\tif relieving_date:\n+\t\t\tif getdate(self.start_date) <= relieving_date < getdate(self.end_date):\n+\t\t\t\tunmarked_days -= date_diff(relieving_date, self.end_date)\n+\n+\t\tunmarked_days -= frappe.get_all(\"Attendance\", filters = {\n+\t\t\t\"attendance_date\": [\"between\", [self.start_date, self.end_date]],\n+\t\t\t\"employee\": self.employee,\n+\t\t\t\"docstatus\": 1\n+\t\t}, fields = [\"COUNT(*) as marked_days\"])[0].marked_days\n \n-\t\treturn self.total_working_days - marked_days\n+\t\treturn unmarked_days\n \n \n \tdef get_payment_days(self, joining_date, relieving_date, include_holidays_in_total_working_days):\n"
}
|
[
{
"diff_hunk": "@@ -321,13 +321,25 @@ def get_working_days_details(self, joining_date=None, relieving_date=None, lwp=N\n \t\t\tself.payment_days = 0\n \n \tdef get_unmarked_days(self):\n-\t\tmarked_days = frappe.get_all(\"Attendance\", filters = {\n-\t\t\t\t\t\"attendance_date\": [\"between\", [self.start_date, self.end_date]],\n-\t\t\t\t\t\"employee\": self.employee,\n-\t\t\t\t\t\"docstatus\": 1\n-\t\t\t\t}, fields = [\"COUNT(*) as marked_days\"])[0].marked_days\n+\t\tunmarked_days = self.total_working_days\n+\t\tjoining_date, relieving_date = frappe.get_cached_value(\"Employee\", self.employee,\n+\t\t\t[\"date_of_joining\", \"relieving_date\"])\n+\n+\t\tif joining_date:\n+\t\t\tif getdate(self.start_date) < joining_date <= getdate(self.end_date):\n+\t\t\t\tunmarked_days -= date_diff(joining_date, self.start_date)\n+\n+\t\tif relieving_date:\n+\t\t\tif getdate(self.start_date) <= relieving_date < getdate(self.end_date):\n+\t\t\t\tunmarked_days -= date_diff(relieving_date, self.end_date)",
"line": null,
"original_line": 334,
"original_start_line": null,
"path": "erpnext/payroll/doctype/salary_slip/salary_slip.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n\t\t\t\tunmarked_days -= date_diff(self.end_date, relieving_date)\r\n```\r\n\r\n`date_diff` calculates difference as `end_date - start_date`:\r\n\r\nhttps://github.com/frappe/frappe/blob/acac60d061bbb1ee4957ca613c92e1526098aa3b/frappe/utils/data.py#L166-L167\r\n\r\nSo in this case since `end_date` would be after `relieving_date` its the end date\n\n@user1:\nThanks for the fix. \r\n\r\n- Can you also add separate tests for both these cases: Date of Joining and Relieving Date. \r\n- Also, the description is fairly understandable, but can you update the description with a screenshot depicting the before (negative days calculated) and after (payment days calculation working fine).\n\n@author:\nSure, will do."
}
] |
d8e42085e707db829dac1a217dd3bd2ce010431d
|
diff --git a/erpnext/hr/doctype/attendance/attendance.py b/erpnext/hr/doctype/attendance/attendance.py
index b1eaaf8b5872..b1e373e21815 100644
--- a/erpnext/hr/doctype/attendance/attendance.py
+++ b/erpnext/hr/doctype/attendance/attendance.py
@@ -174,16 +174,22 @@ def get_month_map():
def get_unmarked_days(employee, month, exclude_holidays=0):
import calendar
month_map = get_month_map()
-
today = get_datetime()
- dates_of_month = ['{}-{}-{}'.format(today.year, month_map[month], r) for r in range(1, calendar.monthrange(today.year, month_map[month])[1] + 1)]
+ joining_date, relieving_date = frappe.get_cached_value("Employee", employee, ["date_of_joining", "relieving_date"])
+ start_day = 1
+ end_day = calendar.monthrange(today.year, month_map[month])[1] + 1
+
+ if joining_date and joining_date.month == month_map[month]:
+ start_day = joining_date.day
- length = len(dates_of_month)
- month_start, month_end = dates_of_month[0], dates_of_month[length-1]
+ if relieving_date and relieving_date.month == month_map[month]:
+ end_day = relieving_date.day + 1
+ dates_of_month = ['{}-{}-{}'.format(today.year, month_map[month], r) for r in range(start_day, end_day)]
+ month_start, month_end = dates_of_month[0], dates_of_month[-1]
- records = frappe.get_all("Attendance", fields = ['attendance_date', 'employee'] , filters = [
+ records = frappe.get_all("Attendance", fields=['attendance_date', 'employee'], filters=[
["attendance_date", ">=", month_start],
["attendance_date", "<=", month_end],
["employee", "=", employee],
@@ -200,7 +206,7 @@ def get_unmarked_days(employee, month, exclude_holidays=0):
for date in dates_of_month:
date_time = get_datetime(date)
- if today.day == date_time.day and today.month == date_time.month:
+ if today.day <= date_time.day and today.month <= date_time.month:
break
if date_time not in marked_days:
unmarked_days.append(date)
diff --git a/erpnext/hr/doctype/attendance/test_attendance.py b/erpnext/hr/doctype/attendance/test_attendance.py
index a770d70ffa93..118cc987efb3 100644
--- a/erpnext/hr/doctype/attendance/test_attendance.py
+++ b/erpnext/hr/doctype/attendance/test_attendance.py
@@ -4,17 +4,104 @@
import unittest
import frappe
-from frappe.utils import nowdate
+from frappe.utils import add_days, get_first_day, getdate, nowdate
+
+from erpnext.hr.doctype.attendance.attendance import (
+ get_month_map,
+ get_unmarked_days,
+ mark_attendance,
+)
+from erpnext.hr.doctype.employee.test_employee import make_employee
+from erpnext.hr.doctype.leave_application.test_leave_application import get_first_sunday
test_records = frappe.get_test_records('Attendance')
class TestAttendance(unittest.TestCase):
def test_mark_absent(self):
- from erpnext.hr.doctype.employee.test_employee import make_employee
employee = make_employee("[email protected]")
date = nowdate()
frappe.db.delete('Attendance', {'employee':employee, 'attendance_date':date})
- from erpnext.hr.doctype.attendance.attendance import mark_attendance
attendance = mark_attendance(employee, date, 'Absent')
fetch_attendance = frappe.get_value('Attendance', {'employee':employee, 'attendance_date':date, 'status':'Absent'})
self.assertEqual(attendance, fetch_attendance)
+
+ def test_unmarked_days(self):
+ first_day = get_first_day(getdate())
+
+ employee = make_employee('[email protected]', date_of_joining=add_days(first_day, -1))
+ frappe.db.delete('Attendance', {'employee': employee})
+
+ from erpnext.payroll.doctype.salary_slip.test_salary_slip import make_holiday_list
+ holiday_list = make_holiday_list()
+ frappe.db.set_value('Employee', employee, 'holiday_list', holiday_list)
+
+ first_sunday = get_first_sunday(holiday_list)
+ mark_attendance(employee, first_day, 'Present')
+ month_name = get_month_name(first_day)
+
+ unmarked_days = get_unmarked_days(employee, month_name)
+ unmarked_days = [getdate(date) for date in unmarked_days]
+
+ # attendance already marked for the day
+ self.assertNotIn(first_day, unmarked_days)
+ # attendance unmarked
+ self.assertIn(getdate(add_days(first_day, 1)), unmarked_days)
+ # holiday considered in unmarked days
+ self.assertIn(first_sunday, unmarked_days)
+
+ def test_unmarked_days_excluding_holidays(self):
+ first_day = get_first_day(getdate())
+
+ employee = make_employee('[email protected]', date_of_joining=add_days(first_day, -1))
+ frappe.db.delete('Attendance', {'employee': employee})
+
+ from erpnext.payroll.doctype.salary_slip.test_salary_slip import make_holiday_list
+ holiday_list = make_holiday_list()
+ frappe.db.set_value('Employee', employee, 'holiday_list', holiday_list)
+
+ first_sunday = get_first_sunday(holiday_list)
+ mark_attendance(employee, first_day, 'Present')
+ month_name = get_month_name(first_day)
+
+ unmarked_days = get_unmarked_days(employee, month_name, exclude_holidays=True)
+ unmarked_days = [getdate(date) for date in unmarked_days]
+
+ # attendance already marked for the day
+ self.assertNotIn(first_day, unmarked_days)
+ # attendance unmarked
+ self.assertIn(getdate(add_days(first_day, 1)), unmarked_days)
+ # holidays not considered in unmarked days
+ self.assertNotIn(first_sunday, unmarked_days)
+
+ def test_unmarked_days_as_per_joining_and_relieving_dates(self):
+ first_day = get_first_day(getdate())
+
+ doj = add_days(first_day, 1)
+ relieving_date = add_days(first_day, 5)
+ employee = make_employee('[email protected]', date_of_joining=doj,
+ date_of_relieving=relieving_date)
+ frappe.db.delete('Attendance', {'employee': employee})
+
+ attendance_date = add_days(first_day, 2)
+ mark_attendance(employee, attendance_date, 'Present')
+ month_name = get_month_name(first_day)
+
+ unmarked_days = get_unmarked_days(employee, month_name)
+ unmarked_days = [getdate(date) for date in unmarked_days]
+
+ # attendance already marked for the day
+ self.assertNotIn(attendance_date, unmarked_days)
+ # date before doj not in unmarked days
+ self.assertNotIn(add_days(doj, -1), unmarked_days)
+ # date after relieving not in unmarked days
+ self.assertNotIn(add_days(relieving_date, 1), unmarked_days)
+
+ def tearDown(self):
+ frappe.db.rollback()
+
+
+def get_month_name(date):
+ month_number = date.month
+ for month, number in get_month_map().items():
+ if number == month_number:
+ return month
\ No newline at end of file
diff --git a/erpnext/payroll/doctype/salary_slip/salary_slip.py b/erpnext/payroll/doctype/salary_slip/salary_slip.py
index d2a39989a614..b44dbb926d2b 100644
--- a/erpnext/payroll/doctype/salary_slip/salary_slip.py
+++ b/erpnext/payroll/doctype/salary_slip/salary_slip.py
@@ -307,28 +307,59 @@ def get_working_days_details(self, joining_date=None, relieving_date=None, lwp=N
if payroll_based_on == "Attendance":
self.payment_days -= flt(absent)
- unmarked_days = self.get_unmarked_days()
consider_unmarked_attendance_as = frappe.db.get_value("Payroll Settings", None, "consider_unmarked_attendance_as") or "Present"
if payroll_based_on == "Attendance" and consider_unmarked_attendance_as =="Absent":
+ unmarked_days = self.get_unmarked_days(include_holidays_in_total_working_days)
self.absent_days += unmarked_days #will be treated as absent
self.payment_days -= unmarked_days
- if include_holidays_in_total_working_days:
- for holiday in holidays:
- if not frappe.db.exists("Attendance", {"employee": self.employee, "attendance_date": holiday, "docstatus": 1 }):
- self.payment_days += 1
else:
self.payment_days = 0
- def get_unmarked_days(self):
- marked_days = frappe.get_all("Attendance", filters = {
- "attendance_date": ["between", [self.start_date, self.end_date]],
- "employee": self.employee,
- "docstatus": 1
- }, fields = ["COUNT(*) as marked_days"])[0].marked_days
+ def get_unmarked_days(self, include_holidays_in_total_working_days):
+ unmarked_days = self.total_working_days
+ joining_date, relieving_date = frappe.get_cached_value("Employee", self.employee,
+ ["date_of_joining", "relieving_date"])
+ start_date = self.start_date
+ end_date = self.end_date
+
+ if joining_date and (getdate(self.start_date) < joining_date <= getdate(self.end_date)):
+ start_date = joining_date
+ unmarked_days = self.get_unmarked_days_based_on_doj_or_relieving(unmarked_days,
+ include_holidays_in_total_working_days, self.start_date, joining_date)
+
+ if relieving_date and (getdate(self.start_date) <= relieving_date < getdate(self.end_date)):
+ end_date = relieving_date
+ unmarked_days = self.get_unmarked_days_based_on_doj_or_relieving(unmarked_days,
+ include_holidays_in_total_working_days, relieving_date, self.end_date)
+
+ # exclude days for which attendance has been marked
+ unmarked_days -= frappe.get_all("Attendance", filters = {
+ "attendance_date": ["between", [start_date, end_date]],
+ "employee": self.employee,
+ "docstatus": 1
+ }, fields = ["COUNT(*) as marked_days"])[0].marked_days
- return self.total_working_days - marked_days
+ return unmarked_days
+
+ def get_unmarked_days_based_on_doj_or_relieving(self, unmarked_days,
+ include_holidays_in_total_working_days, start_date, end_date):
+ """
+ Exclude days before DOJ or after
+ Relieving Date from unmarked days
+ """
+ from erpnext.hr.doctype.employee.employee import is_holiday
+
+ if include_holidays_in_total_working_days:
+ unmarked_days -= date_diff(end_date, start_date)
+ else:
+ # exclude only if not holidays
+ for days in range(date_diff(end_date, start_date)):
+ date = add_days(end_date, -days)
+ if not is_holiday(self.employee, date):
+ unmarked_days -= 1
+ return unmarked_days
def get_payment_days(self, joining_date, relieving_date, include_holidays_in_total_working_days):
if not joining_date:
diff --git a/erpnext/payroll/doctype/salary_slip/test_salary_slip.py b/erpnext/payroll/doctype/salary_slip/test_salary_slip.py
index 6a5debf99845..fe15f2d3faaf 100644
--- a/erpnext/payroll/doctype/salary_slip/test_salary_slip.py
+++ b/erpnext/payroll/doctype/salary_slip/test_salary_slip.py
@@ -7,10 +7,12 @@
import frappe
from frappe.model.document import Document
+from frappe.tests.utils import change_settings
from frappe.utils import (
add_days,
add_months,
cstr,
+ date_diff,
flt,
get_first_day,
get_last_day,
@@ -21,6 +23,7 @@
import erpnext
from erpnext.accounts.utils import get_fiscal_year
+from erpnext.hr.doctype.attendance.attendance import mark_attendance
from erpnext.hr.doctype.employee.test_employee import make_employee
from erpnext.hr.doctype.leave_allocation.test_leave_allocation import create_leave_allocation
from erpnext.hr.doctype.leave_type.test_leave_type import create_leave_type
@@ -37,17 +40,17 @@ def setUp(self):
setup_test()
def tearDown(self):
+ frappe.db.rollback()
frappe.db.set_value("Payroll Settings", None, "include_holidays_in_total_working_days", 0)
frappe.set_user("Administrator")
+ @change_settings("Payroll Settings", {
+ "payroll_based_on": "Attendance",
+ "daily_wages_fraction_for_half_day": 0.75
+ })
def test_payment_days_based_on_attendance(self):
- from erpnext.hr.doctype.attendance.attendance import mark_attendance
no_of_days = self.get_no_of_days()
- # Payroll based on attendance
- frappe.db.set_value("Payroll Settings", None, "payroll_based_on", "Attendance")
- frappe.db.set_value("Payroll Settings", None, "daily_wages_fraction_for_half_day", 0.75)
-
emp_id = make_employee("[email protected]")
frappe.db.set_value("Employee", emp_id, {"relieving_date": None, "status": "Active"})
@@ -85,13 +88,77 @@ def test_payment_days_based_on_attendance(self):
self.assertEqual(ss.gross_pay, gross_pay)
- frappe.db.set_value("Payroll Settings", None, "payroll_based_on", "Leave")
+ @change_settings("Payroll Settings", {
+ "payroll_based_on": "Attendance",
+ "consider_unmarked_attendance_as": "Absent",
+ "include_holidays_in_total_working_days": True
+ })
+ def test_payment_days_for_mid_joinee_including_holidays(self):
+ from erpnext.hr.doctype.holiday_list.holiday_list import is_holiday
+
+ no_of_days = self.get_no_of_days()
+ month_start_date, month_end_date = get_first_day(nowdate()), get_last_day(nowdate())
+
+ new_emp_id = make_employee("[email protected]")
+ joining_date, relieving_date = add_days(month_start_date, 3), add_days(month_end_date, -5)
+ frappe.db.set_value("Employee", new_emp_id, {
+ "date_of_joining": joining_date,
+ "relieving_date": relieving_date,
+ "status": "Left"
+ })
+
+ holidays = 0
+
+ for days in range(date_diff(relieving_date, joining_date) + 1):
+ date = add_days(joining_date, days)
+ if not is_holiday("Salary Slip Test Holiday List", date):
+ mark_attendance(new_emp_id, date, 'Present', ignore_validate=True)
+ else:
+ holidays += 1
+
+ new_ss = make_employee_salary_slip("[email protected]", "Monthly", "Test Payment Based On Attendence")
+
+ self.assertEqual(new_ss.total_working_days, no_of_days[0])
+ self.assertEqual(new_ss.payment_days, no_of_days[0] - holidays - 8)
+
+ @change_settings("Payroll Settings", {
+ "payroll_based_on": "Attendance",
+ "consider_unmarked_attendance_as": "Absent",
+ "include_holidays_in_total_working_days": False
+ })
+ def test_payment_days_for_mid_joinee_excluding_holidays(self):
+ from erpnext.hr.doctype.holiday_list.holiday_list import is_holiday
- def test_payment_days_based_on_leave_application(self):
no_of_days = self.get_no_of_days()
+ month_start_date, month_end_date = get_first_day(nowdate()), get_last_day(nowdate())
- # Payroll based on attendance
- frappe.db.set_value("Payroll Settings", None, "payroll_based_on", "Leave")
+ new_emp_id = make_employee("[email protected]")
+ joining_date, relieving_date = add_days(month_start_date, 3), add_days(month_end_date, -5)
+ frappe.db.set_value("Employee", new_emp_id, {
+ "date_of_joining": joining_date,
+ "relieving_date": relieving_date,
+ "status": "Left"
+ })
+
+ holidays = 0
+
+ for days in range(date_diff(relieving_date, joining_date) + 1):
+ date = add_days(joining_date, days)
+ if not is_holiday("Salary Slip Test Holiday List", date):
+ mark_attendance(new_emp_id, date, 'Present', ignore_validate=True)
+ else:
+ holidays += 1
+
+ new_ss = make_employee_salary_slip("[email protected]", "Monthly", "Test Payment Based On Attendence")
+
+ self.assertEqual(new_ss.total_working_days, no_of_days[0] - no_of_days[1])
+ self.assertEqual(new_ss.payment_days, no_of_days[0] - holidays - 8)
+
+ @change_settings("Payroll Settings", {
+ "payroll_based_on": "Leave"
+ })
+ def test_payment_days_based_on_leave_application(self):
+ no_of_days = self.get_no_of_days()
emp_id = make_employee("[email protected]")
frappe.db.set_value("Employee", emp_id, {"relieving_date": None, "status": "Active"})
@@ -133,8 +200,9 @@ def test_payment_days_based_on_leave_application(self):
self.assertEqual(ss.payment_days, days_in_month - no_of_holidays - 4)
- frappe.db.set_value("Payroll Settings", None, "payroll_based_on", "Leave")
-
+ @change_settings("Payroll Settings", {
+ "payroll_based_on": "Attendance"
+ })
def test_payment_days_in_salary_slip_based_on_timesheet(self):
from erpnext.hr.doctype.attendance.attendance import mark_attendance
from erpnext.projects.doctype.timesheet.test_timesheet import (
@@ -145,9 +213,6 @@ def test_payment_days_in_salary_slip_based_on_timesheet(self):
make_salary_slip as make_salary_slip_for_timesheet,
)
- # Payroll based on attendance
- frappe.db.set_value("Payroll Settings", None, "payroll_based_on", "Attendance")
-
emp = make_employee("[email protected]", company="_Test Company", holiday_list="Salary Slip Test Holiday List")
frappe.db.set_value("Employee", emp, {"relieving_date": None, "status": "Active"})
@@ -185,17 +250,15 @@ def test_payment_days_in_salary_slip_based_on_timesheet(self):
self.assertEqual(salary_slip.gross_pay, flt(gross_pay, 2))
- frappe.db.set_value("Payroll Settings", None, "payroll_based_on", "Leave")
-
+ @change_settings("Payroll Settings", {
+ "payroll_based_on": "Attendance"
+ })
def test_component_amount_dependent_on_another_payment_days_based_component(self):
from erpnext.hr.doctype.attendance.attendance import mark_attendance
from erpnext.payroll.doctype.salary_structure.test_salary_structure import (
create_salary_structure_assignment,
)
- # Payroll based on attendance
- frappe.db.set_value("Payroll Settings", None, "payroll_based_on", "Attendance")
-
salary_structure = make_salary_structure_for_payment_days_based_component_dependency()
employee = make_employee("[email protected]", company="_Test Company")
@@ -238,11 +301,12 @@ def test_component_amount_dependent_on_another_payment_days_based_component(self
expected_amount = flt((flt(ss.gross_pay) - payment_days_based_comp_amount) * 0.12, precision)
self.assertEqual(actual_amount, expected_amount)
- frappe.db.set_value("Payroll Settings", None, "payroll_based_on", "Leave")
+ @change_settings("Payroll Settings", {
+ "include_holidays_in_total_working_days": 1
+ })
def test_salary_slip_with_holidays_included(self):
no_of_days = self.get_no_of_days()
- frappe.db.set_value("Payroll Settings", None, "include_holidays_in_total_working_days", 1)
make_employee("[email protected]")
frappe.db.set_value("Employee", frappe.get_value("Employee",
{"employee_name":"[email protected]"}, "name"), "relieving_date", None)
@@ -256,9 +320,11 @@ def test_salary_slip_with_holidays_included(self):
self.assertEqual(ss.earnings[1].amount, 3000)
self.assertEqual(ss.gross_pay, 78000)
+ @change_settings("Payroll Settings", {
+ "include_holidays_in_total_working_days": 0
+ })
def test_salary_slip_with_holidays_excluded(self):
no_of_days = self.get_no_of_days()
- frappe.db.set_value("Payroll Settings", None, "include_holidays_in_total_working_days", 0)
make_employee("[email protected]")
frappe.db.set_value("Employee", frappe.get_value("Employee",
{"employee_name":"[email protected]"}, "name"), "relieving_date", None)
@@ -273,14 +339,15 @@ def test_salary_slip_with_holidays_excluded(self):
self.assertEqual(ss.earnings[1].amount, 3000)
self.assertEqual(ss.gross_pay, 78000)
+ @change_settings("Payroll Settings", {
+ "include_holidays_in_total_working_days": 1
+ })
def test_payment_days(self):
from erpnext.payroll.doctype.salary_structure.test_salary_structure import (
create_salary_structure_assignment,
)
no_of_days = self.get_no_of_days()
- # Holidays not included in working days
- frappe.db.set_value("Payroll Settings", None, "include_holidays_in_total_working_days", 1)
# set joinng date in the same month
employee = make_employee("[email protected]")
@@ -338,11 +405,12 @@ def test_employee_salary_slip_read_permission(self):
frappe.set_user("[email protected]")
self.assertTrue(salary_slip_test_employee.has_permission("read"))
+ @change_settings("Payroll Settings", {
+ "email_salary_slip_to_employee": 1
+ })
def test_email_salary_slip(self):
frappe.db.sql("delete from `tabEmail Queue`")
- frappe.db.set_value("Payroll Settings", None, "email_salary_slip_to_employee", 1)
-
make_employee("[email protected]")
ss = make_employee_salary_slip("[email protected]", "Monthly", "Test Salary Slip Email")
ss.company = "_Test Company"
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
frappe__erpnext-14770@c7c897b
|
frappe/erpnext
|
Python
| 14,770
|
on delete contact update issue
|
fixes https://github.com/frappe/frappe/issues/5724
|
2018-07-02T05:53:43Z
|
Missing tabIssue
Bug report
I installed only frappe, without erpnext, and created a new Contact. When I tried to delete it I got the error stated below. The error appears every time a Contact is deleted. The operation needs the table "Issue", which belongs to erpnext. The error does not appear when erpnext is installed on the site.
Steps to reproduce:
Install frappe only
Go to Contact List
Create new Contact
Delete the Contact
Error:
-----------------------------------------------------------------------------------------------------------------------------------------
Traceback (most recent call last):
File "/home/frappe/frappe-bench/apps/frappe/frappe/app.py", line 62, in application
response = frappe.handler.handle()
File "/home/frappe/frappe-bench/apps/frappe/frappe/handler.py", line 22, in handle
data = execute_cmd(cmd)
File "/home/frappe/frappe-bench/apps/frappe/frappe/handler.py", line 53, in execute_cmd
return frappe.call(method, **frappe.form_dict)
File "/home/frappe/frappe-bench/apps/frappe/frappe/__init__.py", line 942, in call
return fn(*args, **newargs)
File "/home/frappe/frappe-bench/apps/frappe/frappe/client.py", line 237, in delete
frappe.delete_doc(doctype, name, ignore_missing=False)
File "/home/frappe/frappe-bench/apps/frappe/frappe/__init__.py", line 676, in delete_doc
ignore_permissions, flags, ignore_on_trash, ignore_missing)
File "/home/frappe/frappe-bench/apps/frappe/frappe/model/delete_doc.py", line 78, in delete_doc
doc.run_method("on_trash")
File "/home/frappe/frappe-bench/apps/frappe/frappe/model/document.py", line 765, in run_method
out = Document.hook(fn)(self, *args, **kwargs)
File "/home/frappe/frappe-bench/apps/frappe/frappe/model/document.py", line 1040, in composer
return composed(self, method, *args, **kwargs)
File "/home/frappe/frappe-bench/apps/frappe/frappe/model/document.py", line 1023, in runner
add_to_return_value(self, fn(self, *args, **kwargs))
File "/home/frappe/frappe-bench/apps/frappe/frappe/model/document.py", line 759, in <lambda>
fn = lambda self, *args, **kwargs: getattr(self, method)(*args, **kwargs)
File "/home/frappe/frappe-bench/apps/frappe/frappe/contacts/doctype/contact/contact.py", line 45, in on_trash
self.name)
File "/home/frappe/frappe-bench/apps/frappe/frappe/database.py", line 199, in sql
self._cursor.execute(query, values)
File "/home/frappe/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/cursors.py", line 165, in execute
result = self._query(query)
File "/home/frappe/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/cursors.py", line 321, in _query
conn.query(q)
File "/home/frappe/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/connections.py", line 860, in query
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
File "/home/frappe/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/connections.py", line 1061, in _read_query_result
result.read()
File "/home/frappe/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/connections.py", line 1349, in read
first_packet = self.connection._read_packet()
File "/home/frappe/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/connections.py", line 1018, in _read_packet
packet.check_error()
File "/home/frappe/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/connections.py", line 384, in check_error
err.raise_mysql_exception(self._data)
File "/home/frappe/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/err.py", line 107, in raise_mysql_exception
raise errorclass(errno, errval)
ProgrammingError: (1146, u"Table 'a7708dafc596f3dc.tabIssue' doesn't exist")
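For context, a minimal sketch of the hook-based fix pattern, adapted from the merged patch recorded further down in this entry: erpnext registers a handler for Contact's `on_trash` event, so `tabIssue` is only touched from erpnext-side code rather than from frappe itself.
```python
# Excerpt adapted from the merged patch below (not a standalone script).

# erpnext/hooks.py -- register an erpnext-side handler for Contact deletion
doc_events = {
    "Contact": {
        "on_trash": "erpnext.support.doctype.issue.issue.update_issue"
    }
}

# erpnext/support/doctype/issue/issue.py -- clear the link on any Issues
import frappe


def update_issue(contact, method):
    """Called when a Contact is deleted."""
    frappe.db.sql("""UPDATE `tabIssue` set contact='' where contact=%s""", contact.name)
```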
|
[
{
"body": "Bug report\r\n\r\nI installed only frappe without erpnext and created a new Contact. When I tried to delete it I got the error stated below. The error appears every time when deleting Contact. The operation needs table \"Issue\" which is in erpnext. The error does not appear when the erpnext is installed on the site.\r\n\r\nSteps to reproduce:\r\nInstall frappe only\r\nGo to Contact List\r\nCreate new Contact\r\nDelete the Contact\r\n\r\nError:\r\n-----------------------------------------------------------------------------------------------------------------------------------------\r\nTraceback (most recent call last):\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/app.py\", line 62, in application\r\n response = frappe.handler.handle()\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/handler.py\", line 22, in handle\r\n data = execute_cmd(cmd)\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/handler.py\", line 53, in execute_cmd\r\n return frappe.call(method, **frappe.form_dict)\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/__init__.py\", line 942, in call\r\n return fn(*args, **newargs)\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/client.py\", line 237, in delete\r\n frappe.delete_doc(doctype, name, ignore_missing=False)\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/__init__.py\", line 676, in delete_doc\r\n ignore_permissions, flags, ignore_on_trash, ignore_missing)\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/model/delete_doc.py\", line 78, in delete_doc\r\n doc.run_method(\"on_trash\")\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/model/document.py\", line 765, in run_method\r\n out = Document.hook(fn)(self, *args, **kwargs)\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/model/document.py\", line 1040, in composer\r\n return composed(self, method, *args, **kwargs)\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/model/document.py\", line 1023, in runner\r\n add_to_return_value(self, fn(self, *args, **kwargs))\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/model/document.py\", line 759, in <lambda>\r\n fn = lambda self, *args, **kwargs: getattr(self, method)(*args, **kwargs)\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/contacts/doctype/contact/contact.py\", line 45, in on_trash\r\n self.name)\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/database.py\", line 199, in sql\r\n self._cursor.execute(query, values)\r\n File \"/home/frappe/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/cursors.py\", line 165, in execute\r\n result = self._query(query)\r\n File \"/home/frappe/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/cursors.py\", line 321, in _query\r\n conn.query(q)\r\n File \"/home/frappe/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/connections.py\", line 860, in query\r\n self._affected_rows = self._read_query_result(unbuffered=unbuffered)\r\n File \"/home/frappe/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/connections.py\", line 1061, in _read_query_result\r\n result.read()\r\n File \"/home/frappe/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/connections.py\", line 1349, in read\r\n first_packet = self.connection._read_packet()\r\n File \"/home/frappe/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/connections.py\", line 1018, in _read_packet\r\n packet.check_error()\r\n File \"/home/frappe/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/connections.py\", line 384, in 
check_error\r\n err.raise_mysql_exception(self._data)\r\n File \"/home/frappe/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/err.py\", line 107, in raise_mysql_exception\r\n raise errorclass(errno, errval)\r\nProgrammingError: (1146, u\"Table 'a7708dafc596f3dc.tabIssue' doesn't exist\")",
"number": 5724,
"title": "Missing tabIssue"
}
] |
afe764264dcb42837b7e1ba8e8a2d52d664b443a
|
{
"head_commit": "c7c897bbd2194c631d8c0fe047cf7a00f6466d99",
"head_commit_message": "on delete contact update isuue\n\nremove contact from issue if the deleted contact exists",
"patch_to_review": "diff --git a/erpnext/custom_hooks/__init__.py b/erpnext/custom_hooks/__init__.py\nnew file mode 100644\nindex 000000000000..e69de29bb2d1\ndiff --git a/erpnext/custom_hooks/contacts.py b/erpnext/custom_hooks/contacts.py\nnew file mode 100644\nindex 000000000000..e1db33de288a\n--- /dev/null\n+++ b/erpnext/custom_hooks/contacts.py\n@@ -0,0 +1,11 @@\n+#!/bin/env python\n+# -*- coding: utf-8 -*-\n+\n+\"\"\"Update issue.\"\"\"\n+import frappe\n+\n+\n+def update_issue(contact, method):\n+\t\"\"\"Update tabIssue\"\"\"\n+\tfrappe.db.sql(\"\"\"update `tabIssue` set contact='' where contact=%s\"\"\",\t\n+-\t\t\tcontact.name)\ndiff --git a/erpnext/hooks.py b/erpnext/hooks.py\nindex 5f1e7d4243c5..c7bf641f9c77 100644\n--- a/erpnext/hooks.py\n+++ b/erpnext/hooks.py\n@@ -206,6 +206,9 @@\n \t},\n \t('Sales Invoice', 'Purchase Invoice', 'Delivery Note'): {\n \t\t'validate': 'erpnext.regional.india.utils.set_place_of_supply'\n+\t},\n+\t\"Contact\":{\n+\t\t\"on_trash\": \"erpnext.custom_hooks.contacts.update_issue\"\n \t}\n }\n \n"
}
|
[
{
"diff_hunk": "@@ -0,0 +1,11 @@\n+#!/bin/env python",
"line": null,
"original_line": 1,
"original_start_line": null,
"path": "erpnext/custom_hooks/contacts.py",
"start_line": null,
"text": "@user1:\nput this in `issue.py`"
}
] |
137d046a5aa1a46c397dacad53ce1fb5e82d9338
|
diff --git a/erpnext/hooks.py b/erpnext/hooks.py
index 5f1e7d4243c5..b5e130fe33a0 100644
--- a/erpnext/hooks.py
+++ b/erpnext/hooks.py
@@ -206,6 +206,9 @@
},
('Sales Invoice', 'Purchase Invoice', 'Delivery Note'): {
'validate': 'erpnext.regional.india.utils.set_place_of_supply'
+ },
+ "Contact":{
+ "on_trash": "erpnext.support.doctype.issue.issue.update_issue"
}
}
diff --git a/erpnext/support/doctype/issue/issue.py b/erpnext/support/doctype/issue/issue.py
index dfcc2a8936db..ef54b20efcc5 100644
--- a/erpnext/support/doctype/issue/issue.py
+++ b/erpnext/support/doctype/issue/issue.py
@@ -130,3 +130,8 @@ def set_multiple_status(names, status):
def has_website_permission(doc, ptype, user, verbose=False):
return doc.raised_by==user
+
+
+def update_issue(contact, method):
+ """Called when Contact is deleted"""
+ frappe.db.sql("""UPDATE `tabIssue` set contact='' where contact=%s""", contact.name)
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
|
frappe__erpnext-14127@e63e97f
|
frappe/erpnext
|
Python
| 14,127
|
Link Share Transfer to Account
|
Closes: #13415



|
2018-05-17T13:19:46Z
|
[Feature request]Link Share Transfer to Account
Can the Share Transfer doctype have two fields to show the accounts (Equity and Assets) related to the accounting entries? During a share transfer, corresponding accounting entries take place which affect the balance sheet. We can add the necessary accounts on this doctype and, on that basis, go ahead and make a JV. It should also be submittable because it is a transactional entry.
If transfer_type = "Issue" or "Purchase", show both fields: one with Equity account data and the other with Asset account data.
If transfer_type = "Transfer", show only the field with Equity account data.
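A small, hypothetical helper (not ERPNext code) spelling out the requested visibility rule; the field names mirror the ones added by this PR (`equity_or_liability_account`, `asset_account`):
```python
def visible_account_fields(transfer_type: str) -> list:
    """Which account fields should be shown for a given transfer type."""
    fields = ["equity_or_liability_account"]    # Equity/Liability side is always shown
    if transfer_type in ("Issue", "Purchase"):  # Asset side only for Issue/Purchase
        fields.append("asset_account")
    return fields


print(visible_account_fields("Transfer"))  # ['equity_or_liability_account']
print(visible_account_fields("Issue"))     # ['equity_or_liability_account', 'asset_account']
```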
|
[
{
"body": "Can Share transfer doctype have two fields to show the accounts(Equity and Assets) related data for the account entries. During a share transfer, corresponding accounting entries take place which affect the balance sheet. We can add the necessary accounts on this doctype on which basis we can go ahead and make a JV. It should also be submittable because it is a transactional entry\r\n\r\nIf transfer_type = \"Issue\" or \"Purchase\" show both fields one with Equity accounts related data and other for Assets accounts related data\r\n\r\nIf transfer_type = \"Transfer\" show only one field with Equity accounts related data",
"number": 13415,
"title": "[Feature request]Link Share Transfer to Account"
}
] |
967d04ed0df8783bf8f558ddbd957792c27ada47
|
{
"head_commit": "e63e97fa656366520d6093c4ca4f96f784c33983",
"head_commit_message": "minor changes",
"patch_to_review": "diff --git a/erpnext/accounts/doctype/share_transfer/share_transfer.js b/erpnext/accounts/doctype/share_transfer/share_transfer.js\nindex fbf12e518d75..f280c5ddb2d6 100644\n--- a/erpnext/accounts/doctype/share_transfer/share_transfer.js\n+++ b/erpnext/accounts/doctype/share_transfer/share_transfer.js\n@@ -16,6 +16,11 @@ frappe.ui.form.on('Share Transfer', {\n \t\t\t\t};\n \t\t\t};\n \t\t});\n+\t\tif (frm.doc.docstatus == 1) {\n+\t\t\tfrm.add_custom_button(__('Make Journal Entry'), function () {\n+\t\t\t\terpnext.share_transfer.make_jv(frm);\n+\t\t\t});\n+\t\t}\n \t},\n \tno_of_shares: (frm) => {\n \t\tif (frm.doc.rate != undefined || frm.doc.rate != null){\n@@ -26,6 +31,31 @@ frappe.ui.form.on('Share Transfer', {\n \t\tif (frm.doc.no_of_shares != undefined || frm.doc.no_of_shares != null){\n \t\t\terpnext.share_transfer.update_amount(frm);\n \t\t}\n+\t},\n+\tcompany: async function(frm) {\n+\t\tif (frm.doc.company) {\n+\t\t\tlet currency = (await frappe.db.get_value(\"Company\", frm.doc.company, \"default_currency\")).message.default_currency;\n+\t\t\tfrm.set_query(\"equity_or_liability_account\", function() {\n+\t\t\t\treturn {\n+\t\t\t\t\tfilters: {\n+\t\t\t\t\t\t\"is_group\":0,\n+\t\t\t\t\t\t\"root_type\": [\"in\",[\"Equity\",\"Liability\"]],\n+\t\t\t\t\t\t\"company\": frm.doc.company,\n+\t\t\t\t\t\t\"account_currency\": currency\n+\t\t\t\t\t}\n+\t\t\t\t};\n+\t\t\t});\n+\t\t\tfrm.set_query(\"asset_account\", function() {\n+\t\t\t\treturn {\n+\t\t\t\t\tfilters: {\n+\t\t\t\t\t\t\"is_group\":0,\n+\t\t\t\t\t\t\"root_type\":\"Asset\",\n+\t\t\t\t\t\t\"company\": frm.doc.company,\n+\t\t\t\t\t\t\"account_currency\": currency\n+\t\t\t\t\t}\n+\t\t\t\t};\n+\t\t\t});\n+\t\t}\n \t}\n });\n \n@@ -33,3 +63,42 @@ erpnext.share_transfer.update_amount = function(frm) {\n \tfrm.doc.amount = frm.doc.no_of_shares * frm.doc.rate;\n \tfrm.refresh_field(\"amount\");\n };\n+\n+erpnext.share_transfer.make_jv = function (frm) {\n+\tvar account,payment_account,credit_applicant_type,credit_applicant,\n+\t\tdebit_applicant_type,debit_applicant;\n+\n+\tif (frm.doc.transfer_type == \"Transfer\") {\n+\t\taccount = frm.doc.equity_or_liability_account;\n+\t\tpayment_account = frm.doc.equity_or_liability_account;\n+\t\tcredit_applicant_type = \"Shareholder\";\n+\t\tcredit_applicant = frm.doc.to_shareholder;\n+\t\tdebit_applicant_type = \"Shareholder\";\n+\t\tdebit_applicant = frm.doc.from_shareholder;\n+\t}\n+\telse {\n+\t\taccount =(frm.doc.transfer_type == \"Issue\") ? frm.doc.asset_account : frm.doc.equity_or_liability_account;\n+\t\tpayment_account = (frm.doc.transfer_type == \"Issue\") ? frm.doc.equity_or_liability_account : frm.doc.asset_account;\n+\t\tcredit_applicant_type = (frm.doc.transfer_type == \"Issue\") ? \"Shareholder\" :\"\";\n+\t\tcredit_applicant = (frm.doc.transfer_type == \"Issue\") ? frm.doc.to_shareholder :\"\";\n+\t\tdebit_applicant_type = (frm.doc.transfer_type == \"Purchase\") ? \"Shareholder\" :\"\";\n+\t\tdebit_applicant = (frm.doc.transfer_type == \"Purchase\") ? 
frm.doc.from_shareholder :\"\";\n+\t}\n+\tfrappe.call({\n+\t\targs: {\n+\t\t\t\"company\": frm.doc.company,\n+\t\t\t\"account\": account,\n+\t\t\t\"amount\": frm.doc.amount,\n+\t\t\t\"payment_account\": payment_account,\n+\t\t\t\"credit_applicant_type\": credit_applicant_type,\n+\t\t\t\"credit_applicant\": credit_applicant,\n+\t\t\t\"debit_applicant_type\": debit_applicant_type,\n+\t\t\t\"debit_applicant\": debit_applicant\n+\t\t},\n+\t\tmethod: \"erpnext.accounts.doctype.share_transfer.share_transfer.make_jv_entry\",\n+\t\tcallback: function (r) {\n+\t\t\tvar doc = frappe.model.sync(r.message)[0];\n+\t\t\tfrappe.set_route(\"Form\", doc.doctype, doc.name);\n+\t\t}\n+\t});\n+};\n\\ No newline at end of file\ndiff --git a/erpnext/accounts/doctype/share_transfer/share_transfer.json b/erpnext/accounts/doctype/share_transfer/share_transfer.json\nindex 9e6f49d6b5de..2f288c3c3227 100644\n--- a/erpnext/accounts/doctype/share_transfer/share_transfer.json\n+++ b/erpnext/accounts/doctype/share_transfer/share_transfer.json\n@@ -42,6 +42,7 @@\n \"reqd\": 1, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -71,6 +72,7 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -101,6 +103,7 @@\n \"reqd\": 1, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -130,6 +133,7 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -162,6 +166,7 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -194,6 +199,73 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n+ \"unique\": 0\n+ }, \n+ {\n+ \"allow_bulk_edit\": 0, \n+ \"allow_on_submit\": 0, \n+ \"bold\": 0, \n+ \"collapsible\": 0, \n+ \"columns\": 0, \n+ \"depends_on\": \"eval:doc.company\", \n+ \"fieldname\": \"equity_or_liability_account\", \n+ \"fieldtype\": \"Link\", \n+ \"hidden\": 0, \n+ \"ignore_user_permissions\": 0, \n+ \"ignore_xss_filter\": 0, \n+ \"in_filter\": 0, \n+ \"in_global_search\": 0, \n+ \"in_list_view\": 0, \n+ \"in_standard_filter\": 0, \n+ \"label\": \"Equity/Liability Account\", \n+ \"length\": 0, \n+ \"no_copy\": 0, \n+ \"options\": \"Account\", \n+ \"permlevel\": 0, \n+ \"precision\": \"\", \n+ \"print_hide\": 0, \n+ \"print_hide_if_no_value\": 0, \n+ \"read_only\": 0, \n+ \"remember_last_selected_value\": 0, \n+ \"report_hide\": 0, \n+ \"reqd\": 0, \n+ \"search_index\": 0, \n+ \"set_only_once\": 0, \n+ \"translatable\": 0, \n+ \"unique\": 0\n+ }, \n+ {\n+ \"allow_bulk_edit\": 0, \n+ \"allow_on_submit\": 0, \n+ \"bold\": 0, \n+ \"collapsible\": 0, \n+ \"columns\": 0, \n+ \"depends_on\": \"eval:(doc.transfer_type != 'Transfer') && (doc.company)\", \n+ \"fieldname\": \"asset_account\", \n+ \"fieldtype\": \"Link\", \n+ \"hidden\": 0, \n+ \"ignore_user_permissions\": 0, \n+ \"ignore_xss_filter\": 0, \n+ \"in_filter\": 0, \n+ \"in_global_search\": 0, \n+ \"in_list_view\": 0, \n+ \"in_standard_filter\": 0, \n+ \"label\": \"Asset Account\", \n+ \"length\": 0, \n+ \"no_copy\": 0, \n+ \"options\": \"Account\", \n+ \"permlevel\": 0, \n+ \"precision\": \"\", \n+ \"print_hide\": 0, \n+ \"print_hide_if_no_value\": 0, \n+ \"read_only\": 0, \n+ \"remember_last_selected_value\": 0, \n+ \"report_hide\": 0, \n+ \"reqd\": 0, \n+ \"search_index\": 0, \n+ \"set_only_once\": 0, \n+ \"translatable\": 0, 
\n \"unique\": 0\n }, \n {\n@@ -223,6 +295,7 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -255,6 +328,7 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -287,6 +361,7 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -316,6 +391,7 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -347,6 +423,7 @@\n \"reqd\": 1, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -378,6 +455,7 @@\n \"reqd\": 1, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -408,6 +486,7 @@\n \"reqd\": 1, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -437,6 +516,7 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -467,6 +547,7 @@\n \"reqd\": 1, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -498,6 +579,7 @@\n \"reqd\": 1, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -528,6 +610,7 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -557,6 +640,7 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -588,6 +672,7 @@\n \"reqd\": 1, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -617,6 +702,7 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -647,6 +733,38 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n+ \"unique\": 0\n+ }, \n+ {\n+ \"allow_bulk_edit\": 0, \n+ \"allow_on_submit\": 0, \n+ \"bold\": 0, \n+ \"collapsible\": 0, \n+ \"columns\": 0, \n+ \"fieldname\": \"amended_from\", \n+ \"fieldtype\": \"Link\", \n+ \"hidden\": 0, \n+ \"ignore_user_permissions\": 0, \n+ \"ignore_xss_filter\": 0, \n+ \"in_filter\": 0, \n+ \"in_global_search\": 0, \n+ \"in_list_view\": 0, \n+ \"in_standard_filter\": 0, \n+ \"label\": \"Amended From\", \n+ \"length\": 0, \n+ \"no_copy\": 1, \n+ \"options\": \"Share Transfer\", \n+ \"permlevel\": 0, \n+ \"print_hide\": 1, \n+ \"print_hide_if_no_value\": 0, \n+ \"read_only\": 1, \n+ \"remember_last_selected_value\": 0, \n+ \"report_hide\": 0, \n+ \"reqd\": 0, \n+ \"search_index\": 0, \n+ \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }\n ], \n@@ -656,11 +774,11 @@\n \"idx\": 0, \n \"image_view\": 0, \n \"in_create\": 0, \n- \"is_submittable\": 0, \n+ \"is_submittable\": 1, \n \"issingle\": 0, \n \"istable\": 0, \n \"max_attachments\": 0, \n- \"modified\": \"2018-01-23 16:12:54.776896\", \n+ \"modified\": \"2018-05-17 15:25:36.429433\", \n \"modified_by\": \"Administrator\", \n \"module\": \"Accounts\", \n \"name\": \"Share Transfer\", \n@@ -668,7 +786,7 @@\n \"owner\": \"Administrator\", \n \"permissions\": [\n {\n- \"amend\": 0, \n+ \"amend\": 1, \n \"apply_user_permissions\": 0, \n \"cancel\": 0, \n \"create\": 1, \n@@ -684,7 +802,7 @@\n \"role\": \"System Manager\", \n \"set_user_permissions\": 0, \n \"share\": 1, \n- 
\"submit\": 0, \n+ \"submit\": 1, \n \"write\": 1\n }\n ], \ndiff --git a/erpnext/accounts/doctype/share_transfer/share_transfer.py b/erpnext/accounts/doctype/share_transfer/share_transfer.py\nindex 2a2d9ff0f104..50ce9f2bfd8b 100644\n--- a/erpnext/accounts/doctype/share_transfer/share_transfer.py\n+++ b/erpnext/accounts/doctype/share_transfer/share_transfer.py\n@@ -8,14 +8,15 @@\n from frappe.model.document import Document\n from frappe.model.naming import make_autoname\n from frappe.exceptions import ValidationError\n+from frappe.utils import nowdate\n \n class ShareDontExists(ValidationError): pass\n \n class ShareTransfer(Document):\n-\tdef before_save(self):\n+\tdef before_submit(self):\n \t\tif self.transfer_type == 'Issue':\n-\t\t\tcompany_doc = self.get_shareholder_doc(self.company)\n-\t\t\tcompany_doc.append('share_balance', {\n+\t\t\tshareholder = self.get_shareholder_doc(self.company)\n+\t\t\tshareholder.append('share_balance', {\n \t\t\t\t'share_type': self.share_type,\n \t\t\t\t'from_no': self.from_no,\n \t\t\t\t'to_no': self.to_no,\n@@ -25,7 +26,7 @@ def before_save(self):\n \t\t\t\t'is_company': 1,\n \t\t\t\t'current_state': 'Issued'\n \t\t\t})\n-\t\t\tcompany_doc.save()\n+\t\t\tshareholder.save()\n \n \t\t\tdoc = frappe.get_doc('Shareholder', self.to_shareholder)\n \t\t\tdoc.append('share_balance', {\n@@ -60,13 +61,13 @@ def validate(self):\n \t\tself.folio_no_validation()\n \t\tif self.transfer_type == 'Issue':\n \t\t\tif not self.get_shareholder_doc(self.company):\n-\t\t\t\tcompany_doc = frappe.get_doc({\n+\t\t\t\tshareholder = frappe.get_doc({\n \t\t\t\t\t'doctype': 'Shareholder',\n \t\t\t\t\t'title': self.company,\n \t\t\t\t\t'company': self.company,\n \t\t\t\t\t'is_company': 1\n \t\t\t\t})\n-\t\t\t\tcompany_doc.insert()\n+\t\t\t\tshareholder.insert()\n \t\t\t# validate share doesnt exist in company\n \t\t\tret_val = self.share_exists(self.get_shareholder_doc(self.company).name)\n \t\t\tif ret_val != False:\n@@ -275,3 +276,27 @@ def get_shareholder_doc(self, shareholder):\n \t\t\treturn frappe.get_doc('Shareholder', doc[0]['name'])\n \t\telse: #It will necessarily by 0 indicating it doesn't exist\n \t\t\treturn False\n+\[email protected]()\n+def make_jv_entry( company, account, amount, payment_account,\\\n+\tcredit_applicant_type, credit_applicant, debit_applicant_type, debit_applicant):\n+\tjournal_entry = frappe.new_doc('Journal Entry')\n+\tjournal_entry.voucher_type = 'Journal Entry'\n+\tjournal_entry.company = company\n+\tjournal_entry.posting_date = nowdate()\n+\taccount_amt_list = []\n+\n+\taccount_amt_list.append({\n+\t\t\"account\": account,\n+\t\t\"debit_in_account_currency\": amount,\n+\t\t\"party_type\": debit_applicant_type,\n+\t\t\"party\": debit_applicant,\n+\t\t})\n+\taccount_amt_list.append({\n+\t\t\"account\": payment_account,\n+\t\t\"credit_in_account_currency\": amount,\n+\t\t\"party_type\": credit_applicant_type,\n+\t\t\"party\": credit_applicant,\n+\t\t})\n+\tjournal_entry.set(\"accounts\", account_amt_list)\n+\treturn journal_entry.as_dict()\n\\ No newline at end of file\n"
}
|
[
{
"diff_hunk": "@@ -26,10 +31,74 @@ frappe.ui.form.on('Share Transfer', {\n \t\tif (frm.doc.no_of_shares != undefined || frm.doc.no_of_shares != null){\n \t\t\terpnext.share_transfer.update_amount(frm);\n \t\t}\n+\t},\n+\tcompany: async function(frm) {\n+\t\tif (frm.doc.company) {\n+\t\t\tlet currency = (await frappe.db.get_value(\"Company\", frm.doc.company, \"default_currency\")).message.default_currency;\n+\t\t\tfrm.set_query(\"equity_or_liability_account\", function() {\n+\t\t\t\treturn {\n+\t\t\t\t\tfilters: {\n+\t\t\t\t\t\t\"is_group\":0,\n+\t\t\t\t\t\t\"root_type\": [\"in\",[\"Equity\",\"Liability\"]],\n+\t\t\t\t\t\t\"company\": frm.doc.company,\n+\t\t\t\t\t\t\"account_currency\": currency\n+\t\t\t\t\t}\n+\t\t\t\t};\n+\t\t\t});\n+\t\t\tfrm.set_query(\"asset_account\", function() {\n+\t\t\t\treturn {\n+\t\t\t\t\tfilters: {\n+\t\t\t\t\t\t\"is_group\":0,\n+\t\t\t\t\t\t\"root_type\":\"Asset\",\n+\t\t\t\t\t\t\"company\": frm.doc.company,\n+\t\t\t\t\t\t\"account_currency\": currency\n+\t\t\t\t\t}\n+\t\t\t\t};\n+\t\t\t});\n+\t\t}\n \t}\n });\n \n erpnext.share_transfer.update_amount = function(frm) {\n \tfrm.doc.amount = frm.doc.no_of_shares * frm.doc.rate;\n \tfrm.refresh_field(\"amount\");\n };\n+\n+erpnext.share_transfer.make_jv = function (frm) {\n+\tvar account,payment_account,credit_applicant_type,credit_applicant,\n+\t\tdebit_applicant_type,debit_applicant;\n+\n+\tif (frm.doc.transfer_type == \"Transfer\") {\n+\t\taccount = frm.doc.equity_or_liability_account;\n+\t\tpayment_account = frm.doc.equity_or_liability_account;\n+\t\tcredit_applicant_type = \"Shareholder\";\n+\t\tcredit_applicant = frm.doc.to_shareholder;\n+\t\tdebit_applicant_type = \"Shareholder\";\n+\t\tdebit_applicant = frm.doc.from_shareholder;\n+\t}\n+\telse {\n+\t\taccount =(frm.doc.transfer_type == \"Issue\") ? frm.doc.asset_account : frm.doc.equity_or_liability_account;",
"line": null,
"original_line": 80,
"original_start_line": null,
"path": "erpnext/accounts/doctype/share_transfer/share_transfer.js",
"start_line": null,
"text": "@user1:\nAs this conditional statement `(frm.doc.transfer_type == \"Issue\")` is repeated 6 times, it will be always better writing a `if` condition instead for better readability."
}
] |
a49c9f25601afdc05ec8acbf7b5188a3b6691c68
|
diff --git a/erpnext/accounts/doctype/share_transfer/share_transfer.js b/erpnext/accounts/doctype/share_transfer/share_transfer.js
index fbf12e518d75..af23b2656de3 100644
--- a/erpnext/accounts/doctype/share_transfer/share_transfer.js
+++ b/erpnext/accounts/doctype/share_transfer/share_transfer.js
@@ -16,6 +16,11 @@ frappe.ui.form.on('Share Transfer', {
};
};
});
+ if (frm.doc.docstatus == 1) {
+ frm.add_custom_button(__('Make Journal Entry'), function () {
+ erpnext.share_transfer.make_jv(frm);
+ });
+ }
},
no_of_shares: (frm) => {
if (frm.doc.rate != undefined || frm.doc.rate != null){
@@ -26,6 +31,31 @@ frappe.ui.form.on('Share Transfer', {
if (frm.doc.no_of_shares != undefined || frm.doc.no_of_shares != null){
erpnext.share_transfer.update_amount(frm);
}
+ },
+ company: async function(frm) {
+ if (frm.doc.company) {
+ let currency = (await frappe.db.get_value("Company", frm.doc.company, "default_currency")).message.default_currency;
+ frm.set_query("equity_or_liability_account", function() {
+ return {
+ filters: {
+ "is_group":0,
+ "root_type": ["in",["Equity","Liability"]],
+ "company": frm.doc.company,
+ "account_currency": currency
+ }
+ };
+ });
+ frm.set_query("asset_account", function() {
+ return {
+ filters: {
+ "is_group":0,
+ "root_type":"Asset",
+ "company": frm.doc.company,
+ "account_currency": currency
+ }
+ };
+ });
+ }
}
});
@@ -33,3 +63,50 @@ erpnext.share_transfer.update_amount = function(frm) {
frm.doc.amount = frm.doc.no_of_shares * frm.doc.rate;
frm.refresh_field("amount");
};
+
+erpnext.share_transfer.make_jv = function (frm) {
+ var account, payment_account, credit_applicant_type, credit_applicant,
+ debit_applicant_type, debit_applicant;
+
+ if (frm.doc.transfer_type == "Transfer") {
+ account = frm.doc.equity_or_liability_account;
+ payment_account = frm.doc.equity_or_liability_account;
+ credit_applicant_type = "Shareholder";
+ credit_applicant = frm.doc.to_shareholder;
+ debit_applicant_type = "Shareholder";
+ debit_applicant = frm.doc.from_shareholder;
+ }
+ else if (frm.doc.transfer_type == "Issue") {
+ account = frm.doc.asset_account;
+ payment_account = frm.doc.equity_or_liability_account;
+ credit_applicant_type = "Shareholder";
+ credit_applicant = frm.doc.to_shareholder;
+ debit_applicant_type = "";
+ debit_applicant = "";
+ }
+ else {
+ account = frm.doc.equity_or_liability_account;
+ payment_account = frm.doc.asset_account;
+ credit_applicant_type = "";
+ credit_applicant = "";
+ debit_applicant_type = "Shareholder";
+ debit_applicant = frm.doc.from_shareholder;
+ }
+ frappe.call({
+ args: {
+ "company": frm.doc.company,
+ "account": account,
+ "amount": frm.doc.amount,
+ "payment_account": payment_account,
+ "credit_applicant_type": credit_applicant_type,
+ "credit_applicant": credit_applicant,
+ "debit_applicant_type": debit_applicant_type,
+ "debit_applicant": debit_applicant
+ },
+ method: "erpnext.accounts.doctype.share_transfer.share_transfer.make_jv_entry",
+ callback: function (r) {
+ var doc = frappe.model.sync(r.message)[0];
+ frappe.set_route("Form", doc.doctype, doc.name);
+ }
+ });
+};
\ No newline at end of file
diff --git a/erpnext/accounts/doctype/share_transfer/share_transfer.json b/erpnext/accounts/doctype/share_transfer/share_transfer.json
index 9e6f49d6b5de..2f288c3c3227 100644
--- a/erpnext/accounts/doctype/share_transfer/share_transfer.json
+++ b/erpnext/accounts/doctype/share_transfer/share_transfer.json
@@ -42,6 +42,7 @@
"reqd": 1,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -71,6 +72,7 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -101,6 +103,7 @@
"reqd": 1,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -130,6 +133,7 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -162,6 +166,7 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -194,6 +199,73 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
+ "unique": 0
+ },
+ {
+ "allow_bulk_edit": 0,
+ "allow_on_submit": 0,
+ "bold": 0,
+ "collapsible": 0,
+ "columns": 0,
+ "depends_on": "eval:doc.company",
+ "fieldname": "equity_or_liability_account",
+ "fieldtype": "Link",
+ "hidden": 0,
+ "ignore_user_permissions": 0,
+ "ignore_xss_filter": 0,
+ "in_filter": 0,
+ "in_global_search": 0,
+ "in_list_view": 0,
+ "in_standard_filter": 0,
+ "label": "Equity/Liability Account",
+ "length": 0,
+ "no_copy": 0,
+ "options": "Account",
+ "permlevel": 0,
+ "precision": "",
+ "print_hide": 0,
+ "print_hide_if_no_value": 0,
+ "read_only": 0,
+ "remember_last_selected_value": 0,
+ "report_hide": 0,
+ "reqd": 0,
+ "search_index": 0,
+ "set_only_once": 0,
+ "translatable": 0,
+ "unique": 0
+ },
+ {
+ "allow_bulk_edit": 0,
+ "allow_on_submit": 0,
+ "bold": 0,
+ "collapsible": 0,
+ "columns": 0,
+ "depends_on": "eval:(doc.transfer_type != 'Transfer') && (doc.company)",
+ "fieldname": "asset_account",
+ "fieldtype": "Link",
+ "hidden": 0,
+ "ignore_user_permissions": 0,
+ "ignore_xss_filter": 0,
+ "in_filter": 0,
+ "in_global_search": 0,
+ "in_list_view": 0,
+ "in_standard_filter": 0,
+ "label": "Asset Account",
+ "length": 0,
+ "no_copy": 0,
+ "options": "Account",
+ "permlevel": 0,
+ "precision": "",
+ "print_hide": 0,
+ "print_hide_if_no_value": 0,
+ "read_only": 0,
+ "remember_last_selected_value": 0,
+ "report_hide": 0,
+ "reqd": 0,
+ "search_index": 0,
+ "set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -223,6 +295,7 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -255,6 +328,7 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -287,6 +361,7 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -316,6 +391,7 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -347,6 +423,7 @@
"reqd": 1,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -378,6 +455,7 @@
"reqd": 1,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -408,6 +486,7 @@
"reqd": 1,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -437,6 +516,7 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -467,6 +547,7 @@
"reqd": 1,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -498,6 +579,7 @@
"reqd": 1,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -528,6 +610,7 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -557,6 +640,7 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -588,6 +672,7 @@
"reqd": 1,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -617,6 +702,7 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -647,6 +733,38 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
+ "unique": 0
+ },
+ {
+ "allow_bulk_edit": 0,
+ "allow_on_submit": 0,
+ "bold": 0,
+ "collapsible": 0,
+ "columns": 0,
+ "fieldname": "amended_from",
+ "fieldtype": "Link",
+ "hidden": 0,
+ "ignore_user_permissions": 0,
+ "ignore_xss_filter": 0,
+ "in_filter": 0,
+ "in_global_search": 0,
+ "in_list_view": 0,
+ "in_standard_filter": 0,
+ "label": "Amended From",
+ "length": 0,
+ "no_copy": 1,
+ "options": "Share Transfer",
+ "permlevel": 0,
+ "print_hide": 1,
+ "print_hide_if_no_value": 0,
+ "read_only": 1,
+ "remember_last_selected_value": 0,
+ "report_hide": 0,
+ "reqd": 0,
+ "search_index": 0,
+ "set_only_once": 0,
+ "translatable": 0,
"unique": 0
}
],
@@ -656,11 +774,11 @@
"idx": 0,
"image_view": 0,
"in_create": 0,
- "is_submittable": 0,
+ "is_submittable": 1,
"issingle": 0,
"istable": 0,
"max_attachments": 0,
- "modified": "2018-01-23 16:12:54.776896",
+ "modified": "2018-05-17 15:25:36.429433",
"modified_by": "Administrator",
"module": "Accounts",
"name": "Share Transfer",
@@ -668,7 +786,7 @@
"owner": "Administrator",
"permissions": [
{
- "amend": 0,
+ "amend": 1,
"apply_user_permissions": 0,
"cancel": 0,
"create": 1,
@@ -684,7 +802,7 @@
"role": "System Manager",
"set_user_permissions": 0,
"share": 1,
- "submit": 0,
+ "submit": 1,
"write": 1
}
],
diff --git a/erpnext/accounts/doctype/share_transfer/share_transfer.py b/erpnext/accounts/doctype/share_transfer/share_transfer.py
index 2a2d9ff0f104..50ce9f2bfd8b 100644
--- a/erpnext/accounts/doctype/share_transfer/share_transfer.py
+++ b/erpnext/accounts/doctype/share_transfer/share_transfer.py
@@ -8,14 +8,15 @@
from frappe.model.document import Document
from frappe.model.naming import make_autoname
from frappe.exceptions import ValidationError
+from frappe.utils import nowdate
class ShareDontExists(ValidationError): pass
class ShareTransfer(Document):
- def before_save(self):
+ def before_submit(self):
if self.transfer_type == 'Issue':
- company_doc = self.get_shareholder_doc(self.company)
- company_doc.append('share_balance', {
+ shareholder = self.get_shareholder_doc(self.company)
+ shareholder.append('share_balance', {
'share_type': self.share_type,
'from_no': self.from_no,
'to_no': self.to_no,
@@ -25,7 +26,7 @@ def before_save(self):
'is_company': 1,
'current_state': 'Issued'
})
- company_doc.save()
+ shareholder.save()
doc = frappe.get_doc('Shareholder', self.to_shareholder)
doc.append('share_balance', {
@@ -60,13 +61,13 @@ def validate(self):
self.folio_no_validation()
if self.transfer_type == 'Issue':
if not self.get_shareholder_doc(self.company):
- company_doc = frappe.get_doc({
+ shareholder = frappe.get_doc({
'doctype': 'Shareholder',
'title': self.company,
'company': self.company,
'is_company': 1
})
- company_doc.insert()
+ shareholder.insert()
# validate share doesnt exist in company
ret_val = self.share_exists(self.get_shareholder_doc(self.company).name)
if ret_val != False:
@@ -275,3 +276,27 @@ def get_shareholder_doc(self, shareholder):
return frappe.get_doc('Shareholder', doc[0]['name'])
else: #It will necessarily by 0 indicating it doesn't exist
return False
+
+@frappe.whitelist()
+def make_jv_entry( company, account, amount, payment_account,\
+ credit_applicant_type, credit_applicant, debit_applicant_type, debit_applicant):
+ journal_entry = frappe.new_doc('Journal Entry')
+ journal_entry.voucher_type = 'Journal Entry'
+ journal_entry.company = company
+ journal_entry.posting_date = nowdate()
+ account_amt_list = []
+
+ account_amt_list.append({
+ "account": account,
+ "debit_in_account_currency": amount,
+ "party_type": debit_applicant_type,
+ "party": debit_applicant,
+ })
+ account_amt_list.append({
+ "account": payment_account,
+ "credit_in_account_currency": amount,
+ "party_type": credit_applicant_type,
+ "party": credit_applicant,
+ })
+ journal_entry.set("accounts", account_amt_list)
+ return journal_entry.as_dict()
\ No newline at end of file
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
}
|
|
frappe__erpnext-14249@96b5b5a
|
frappe/erpnext
|
Python
| 14,249
|
Stocking out items with expired batches
|

Closes: #12843
|
2018-05-28T09:07:51Z
|
Stocking Out Expired Batched Items
As per the current design, once an item is expired, no stock transaction is allowed for it. We have to explicitly provide an option for expired batched items to be stocked out, so that the expense can be booked for them.
Perhaps allowing selection of only expired batches in the Stock Entry would help?
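As a rough sketch (mirroring the names and query shape of get_expired_batch_items() in the merged patch of this record), "selecting expired batches" boils down to summing the remaining stock of every batch past its expiry date, grouped by warehouse, item and batch, so the rows can be added as source items of a "Material Issue" Stock Entry:
```python
# Sketch based on the merged patch in this record; not an independent implementation.
import frappe
from frappe.utils import nowdate

@frappe.whitelist()
def get_expired_batch_items():
	# one row per (warehouse, item, batch) with the quantity still on hand
	return frappe.db.sql("""select b.item, sum(sle.actual_qty) as qty, sle.batch_no, sle.warehouse, sle.stock_uom
		from `tabBatch` b, `tabStock Ledger Entry` sle
		where b.expiry_date <= %s
		and b.expiry_date is not NULL
		and b.batch_id = sle.batch_no
		group by sle.warehouse, sle.item_code, sle.batch_no""", nowdate(), as_dict=1)
```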
|
[
{
"body": "As per the current design, once item is expired, then no stock transaction is allowed for it. We will have to explicitly provide an option for the expired batched items to be stocked-out, so that expense could be booked for them. \r\n\r\nMay be allow selection of Expired batches only in the Stock Entry should help?",
"number": 12843,
"title": "Stocking Out Expired Batched Items"
}
] |
30c88fe1a5d0e727e244e362fd2de7738fd6bf49
|
{
"head_commit": "96b5b5af7bbfd838bce8873dcca0712ee9af9457",
"head_commit_message": "Add the fetched item details to stock entry details",
"patch_to_review": "diff --git a/erpnext/stock/doctype/stock_entry/stock_entry.js b/erpnext/stock/doctype/stock_entry/stock_entry.js\nindex 7452da8ea1b1..be66388c9d44 100644\n--- a/erpnext/stock/doctype/stock_entry/stock_entry.js\n+++ b/erpnext/stock/doctype/stock_entry/stock_entry.js\n@@ -156,6 +156,28 @@ frappe.ui.form.on('Stock Entry', {\n \t\t\t\t})\n \t\t\t}, __(\"Get items from\"));\n \t\t}\n+\t\tif (frm.doc.docstatus===0 && frm.doc.purpose == \"Material Issue\") {\n+\t\t\tfrm.add_custom_button(__('Expired Batches'), function() {\n+\t\t\t\tfrappe.call({\n+\t\t\t\t\tmethod: \"erpnext.stock.doctype.stock_entry.stock_entry.get_expired_batch_items\",\n+\t\t\t\t\tcallback: function(r) {\n+\t\t\t\t\t\tif (!r.exc && r.message) {\n+\t\t\t\t\t\t\tfrm.set_value(\"items\", []);\n+\t\t\t\t\t\t\tr.message.forEach(function(element) {\n+\t\t\t\t\t\t\t\tlet d = frm.add_child(\"items\");\n+\t\t\t\t\t\t\t\td.item_code = element.item;\n+\t\t\t\t\t\t\t\td.s_warehouse = element.warehouse;\n+\t\t\t\t\t\t\t\td.qty = element.qty;\n+\t\t\t\t\t\t\t\td.uom = element.stock_uom;\n+\t\t\t\t\t\t\t\td.conversion_factor = 1;\n+\t\t\t\t\t\t\t\td.transfer_qty = element.qty;\n+\t\t\t\t\t\t\t\tfrm.refresh_fields();\n+\t\t\t\t\t\t\t});\n+\t\t\t\t\t\t}\n+\t\t\t\t\t}\n+\t\t\t\t});\n+\t\t\t}, __(\"Get items from\"));\n+\t\t}\n \n \t\tif (frm.doc.company) {\n \t\t\tfrm.trigger(\"toggle_display_account_head\");\ndiff --git a/erpnext/stock/doctype/stock_entry/stock_entry.py b/erpnext/stock/doctype/stock_entry/stock_entry.py\nindex 38628548ea01..0f6f432b04c9 100644\n--- a/erpnext/stock/doctype/stock_entry/stock_entry.py\n+++ b/erpnext/stock/doctype/stock_entry/stock_entry.py\n@@ -1153,6 +1153,15 @@ def get_uom_details(item_code, uom, qty):\n \t\t}\n \treturn ret\n \[email protected]()\n+def get_expired_batch_items():\n+\treturn frappe.db.sql(\"\"\"select b.item, sum(sle.actual_qty) as qty, sle.warehouse, sle.stock_uom\\\n+\tfrom `tabBatch` b, `tabStock Ledger Entry` sle\n+\twhere b.expiry_date <= %s\n+\tand b.expiry_date is not NULL\n+\tand b.batch_id = sle.batch_no\n+\tgroup by sle.warehouse\"\"\",(nowdate()), as_dict=1)\n+\n @frappe.whitelist()\n def get_warehouse_details(args):\n \tif isinstance(args, string_types):\n"
}
|
[
{
"diff_hunk": "@@ -1153,6 +1153,15 @@ def get_uom_details(item_code, uom, qty):\n \t\t}\n \treturn ret\n \[email protected]()\n+def get_expired_batch_items():\n+\treturn frappe.db.sql(\"\"\"select b.item, sum(sle.actual_qty) as qty, sle.warehouse, sle.stock_uom\\",
"line": null,
"original_line": 1158,
"original_start_line": null,
"path": "erpnext/stock/doctype/stock_entry/stock_entry.py",
"start_line": null,
"text": "@user1:\nselect batch_no as well, it is required to set in stock_entry"
},
{
"diff_hunk": "@@ -1153,6 +1153,15 @@ def get_uom_details(item_code, uom, qty):\n \t\t}\n \treturn ret\n \[email protected]()\n+def get_expired_batch_items():\n+\treturn frappe.db.sql(\"\"\"select b.item, sum(sle.actual_qty) as qty, sle.warehouse, sle.stock_uom\\\n+\tfrom `tabBatch` b, `tabStock Ledger Entry` sle\n+\twhere b.expiry_date <= %s\n+\tand b.expiry_date is not NULL\n+\tand b.batch_id = sle.batch_no\n+\tgroup by sle.warehouse\"\"\",(nowdate()), as_dict=1)",
"line": null,
"original_line": 1163,
"original_start_line": null,
"path": "erpnext/stock/doctype/stock_entry/stock_entry.py",
"start_line": null,
"text": "@user1:\ngroup by sle.warehouse, sle.item_code, sle.batch_no"
}
] |
857ef2488b03faa2c3ff7a2ee7660256039600aa
|
diff --git a/erpnext/stock/doctype/stock_entry/stock_entry.js b/erpnext/stock/doctype/stock_entry/stock_entry.js
index 7452da8ea1b1..d9e504bc12c2 100644
--- a/erpnext/stock/doctype/stock_entry/stock_entry.js
+++ b/erpnext/stock/doctype/stock_entry/stock_entry.js
@@ -156,6 +156,29 @@ frappe.ui.form.on('Stock Entry', {
})
}, __("Get items from"));
}
+ if (frm.doc.docstatus===0 && frm.doc.purpose == "Material Issue") {
+ frm.add_custom_button(__('Expired Batches'), function() {
+ frappe.call({
+ method: "erpnext.stock.doctype.stock_entry.stock_entry.get_expired_batch_items",
+ callback: function(r) {
+ if (!r.exc && r.message) {
+ frm.set_value("items", []);
+ r.message.forEach(function(element) {
+ let d = frm.add_child("items");
+ d.item_code = element.item;
+ d.s_warehouse = element.warehouse;
+ d.qty = element.qty;
+ d.uom = element.stock_uom;
+ d.conversion_factor = 1;
+ d.batch_no = element.batch_no;
+ d.transfer_qty = element.qty;
+ frm.refresh_fields();
+ });
+ }
+ }
+ });
+ }, __("Get items from"));
+ }
if (frm.doc.company) {
frm.trigger("toggle_display_account_head");
diff --git a/erpnext/stock/doctype/stock_entry/stock_entry.py b/erpnext/stock/doctype/stock_entry/stock_entry.py
index 38628548ea01..80d0afe0befe 100644
--- a/erpnext/stock/doctype/stock_entry/stock_entry.py
+++ b/erpnext/stock/doctype/stock_entry/stock_entry.py
@@ -1153,6 +1153,15 @@ def get_uom_details(item_code, uom, qty):
}
return ret
+@frappe.whitelist()
+def get_expired_batch_items():
+ return frappe.db.sql("""select b.item, sum(sle.actual_qty) as qty, sle.batch_no, sle.warehouse, sle.stock_uom\
+ from `tabBatch` b, `tabStock Ledger Entry` sle
+ where b.expiry_date <= %s
+ and b.expiry_date is not NULL
+ and b.batch_id = sle.batch_no
+ group by sle.warehouse, sle.item_code, sle.batch_no""",(nowdate()), as_dict=1)
+
@frappe.whitelist()
def get_warehouse_details(args):
if isinstance(args, string_types):
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
}
|
|
frappe__erpnext-13263@28c98a5
|
frappe/erpnext
|
Python
| 13,263
|
[Feature] Advance Account Balance in Employee Advance
|
fixes #13054

|
2018-03-10T11:48:12Z
|
Advance Account Balance in Employee Advance
Hi,
Please add a field in Employee Advance to show the balance on the Employee's Advance Account. This is information that the approver usually needs to see before deciding whether to approve.
Thanks
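As a rough sketch (the whitelisted helper in the merged patch of this record implements essentially this), the balance behind such a field can be computed on the server as the unpaid portion of the employee's submitted advances up to the posting date:
```python
# Sketch mirroring get_due_advance_amount() from the merged patch in this record.
import frappe

@frappe.whitelist()
def get_due_advance_amount(employee, posting_date):
	# submitted Employee Advance documents dated on or before the posting date
	advances = frappe.get_all("Employee Advance",
		filters={"employee": employee, "docstatus": 1, "posting_date": ("<=", posting_date)},
		fields=["advance_amount", "paid_amount"])
	# outstanding balance = advance amount minus whatever has already been paid back
	return sum(adv.advance_amount - adv.paid_amount for adv in advances)
```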
|
[
{
"body": "Hi,\r\n\r\nPlease add a field in Employee Advance to show the balance on the Employee's Advance Account. This is information that the approver usually needs to see before making a decision to approve or not\r\n\r\nThanks",
"number": 13054,
"title": "Advance Account Balance in Employee Advance"
}
] |
fd75d554d20921c0c5b1a213f1f82d61a1bb4333
|
{
"head_commit": "28c98a52476181ffb82795bd6166ff22c0bb38d5",
"head_commit_message": "fixes codacy",
"patch_to_review": "diff --git a/erpnext/hr/doctype/employee_advance/employee_advance.js b/erpnext/hr/doctype/employee_advance/employee_advance.js\nindex b6dd7eeb3f79..8ae24d3e4fa4 100644\n--- a/erpnext/hr/doctype/employee_advance/employee_advance.js\n+++ b/erpnext/hr/doctype/employee_advance/employee_advance.js\n@@ -81,5 +81,16 @@ frappe.ui.form.on('Employee Advance', {\n \t\t\t\tfrappe.set_route(\"Form\", doclist[0].doctype, doclist[0].name);\n \t\t\t}\n \t\t});\n+\t},\n+\n+\temployee: function (frm) {\n+\t\treturn frappe.call({\n+\t\t\tmethod: \"get_due_advance_amount\",\n+\t\t\tdoc: cur_frm.doc,\n+\t\t\tcallback: function(r) {\n+\t\t\t\tfrm.set_value(\"due_advance_amount\",r.message);\n+\t\t\t\trefresh_field(\"due_advance_amount\");\n+\t\t\t}\n+\t\t});\n \t}\n-});\n+});\n\\ No newline at end of file\ndiff --git a/erpnext/hr/doctype/employee_advance/employee_advance.json b/erpnext/hr/doctype/employee_advance/employee_advance.json\nindex 79e49b0f6149..a919d7380a88 100644\n--- a/erpnext/hr/doctype/employee_advance/employee_advance.json\n+++ b/erpnext/hr/doctype/employee_advance/employee_advance.json\n@@ -42,6 +42,7 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -73,6 +74,7 @@\n \"reqd\": 1, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -104,6 +106,7 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -133,6 +136,7 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -165,6 +169,7 @@\n \"reqd\": 1, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -194,6 +199,7 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -224,6 +230,7 @@\n \"reqd\": 1, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -253,6 +260,7 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -284,6 +292,7 @@\n \"reqd\": 1, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -315,6 +324,40 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n+ \"unique\": 0\n+ }, \n+ {\n+ \"allow_bulk_edit\": 0, \n+ \"allow_on_submit\": 0, \n+ \"bold\": 0, \n+ \"collapsible\": 0, \n+ \"columns\": 0, \n+ \"depends_on\": \"eval:cur_frm.doc.employee\", \n+ \"fieldname\": \"due_advance_amount\", \n+ \"fieldtype\": \"Currency\", \n+ \"hidden\": 0, \n+ \"ignore_user_permissions\": 0, \n+ \"ignore_xss_filter\": 0, \n+ \"in_filter\": 0, \n+ \"in_global_search\": 0, \n+ \"in_list_view\": 0, \n+ \"in_standard_filter\": 0, \n+ \"label\": \"Due Advance Amount\", \n+ \"length\": 0, \n+ \"no_copy\": 0, \n+ \"options\": \"Company:company:default_currency\", \n+ \"permlevel\": 0, \n+ \"precision\": \"\", \n+ \"print_hide\": 0, \n+ \"print_hide_if_no_value\": 0, \n+ \"read_only\": 1, \n+ \"remember_last_selected_value\": 0, \n+ \"report_hide\": 0, \n+ \"reqd\": 0, \n+ \"search_index\": 0, \n+ \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -346,6 +389,7 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -375,6 +419,7 @@\n \"reqd\": 0, \n 
\"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -406,6 +451,7 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -437,6 +483,7 @@\n \"reqd\": 1, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -467,6 +514,7 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -496,6 +544,7 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -527,6 +576,7 @@\n \"reqd\": 1, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }, \n {\n@@ -558,6 +608,7 @@\n \"reqd\": 0, \n \"search_index\": 0, \n \"set_only_once\": 0, \n+ \"translatable\": 0, \n \"unique\": 0\n }\n ], \n@@ -571,8 +622,8 @@\n \"issingle\": 0, \n \"istable\": 0, \n \"max_attachments\": 0, \n- \"modified\": \"2017-12-14 17:53:11.084810\", \n- \"modified_by\": \"[email protected]\", \n+ \"modified\": \"2018-03-10 17:09:35.969874\", \n+ \"modified_by\": \"Administrator\", \n \"module\": \"HR\", \n \"name\": \"Employee Advance\", \n \"name_case\": \"\", \ndiff --git a/erpnext/hr/doctype/employee_advance/employee_advance.py b/erpnext/hr/doctype/employee_advance/employee_advance.py\nindex 28547fb28599..03365cab0e56 100644\n--- a/erpnext/hr/doctype/employee_advance/employee_advance.py\n+++ b/erpnext/hr/doctype/employee_advance/employee_advance.py\n@@ -53,6 +53,10 @@ def set_total_advance_paid(self):\n \t\tself.set_status()\n \t\tfrappe.db.set_value(\"Employee Advance\", self.name , \"status\", self.status)\n \n+\tdef get_due_advance_amount(self):\n+\t\temployee_due_amount = frappe.get_all(\"Employee Advance\",filters={\"employee\":self.employee,\"docstatus\":1,\"posting_date\":(\"<=\",self.posting_date)},fields=[\"advance_amount\",\"paid_amount\"])\n+\t\treturn sum([(emp.advance_amount -emp.paid_amount) for emp in employee_due_amount])\n+\n \tdef update_claimed_amount(self):\n \t\tclaimed_amount = frappe.db.sql(\"\"\"\n \t\t\tselect sum(ifnull(allocated_amount, 0))\n"
}
|
[
{
"diff_hunk": "@@ -53,6 +53,10 @@ def set_total_advance_paid(self):\n \t\tself.set_status()\n \t\tfrappe.db.set_value(\"Employee Advance\", self.name , \"status\", self.status)\n \n+\tdef get_due_advance_amount(self):",
"line": null,
"original_line": 56,
"original_start_line": null,
"path": "erpnext/hr/doctype/employee_advance/employee_advance.py",
"start_line": null,
"text": "@user1:\nMake it a whitelisted function. If we write it inside the class, then we need to pass the entire doc to the server, which is unnecessary. We only need employee and posting date."
}
] |
96b2d09443706d5be90ac2eab854ade196eca468
|
diff --git a/erpnext/hr/doctype/employee_advance/employee_advance.js b/erpnext/hr/doctype/employee_advance/employee_advance.js
index b6dd7eeb3f79..a9407be435e5 100644
--- a/erpnext/hr/doctype/employee_advance/employee_advance.js
+++ b/erpnext/hr/doctype/employee_advance/employee_advance.js
@@ -81,5 +81,18 @@ frappe.ui.form.on('Employee Advance', {
frappe.set_route("Form", doclist[0].doctype, doclist[0].name);
}
});
+ },
+
+ employee: function (frm) {
+ return frappe.call({
+ method: "erpnext.hr.doctype.employee_advance.employee_advance.get_due_advance_amount",
+ args: {
+ "employee": frm.doc.employee,
+ "posting_date": frm.doc.posting_date
+ },
+ callback: function(r) {
+ frm.set_value("due_advance_amount",r.message);
+ }
+ });
}
});
diff --git a/erpnext/hr/doctype/employee_advance/employee_advance.json b/erpnext/hr/doctype/employee_advance/employee_advance.json
index 79e49b0f6149..a919d7380a88 100644
--- a/erpnext/hr/doctype/employee_advance/employee_advance.json
+++ b/erpnext/hr/doctype/employee_advance/employee_advance.json
@@ -42,6 +42,7 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -73,6 +74,7 @@
"reqd": 1,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -104,6 +106,7 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -133,6 +136,7 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -165,6 +169,7 @@
"reqd": 1,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -194,6 +199,7 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -224,6 +230,7 @@
"reqd": 1,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -253,6 +260,7 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -284,6 +292,7 @@
"reqd": 1,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -315,6 +324,40 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
+ "unique": 0
+ },
+ {
+ "allow_bulk_edit": 0,
+ "allow_on_submit": 0,
+ "bold": 0,
+ "collapsible": 0,
+ "columns": 0,
+ "depends_on": "eval:cur_frm.doc.employee",
+ "fieldname": "due_advance_amount",
+ "fieldtype": "Currency",
+ "hidden": 0,
+ "ignore_user_permissions": 0,
+ "ignore_xss_filter": 0,
+ "in_filter": 0,
+ "in_global_search": 0,
+ "in_list_view": 0,
+ "in_standard_filter": 0,
+ "label": "Due Advance Amount",
+ "length": 0,
+ "no_copy": 0,
+ "options": "Company:company:default_currency",
+ "permlevel": 0,
+ "precision": "",
+ "print_hide": 0,
+ "print_hide_if_no_value": 0,
+ "read_only": 1,
+ "remember_last_selected_value": 0,
+ "report_hide": 0,
+ "reqd": 0,
+ "search_index": 0,
+ "set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -346,6 +389,7 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -375,6 +419,7 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -406,6 +451,7 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -437,6 +483,7 @@
"reqd": 1,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -467,6 +514,7 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -496,6 +544,7 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -527,6 +576,7 @@
"reqd": 1,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
},
{
@@ -558,6 +608,7 @@
"reqd": 0,
"search_index": 0,
"set_only_once": 0,
+ "translatable": 0,
"unique": 0
}
],
@@ -571,8 +622,8 @@
"issingle": 0,
"istable": 0,
"max_attachments": 0,
- "modified": "2017-12-14 17:53:11.084810",
- "modified_by": "[email protected]",
+ "modified": "2018-03-10 17:09:35.969874",
+ "modified_by": "Administrator",
"module": "HR",
"name": "Employee Advance",
"name_case": "",
diff --git a/erpnext/hr/doctype/employee_advance/employee_advance.py b/erpnext/hr/doctype/employee_advance/employee_advance.py
index 28547fb28599..50be56df6e1d 100644
--- a/erpnext/hr/doctype/employee_advance/employee_advance.py
+++ b/erpnext/hr/doctype/employee_advance/employee_advance.py
@@ -53,6 +53,7 @@ def set_total_advance_paid(self):
self.set_status()
frappe.db.set_value("Employee Advance", self.name , "status", self.status)
+
def update_claimed_amount(self):
claimed_amount = frappe.db.sql("""
select sum(ifnull(allocated_amount, 0))
@@ -60,7 +61,14 @@ def update_claimed_amount(self):
where employee_advance = %s and docstatus=1 and allocated_amount > 0
""", self.name)[0][0]
- frappe.db.set_value("Employee Advance", self.name, "claimed_amount", claimed_amount)
+		frappe.db.set_value("Employee Advance", self.name, "claimed_amount", claimed_amount)
+
+@frappe.whitelist()
+def get_due_advance_amount(employee, posting_date):
+ employee_due_amount = frappe.get_all("Employee Advance", \
+ filters = {"employee":employee, "docstatus":1, "posting_date":("<=", posting_date)}, \
+ fields = ["advance_amount", "paid_amount"])
+ return sum([(emp.advance_amount - emp.paid_amount) for emp in employee_due_amount])
@frappe.whitelist()
def make_bank_entry(dt, dn):
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
}
|
|
frappe__erpnext-28580@a9c179f
|
frappe/erpnext
|
Python
| 28,580
|
feat: Bulk Transaction Processing
|
**Before this Feature**
Let's take an example to get a better idea:
1. If a user had to create multiple sales invoices from multiple sales orders, it had to be done one by one, by going into each sales order and creating a sales invoice against it, which consumed a lot of time. The same was true for many doctypes where multiple documents had to be created from multiple source documents.
2. The problem was identified in issue #28229
**Tasks**
- [x] Notification to the user when job is completed and some minor tasks.
- [x] Add Logger For Transactions
- [x] Tests
- [x] Increase Code Coverage for codecov/project
The following operations can be performed in bulk:
<img width="934" alt="Screenshot 2021-11-26 at 1 49 25 PM" src="https://user-images.githubusercontent.com/49878143/143549419-17367ca7-82e5-4ea4-9abb-2a37656e56c1.png">
**After this Feature**
1. To tackle this problem, options are added to the Actions menu: when the user selects multiple invoices/orders, the Actions menu button appears, from which the user can pick the bulk operation to be performed, saving a lot of time.
2. Since creating many invoices/orders can take a long time, it should not block the user from doing other operations, so if there are more than 10 invoices/orders a background job is created for them.
3. While bulk-creating invoices/orders, exceptions might occur that would halt the creation; there is a mechanism to bypass failing invoices/orders and continue creating the remaining ones in the queue (see the sketch after this list).
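A condensed sketch of the dispatch described in points 2 and 3 (entry-point names taken from the patch attached to this record; create_document is a hypothetical stand-in for the mapped make_* calls):
```python
# Simplified sketch of the bulk-processing entry point from the patch under review.
import json
import frappe

@frappe.whitelist()
def transaction_processing(data, to_create):
	deserialized_data = json.loads(data)
	if len(deserialized_data) > 10:
		# more than 10 documents: run as a background job so the user is not blocked
		frappe.enqueue(job, deserialized_data=deserialized_data, to_create=to_create)
	else:
		job(deserialized_data, to_create)

def job(deserialized_data, to_create):
	for d in deserialized_data:
		try:
			# create_document() is a hypothetical helper standing in for the
			# make_sales_invoice / make_delivery_note / ... calls in the patch
			create_document(to_create, d.get("name"))
		except Exception:
			# a failing source document is reported and skipped so the rest
			# of the queue keeps processing
			frappe.msgprint("Error while creating {0} from {1}".format(to_create, d.get("name")))
```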

|
2021-11-26T08:22:00Z
|
Bulk Transaction Processing
Problem Statement:
Allow creating bulk transactions to process multiple invoices/DN from Sales Orders, and likewise for other common flows (PR/PI from PO).
Expected Solution:
Use bulk actions in the list view to offer options for creating transactions in one go. This will save time compared to processing transactions one by one via the UI or an Excel import.
|
[
{
"body": "Problem Statement:\r\nAllow to create bulk transactions to process multiple invoices/DN from Sales Order. Likewise for other common flows (PR/PI from PO).\r\n\r\nExpected Solution:\r\nUse bulk actions in list view to give options to create transactions in one go. This will save time in processing multiple transactions as opposed to one by one by UI, or Excel import.",
"number": 28229,
"title": "Bulk Transaction Processing"
}
] |
79ab8e645931803eb5ed844683f9923ab3555ec9
|
{
"head_commit": "a9c179f0d937d02fabe7f02825a573de0988de2e",
"head_commit_message": "fix: add flags to ignore validations and exception handling correction",
"patch_to_review": "diff --git a/erpnext/accounts/doctype/purchase_invoice/purchase_invoice_list.js b/erpnext/accounts/doctype/purchase_invoice/purchase_invoice_list.js\nindex f6ff83add8c5..be6600ad9b17 100644\n--- a/erpnext/accounts/doctype/purchase_invoice/purchase_invoice_list.js\n+++ b/erpnext/accounts/doctype/purchase_invoice/purchase_invoice_list.js\n@@ -56,4 +56,42 @@ frappe.listview_settings[\"Purchase Invoice\"] = {\n \t\t\t];\n \t\t}\n \t},\n+\n+\tonload: function(listview) {\n+\t\tlistview.page.add_action_item(__(\"Purchase Receipt\"), ()=>{\n+\t\t checked_items = listview.get_checked_items();\n+\t\t\tcount_of_rows = checked_items.length;\n+\n+\t\t frappe.confirm(__(\"Create {0} Purchase Receipt ?\", [count_of_rows]),()=>{\n+\t\t\t\tfrappe.call({\n+\t\t\t\t\tmethod:\"erpnext.utilities.bulk_transaction.transaction_processing\",\n+\t\t\t\t\targs: {data: checked_items, to_create: \"Purchase Receipt From Purchase Invoice\"}\n+\t\t\t\t}).then(r => {\n+\t\t\t\t\tconsole.log(r);\n+\t\t\t\t})\n+\n+\t\t\t\tif(count_of_rows > 10){\n+\t\t\t\t\tfrappe.show_alert(`Starting a background job to create ${count_of_rows} purchase receipt`,count_of_rows);\n+\t\t\t\t}\n+\t\t })\n+\t\t});\n+\n+\t\tlistview.page.add_action_item(__(\"Payment\"), ()=>{\n+\t\t checked_items = listview.get_checked_items();\n+\t\t count_of_rows = checked_items.length;\n+\n+\t\t\tfrappe.confirm(__(\"Make {0} Payment ?\", [count_of_rows]),()=>{\n+\t\t frappe.call({\n+\t\t method:\"erpnext.utilities.bulk_transaction.transaction_processing\",\n+\t\t args: {data: checked_items, to_create: \"Payment From Purchase Invoice\"}\n+\t\t }).then(r => {\n+\t\t console.log(r);\n+\t\t })\n+\n+\t\t\t\tif(count_of_rows > 10){\n+\t\t\t\t\tfrappe.show_alert(`Starting a background job to create ${count_of_rows} payment`,count_of_rows);\n+\t\t\t\t}\n+\t\t })\n+\t\t});\n+\t}\n };\ndiff --git a/erpnext/accounts/doctype/sales_invoice/sales_invoice_list.js b/erpnext/accounts/doctype/sales_invoice/sales_invoice_list.js\nindex 06e6f5118397..954e55136d99 100644\n--- a/erpnext/accounts/doctype/sales_invoice/sales_invoice_list.js\n+++ b/erpnext/accounts/doctype/sales_invoice/sales_invoice_list.js\n@@ -21,5 +21,43 @@ frappe.listview_settings['Sales Invoice'] = {\n \t\t};\n \t\treturn [__(doc.status), status_colors[doc.status], \"status,=,\"+doc.status];\n \t},\n-\tright_column: \"grand_total\"\n+\tright_column: \"grand_total\",\n+\n+\tonload: function(listview) {\n+\t\tlistview.page.add_action_item(__(\"Delivery Note\"), ()=>{\n+\t\t\tchecked_items = listview.get_checked_items();\n+\t\t\tcount_of_rows = checked_items.length;\n+\n+\t\t\tfrappe.confirm(__(\"Create {0} Delivery Note ?\", [count_of_rows]),()=>{\n+\t\t\t\tfrappe.call({\n+\t\t\t\t\tmethod:\"erpnext.utilities.bulk_transaction.transaction_processing\",\n+\t\t\t\t\targs: {data: checked_items, to_create: \"Delivery Note From Sales Invoice\"}\n+\t\t\t\t}).then(r => {\n+\t\t\t\t\tconsole.log(r);\n+\t\t\t\t})\n+\n+\t\t\t\tif(count_of_rows > 10){\n+\t\t\t\t\tfrappe.show_alert(`Starting a background job to create ${count_of_rows} delivery note`,count_of_rows);\n+\t\t\t\t}\n+\t\t\t})\n+\t\t});\n+\n+\t\tlistview.page.add_action_item(__(\"Payment\"), ()=>{\n+\t\t\tchecked_items = listview.get_checked_items();\n+\t\t\tcount_of_rows = checked_items.length;\n+\n+\t\t\tfrappe.confirm(__(\"Make {0} Payment ?\", [count_of_rows]),()=>{\n+\t\t\t\tfrappe.call({\n+\t\t\t\t\tmethod:\"erpnext.utilities.bulk_transaction.transaction_processing\",\n+\t\t\t\t\targs: {data: checked_items, to_create: \"Payment From Sales 
Invoice\"}\n+\t\t\t\t}).then(r => {\n+\t\t\t\t\tconsole.log(r);\n+\t\t\t\t})\n+\n+\t\t\t\tif(count_of_rows > 10){\n+\t\t\t\t\tfrappe.show_alert(`Starting a background job to create ${count_of_rows} payment`,count_of_rows);\n+\t\t\t\t}\n+\t\t\t})\n+\t\t});\n+\t}\n };\ndiff --git a/erpnext/buying/doctype/purchase_order/purchase_order_list.js b/erpnext/buying/doctype/purchase_order/purchase_order_list.js\nindex 8413eb65c3f1..dc81a79465fe 100644\n--- a/erpnext/buying/doctype/purchase_order/purchase_order_list.js\n+++ b/erpnext/buying/doctype/purchase_order/purchase_order_list.js\n@@ -29,8 +29,61 @@ frappe.listview_settings['Purchase Order'] = {\n \t\t\tlistview.call_for_selected_items(method, { \"status\": \"Closed\" });\n \t\t});\n \n-\t\tlistview.page.add_menu_item(__(\"Re-open\"), function () {\n+\t\tlistview.page.add_menu_item(__(\"Reopen\"), function () {\n \t\t\tlistview.call_for_selected_items(method, { \"status\": \"Submitted\" });\n \t\t});\n+\n+\n+\t\tlistview.page.add_action_item(__(\"Purchase Invoice\"), ()=>{\n+\t\t\tchecked_items = listview.get_checked_items();\n+\t\t\tcount_of_rows = checked_items.length;\n+\n+\t\t\tfrappe.confirm(__(\"Create {0} Purchase Invoice ?\", [count_of_rows]),()=>{\n+\t\t\t\tfrappe.call({\n+\t\t\t\t\tmethod:\"erpnext.utilities.bulk_transaction.transaction_processing\",\n+\t\t\t\t\targs: {data: checked_items, to_create: \"Purchase Invoice From Purchase Order\"}\n+\t\t\t\t\t}).then(r => {\n+\t\t\t\t\t\tconsole.log(r);\n+\t\t\t\t\t\tfrappe.show_alert(\"Purchase Invoice Created Successfully !\",5);\n+\t\t\t\t\t})\n+\t\t\t})\n+\t\t});\n+\n+\t\tlistview.page.add_action_item(__(\"Purchase Receipt\"), ()=>{\n+\t\t\tchecked_items = listview.get_checked_items();\n+\t\t\tcount_of_rows = checked_items.length;\n+\n+\t\t\tfrappe.confirm(__(\"Create {0} Purchase Receipt ?\", [count_of_rows]),()=>{\n+\t\t\t\tfrappe.call({\n+\t\t\t\t\tmethod:\"erpnext.utilities.bulk_transaction.transaction_processing\",\n+\t\t\t\t\targs: {data: checked_items, to_create: \"Purchase Receipt From Purchase Order\"}\n+\t\t\t\t}).then(r => {\n+\t\t\t\t\tconsole.log(r);\n+\t\t\t\t})\n+\n+\t\t\t\tif(count_of_rows > 10){\n+\t\t\t\t\tfrappe.show_alert(`Starting a background job to create ${count_of_rows} purchase receipt`,count_of_rows);\n+\t\t\t\t}\n+\t\t\t})\n+\t\t});\n+\n+\t\tlistview.page.add_action_item(__(\"Advance Payment\"), ()=>{\n+\t\t\tchecked_items = listview.get_checked_items();\n+\t\t\tcount_of_rows = checked_items.length;\n+\n+\t\t\tfrappe.confirm(__(\"Make {0} Advance Payment ?\", [count_of_rows]),()=>{\n+\t\t\t\tfrappe.call({\n+\t\t\t\t\tmethod:\"erpnext.utilities.bulk_transaction.transaction_processing\",\n+\t\t\t\t\targs: {data: checked_items, to_create: \"Advance Payment From Purchase Order\"}\n+\t\t\t\t}).then(r => {\n+\t\t\t\t\tconsole.log(r);\n+\t\t\t\t})\n+\n+\t\t\t\tif(count_of_rows > 10){\n+\t\t\t\t\tfrappe.show_alert(`Starting a background job to create ${count_of_rows} advance payment`,count_of_rows);\n+\t\t\t\t}\n+\t\t\t})\n+\t\t});\n+\n \t}\n };\ndiff --git a/erpnext/buying/doctype/supplier_quotation/supplier_quotation.py b/erpnext/buying/doctype/supplier_quotation/supplier_quotation.py\nindex d65ab94a6d3a..171de7882dce 100644\n--- a/erpnext/buying/doctype/supplier_quotation/supplier_quotation.py\n+++ b/erpnext/buying/doctype/supplier_quotation/supplier_quotation.py\n@@ -142,6 +142,26 @@ def update_item(obj, target, source_parent):\n \n \treturn doclist\n \[email protected]()\n+def make_purchase_invoice(source_name, target_doc=None):\n+\tdoc = 
get_mapped_doc(\"Supplier Quotation\", source_name, {\n+\t\t\"Supplier Quotation\": {\n+\t\t\t\"doctype\": \"Purchase Invoice\",\n+\t\t\t\"validation\": {\n+\t\t\t\t\"docstatus\": [\"=\", 1],\n+\t\t\t}\n+\t\t},\n+\t\t\"Supplier Quotation Item\": {\n+\t\t\t\"doctype\": \"Purchase Invoice Item\"\n+\t\t},\n+\t\t\"Purchase Taxes and Charges\": {\n+\t\t\t\"doctype\": \"Purchase Taxes and Charges\"\n+\t\t}\n+\t}, target_doc)\n+\n+\treturn doc\n+\n+\n @frappe.whitelist()\n def make_quotation(source_name, target_doc=None):\n \tdoclist = get_mapped_doc(\"Supplier Quotation\", source_name, {\ndiff --git a/erpnext/buying/doctype/supplier_quotation/supplier_quotation_list.js b/erpnext/buying/doctype/supplier_quotation/supplier_quotation_list.js\nindex 5ab6c980d00e..8fcbe129c4f6 100644\n--- a/erpnext/buying/doctype/supplier_quotation/supplier_quotation_list.js\n+++ b/erpnext/buying/doctype/supplier_quotation/supplier_quotation_list.js\n@@ -8,5 +8,42 @@ frappe.listview_settings['Supplier Quotation'] = {\n \t\t} else if(doc.status===\"Expired\") {\n \t\t\treturn [__(\"Expired\"), \"gray\", \"status,=,Expired\"];\n \t\t}\n+\t},\n+\n+\tonload: function(listview){\n+\t\tlistview.page.add_action_item(__(\"Purchase Order\"), ()=>{\n+\t\t\tchecked_items = listview.get_checked_items();\n+\t\t\tcount_of_rows = checked_items.length;\n+\n+\t\t\tfrappe.confirm(__(\"Create {0} Purchase Order ?\", [count_of_rows]),()=>{\n+\t\t\t\tfrappe.call({\n+\t\t\t\t\tmethod:\"erpnext.utilities.bulk_transaction.transaction_processing\",\n+\t\t\t\t\targs: {data: checked_items, to_create: \"Purchase Order From Supplier Quotation\"}\n+\t\t\t\t}).then(r => {\n+\t\t\t\t\tconsole.log(r);\n+\t\t\t\t})\n+\t\t\t\tif(count_of_rows > 10){\n+\t\t\t\t\tfrappe.show_alert(`Starting a background job to create ${count_of_rows} purchase order`,count_of_rows);\n+\t\t\t\t}\n+\t\t\t});\n+\t\t});\n+\n+\t\tlistview.page.add_action_item(__(\"Purchase Invoice\"), ()=>{\n+\t\t\tchecked_items = listview.get_checked_items();\n+\t\t\tcount_of_rows = checked_items.length;\n+\n+\t\t\tfrappe.confirm(__(\"Create {0} Purchase Invoice ?\", [count_of_rows]),()=>{\n+\t\t\t\tfrappe.call({\n+\t\t\t\t\tmethod:\"erpnext.utilities.bulk_transaction.transaction_processing\",\n+\t\t\t\t\targs: {data: checked_items, to_create: \"Purchase Invoice From Supplier Quotation\"}\n+\t\t\t\t}).then(r => {\n+\t\t\t\t\tconsole.log(r);\n+\t\t\t\t})\n+\n+\t\t\t\tif(count_of_rows > 10){\n+\t\t\t\t\tfrappe.show_alert(`Starting a background job to create ${count_of_rows} purchase invoice`,count_of_rows);\n+\t\t\t\t}\n+\t\t\t});\n+\t\t});\n \t}\n };\ndiff --git a/erpnext/selling/doctype/quotation/quotation_list.js b/erpnext/selling/doctype/quotation/quotation_list.js\nindex b631685bd19b..995f9154f359 100644\n--- a/erpnext/selling/doctype/quotation/quotation_list.js\n+++ b/erpnext/selling/doctype/quotation/quotation_list.js\n@@ -12,6 +12,41 @@ frappe.listview_settings['Quotation'] = {\n \t\t\t\t};\n \t\t\t};\n \t\t}\n+\n+\t\tlistview.page.add_action_item(__(\"Sales Order\"),()=>{\n+\t\t\tchecked_items = listview.get_checked_items();\n+\t\t\tcount_of_rows = checked_items.length;\n+\n+\t\t\tfrappe.confirm(__(\"Create {0} Sales Order ?\", [count_of_rows]),()=>{\n+\t\t\t\tfrappe.call({\n+\t\t\t\t\tmethod:\"erpnext.utilities.bulk_transaction.transaction_processing\",\n+\t\t\t\t\targs: {data: checked_items, to_create: \"Sales Order From Quotation\"}\n+\t\t\t\t}).then(r => {\n+\t\t\t\t\tconsole.log(r);\n+\t\t\t\t})\n+\t\t\t\tif(count_of_rows > 10){\n+\t\t\t\t\tfrappe.show_alert(`Starting a 
background to create ${count_of_rows} sales order`,count_of_rows);\n+\t\t\t\t}\n+\t\t\t})\n+\t\t});\n+\n+\t\tlistview.page.add_action_item(__(\"Sales Invoice\"),()=>{\n+\t\t\tchecked_items = listview.get_checked_items();\n+\t\t\tcount_of_rows = checked_items.length;\n+\n+\t\t\tfrappe.confirm(__(\"Create {0} Sales Invoice ?\", [count_of_rows]),()=>{\n+\t\t\t\tfrappe.call({\n+\t\t\t\t\tmethod:\"erpnext.utilities.bulk_transaction.transaction_processing\",\n+\t\t\t\t\targs: {data: checked_items, to_create: \"Sales Invoice From Quotation\"}\n+\t\t\t\t}).then(r => {\n+\t\t\t\t\tconsole.log(r);\n+\t\t\t\t})\n+\t\t\tif(count_of_rows > 10){\n+\t\t\t\tfrappe.show_alert(`Starting a background to create ${count_of_rows} sales invoice`,count_of_rows);\n+\t\t\t}\n+\n+\t\t\t})\n+\t\t});\n \t},\n \n \tget_indicator: function(doc) {\ndiff --git a/erpnext/selling/doctype/sales_order/sales_order_list.js b/erpnext/selling/doctype/sales_order/sales_order_list.js\nindex 26d96d59f299..e0b51732e3e8 100644\n--- a/erpnext/selling/doctype/sales_order/sales_order_list.js\n+++ b/erpnext/selling/doctype/sales_order/sales_order_list.js\n@@ -16,7 +16,7 @@ frappe.listview_settings['Sales Order'] = {\n \t\t\t\treturn [__(\"Overdue\"), \"red\",\n \t\t\t\t\t\"per_delivered,<,100|delivery_date,<,Today|status,!=,Closed\"];\n \t\t\t} else if (flt(doc.grand_total) === 0) {\n-\t\t\t\t// not delivered (zero-amount order)\n+\t\t\t\t// not delivered (zeroount order)\n \t\t\t\treturn [__(\"To Deliver\"), \"orange\",\n \t\t\t\t\t\"per_delivered,<,100|grand_total,=,0|status,!=,Closed\"];\n \t\t\t} else if (flt(doc.per_billed, 6) < 100) {\n@@ -48,5 +48,53 @@ frappe.listview_settings['Sales Order'] = {\n \t\t\tlistview.call_for_selected_items(method, {\"status\": \"Submitted\"});\n \t\t});\n \n+\t\tlistview.page.add_action_item(__(\"Sales Invoice\"),()=>{\n+\t\t\tchecked_items = listview.get_checked_items();\n+\t\t\tcount_of_rows = checked_items.length;\n+\t\t\tfrappe.confirm(__(\"Create {0} Sales Invoice ?\", [count_of_rows]),()=>{\n+\t\t\t\tfrappe.call({\n+\t\t\t\t\tmethod:\"erpnext.utilities.bulk_transaction.transaction_processing\",\n+\t\t\t\t\targs: {data: checked_items, to_create: \"Sales Invoice From Sales Order\"}\n+\t\t\t\t}).then(r => {\n+\t\t\t\t\tconsole.log(r);\n+\t\t\t\t})\n+\t\t\t\tif(count_of_rows > 10){\n+\t\t\t\t\tfrappe.show_alert(`Starting a background job to create ${count_of_rows} sales invoice`,count_of_rows);\n+\t\t\t\t}\n+\t\t\t})\n+\t\t});\n+\n+\t\tlistview.page.add_action_item(__(\"Delivery Note\"), ()=>{\n+\t\t\tchecked_items = listview.get_checked_items();\n+\t\t\tcount_of_rows = checked_items.length;\n+\t\t\tfrappe.confirm(__(\"Create {0} Delivery Note ?\", [count_of_rows]),()=>{\n+\t\t\t\tfrappe.call({\n+\t\t\t\t\tmethod:\"erpnext.utilities.bulk_transaction.transaction_processing\",\n+\t\t\t\t\targs: {data: checked_items, to_create: \"Delivery Note From Sales Order\"}\n+\t\t\t\t\t}).then(r => {\n+\t\t\t\t\t\tconsole.log(r);\n+\t\t\t\t\t})\n+\t\t\t\tif(count_of_rows > 10){\n+\t\t\t\t\tfrappe.show_alert(`Starting a background to create ${count_of_rows} delivery note`,count_of_rows);\n+\t\t\t\t}\n+\t\t\t})\n+\t\t})\n+\n+\t\tlistview.page.add_action_item(__(\"Advance Payment\"), ()=>{\n+\t\t\tchecked_items = listview.get_checked_items();\n+\t\t\tcount_of_rows = checked_items.length;\n+\t\t\tfrappe.confirm(__(\"Create {0} Advance Payment ?\", [count_of_rows]),()=>{\n+\t\t\t\tfrappe.call({\n+\t\t\t\t\tmethod:\"erpnext.utilities.bulk_transaction.transaction_processing\",\n+\t\t\t\t\targs: {data: checked_items, 
to_create: \"Advance Payment From Sales Order\"}\n+\t\t\t\t}).then(r => {\n+\t\t\t\t\tconsole.log(r);\n+\t\t\t\t})\n+\t\t\t\tif(count_of_rows > 10){\n+\t\t\t\t\tfrappe.show_alert(`Starting a background job to create ${count_of_rows} Advance Payment`,count_of_rows);\n+\t\t\t\t}\n+\t\t\t})\n+\t\t})\n+\n \t}\n };\ndiff --git a/erpnext/stock/doctype/delivery_note/delivery_note.py b/erpnext/stock/doctype/delivery_note/delivery_note.py\nindex 52684607b4ba..66acec166795 100644\n--- a/erpnext/stock/doctype/delivery_note/delivery_note.py\n+++ b/erpnext/stock/doctype/delivery_note/delivery_note.py\n@@ -580,7 +580,18 @@ def make_packing_slip(source_name, target_doc=None):\n \t\t\t\"validation\": {\n \t\t\t\t\"docstatus\": [\"=\", 0]\n \t\t\t}\n+\t\t},\n+\n+\t\t\"Delivery Note Item\":{\n+\t\t\"doctype\": \"Packing Slip Item\",\n+\t\t\"field_map\": {\n+\t\t\t\"item_code\": \"item_code\",\n+\t\t\t\"item_name\": \"item_name\",\n+\t\t\t\"description\": \"description\",\n+\t\t\t\"qty\": \"qty\",\n \t\t}\n+\t\t}\n+\n \t}, target_doc)\n \n \treturn doclist\ndiff --git a/erpnext/stock/doctype/delivery_note/delivery_note_list.js b/erpnext/stock/doctype/delivery_note/delivery_note_list.js\nindex 040289804733..5754c7c8d299 100644\n--- a/erpnext/stock/doctype/delivery_note/delivery_note_list.js\n+++ b/erpnext/stock/doctype/delivery_note/delivery_note_list.js\n@@ -14,7 +14,7 @@ frappe.listview_settings['Delivery Note'] = {\n \t\t\treturn [__(\"Completed\"), \"green\", \"per_billed,=,100\"];\n \t\t}\n \t},\n-\tonload: function (doclist) {\n+\tonload: function (listview) {\n \t\tconst action = () => {\n \t\t\tconst selected_docs = doclist.get_checked_items();\n \t\t\tconst docnames = doclist.get_checked_items(true);\n@@ -54,6 +54,42 @@ frappe.listview_settings['Delivery Note'] = {\n \t\t\t};\n \t\t};\n \n-\t\tdoclist.page.add_actions_menu_item(__('Create Delivery Trip'), action, false);\n+\t\t// doclist.page.add_actions_menu_item(__('Create Delivery Trip'), action, false);\n+\n+\t\tlistview.page.add_action_item(__('Create Delivery Trip'), action);\n+\n+\t\tlistview.page.add_action_item(__(\"Sales Invoice\"), ()=>{\n+\t\t\tchecked_items = listview.get_checked_items();\n+\t\t\tcount_of_rows = checked_items.length;\n+\t\t\tfrappe.confirm(__(\"Create {0} Sales Invoice ?\", [count_of_rows]),()=>{\n+\t\t\tfrappe.call({\n+\t\t\t\tmethod:\"erpnext.utilities.bulk_transaction.transaction_processing\",\n+\t\t\t\targs: {data: checked_items, to_create: \"Sales Invoice From Delivery Note\"}\n+\t\t\t\t}).then(r => {\n+\t\t\t\t\tconsole.log(r);\n+\t\t\t\t})\n+\n+\t\t\t\tif(count_of_rows > 10){\n+\t\t\t\t\tfrappe.show_alert(`Starting a background job to create ${count_of_rows} sales invoice`,count_of_rows);\n+\t\t\t\t}\n+\t\t\t})\n+\t\t});\n+\n+\t\tlistview.page.add_action_item(__(\"Packaging Slip From Delivery Note\"), ()=>{\n+\t\t\tchecked_items = listview.get_checked_items();\n+\t\t\tcount_of_rows = checked_items.length;\n+\t\t\tfrappe.confirm(__(\"Create {0} Packaging Slip ?\", [count_of_rows]),()=>{\n+\t\t\tfrappe.call({\n+\t\t\t\tmethod:\"erpnext.utilities.bulk_transaction.transaction_processing\",\n+\t\t\t\targs: {data: checked_items, to_create: \"Packaging Slip\"}\n+\t\t\t\t}).then(r => {\n+\t\t\t\t\t\tconsole.log(r);\n+\t\t\t\t})\n+\n+\t\t\t\tif(count_of_rows > 10){\n+\t\t\t\t\tfrappe.show_alert(`Starting a background job to create ${count_of_rows} packing slip`,count_of_rows);\n+\t\t\t\t}\n+\t\t\t})\n+\t\t});\n \t}\n };\ndiff --git a/erpnext/stock/doctype/purchase_receipt/purchase_receipt_list.js 
b/erpnext/stock/doctype/purchase_receipt/purchase_receipt_list.js\nindex 77711de93f7e..0840b9855eb3 100644\n--- a/erpnext/stock/doctype/purchase_receipt/purchase_receipt_list.js\n+++ b/erpnext/stock/doctype/purchase_receipt/purchase_receipt_list.js\n@@ -13,5 +13,27 @@ frappe.listview_settings['Purchase Receipt'] = {\n \t\t} else if (flt(doc.grand_total) === 0 || flt(doc.per_billed, 2) === 100) {\n \t\t\treturn [__(\"Completed\"), \"green\", \"per_billed,=,100\"];\n \t\t}\n+\t},\n+\n+\tonload: function(listview){\n+\n+\tlistview.page.add_action_item(__(\"Purchase Invoice\"), ()=>{\n+\t\tchecked_items = listview.get_checked_items();\n+\t\tcount_of_rows = checked_items.length;\n+\n+\t\tfrappe.confirm(__(\"Create {0} Purchase Invoice ?\", [count_of_rows]),()=>{\n+\t\t\tfrappe.call({\n+\t\t\tmethod:\"erpnext.utilities.bulk_transaction.transaction_processing\",\n+\t\t\targs: {data: checked_items, to_create: \"Purchase Invoice From Purchase Receipt\"}\n+\t\t\t}).then(r => {\n+\t\t\tconsole.log(r);\n+\t\t\t})\n+\n+\t\t\tif(count_of_rows > 10){\n+\t\t\t\tfrappe.show_alert(`Starting a background job to create ${count_of_rows} purchase invoice`,count_of_rows);\n+\t\t\t}\n+\t\t})\n+\t\t});\n \t}\n+\n };\ndiff --git a/erpnext/utilities/bulk_transaction.py b/erpnext/utilities/bulk_transaction.py\nnew file mode 100644\nindex 000000000000..c4695a89bd07\n--- /dev/null\n+++ b/erpnext/utilities/bulk_transaction.py\n@@ -0,0 +1,128 @@\n+import json\n+\n+import frappe\n+\n+\[email protected]()\n+def transaction_processing(data, to_create):\n+\tdeserialized_data = json.loads(data)\n+\tlength_of_data = len(deserialized_data)\n+\n+\tif length_of_data > 10:\n+\t\t# frappe.msgprint(\"Started a background job to create {1} {0}\".format(to_create,length_of_data))\n+\t\tfrappe.enqueue(job, deserialized_data=deserialized_data, to_create=to_create)\n+\telse:\n+\t\tjob(deserialized_data, to_create)\n+\n+def job(deserialized_data, to_create):\n+\tfrom erpnext.accounts.doctype.payment_entry import payment_entry\n+\tfrom erpnext.accounts.doctype.purchase_invoice import purchase_invoice\n+\tfrom erpnext.accounts.doctype.sales_invoice import sales_invoice\n+\tfrom erpnext.buying.doctype.purchase_order import purchase_order\n+\tfrom erpnext.buying.doctype.supplier_quotation import supplier_quotation\n+\tfrom erpnext.selling.doctype.quotation import quotation\n+\tfrom erpnext.selling.doctype.sales_order import sales_order\n+\tfrom erpnext.stock.doctype.delivery_note import delivery_note\n+\tfrom erpnext.stock.doctype.purchase_receipt import purchase_receipt\n+\n+\ti = 0\n+\tfor d in deserialized_data:\n+\t\ttry:\n+\t\t\ti+=1\n+\n+\t\t\t# From Sales Order\n+\t\t\tif to_create == \"Sales Invoice From Sales Order\":\n+\t\t\t\tsi = sales_order.make_sales_invoice(d.get('name'))\n+\t\t\t\tsi.flags.ignore_validate = True\n+\t\t\t\tsi.insert(ignore_mandatory=True)\n+\n+\t\t\tif to_create == \"Delivery Note From Sales Order\":\n+\t\t\t\tdn_so = sales_order.make_delivery_note(d.get('name'))\n+\t\t\t\tsi.flags.ignore_validate = True\n+\t\t\t\tdn_so.insert(ignore_mandatory=True)\n+\n+\t\t\tif to_create == \"Advance Payment From Sales Order\":\n+\t\t\t\tap_so = payment_entry.get_payment_entry(\"Sales Order\", d.get('name'))\n+\t\t\t\tap_so.flags.ignore_validate = True\n+\t\t\t\tap_so.insert(ignore_mandatory=True)\n+\n+\t\t\t# From Sales Invoice\n+\t\t\tif to_create == \"Delivery Note From Sales Invoice\":\n+\t\t\t\tdn_si = sales_invoice.make_delivery_note(d.get('name'))\n+\t\t\t\tdn_si.flags.ignore_validate = 
True\n+\t\t\t\tdn_si.insert(ignore_mandatory=True)\n+\n+\t\t\tif to_create == \"Payment Sales Invoice\":\n+\t\t\t\tp_si = payment_entry.get_payment_entry(\"Sales Invoice\", d.get('name'))\n+\t\t\t\tp_si.flags.ignore_validate = True\n+\t\t\t\tp_si.insert(ignore_mandatory=True)\n+\n+\t\t\t# From Delivery Note\n+\t\t\tif to_create == \"Sales Invoice From Delivery Note\":\n+\t\t\t\tsi_from_dn = delivery_note.make_sales_invoice(d.get('name'))\n+\t\t\t\tsi_from_dn.flags.ignore_validate = True\n+\t\t\t\tsi_from_dn.insert(ignore_mandatory=True)\n+\n+\t\t\tif to_create == \"Packaging Slip From Delivery Note\":\n+\t\t\t\tps = delivery_note.make_packing_slip(d.get('name'))\n+\t\t\t\tps.flags.ignore_validate = True\n+\t\t\t\tps.insert(ignore_mandatory=True)\n+\n+\t\t\t# From Quotation\n+\t\t\tif to_create == \"Sales Order From Quotation\":\n+\t\t\t\tso_qtn = quotation._make_sales_order(d.get('name'))\n+\t\t\t\tso_qtn.flags.ignore_validate = True\n+\t\t\t\tso_qtn.insert(ignore_mandatory=True)\n+\n+\t\t\tif to_create == \"Sales Invoice From Quotation\":\n+\t\t\t\tsi_qtn = quotation._make_sales_invoice(d.get('name'))\n+\t\t\t\tsi_qtn.flags.ignore_validate = True\n+\t\t\t\tsi_qtn.insert(ignore_mandatory=True)\n+\n+\t\t\t# From Supplier Quotation\n+\t\t\tif to_create == \"Purchase Order From Supplier Quotation\":\n+\t\t\t\tpo_sq = supplier_quotation.make_purchase_order(d.get('name'))\n+\t\t\t\tpo_sq.flags.ignore_validate = True\n+\t\t\t\tpo_sq.insert(ignore_mandatory=True)\n+\n+\t\t\tif to_create == \"Purchase Invoice From Supplier Quotation\":\n+\t\t\t\t# created method to create purchase invoice from supplier quotation\n+\t\t\t\tpi_sq = supplier_quotation.make_purchase_invoice(d.get('name'))\n+\t\t\t\tpi_sq.flags.ignonre_validate = True\n+\t\t\t\tpi_sq.insert(ignore_mandatory=True)\n+\n+\t\t\t# From Purchase Order\n+\t\t\tif to_create == \"Purchase Invoice From Purchase Order\":\n+\t\t\t\tpi_po = purchase_order.get_mapped_purchase_invoice(d.get('name'))\n+\t\t\t\tpi_po.flags.ignore_validate = True\n+\t\t\t\tpi_po.insert(ignore_mandatory=True)\n+\n+\t\t\tif to_create == \"Purchase Receipt From Purchase Order\":\n+\t\t\t\tpr_po = purchase_order.make_purchase_receipt(d.get('name'))\n+\t\t\t\tpr_po.flags.ignore_validate = True\n+\t\t\t\tpr_po.insert(ignore_mandatory=True)\n+\n+\t\t\tif to_create == \"Advance Payment From Purchase Order\":\n+\t\t\t\tap_po = payment_entry.get_payment_entry(\"Purchase Order\", d.get('name'))\n+\t\t\t\tap_po.flags.ignore_validate = True\n+\t\t\t\tap_po.insert(ignore_mandatory=True)\n+\n+\t\t\t# From Purchase Invoice\n+\t\t\tif to_create == \"Purchase Receipt From Purchase Invoice\":\n+\t\t\t\tpr_pi = purchase_invoice.make_purchase_receipt(d.get('name'))\n+\t\t\t\tpr_pi.flags.ignore_valdiate = True\n+\t\t\t\tpr_pi.insert(ignore_mandatory=True)\n+\n+\t\t\tif to_create == \"Payment Purchase Invoice\":\n+\t\t\t\tp_pi = payment_entry.get_payment_entry(\"Purchase Invoice\", d.get(\"name\"))\n+\t\t\t\tp_pi.flags.ignore_validate = True\n+\t\t\t\tp_pi.insert(ignore_mandatory=True)\n+\n+\t\t\t# From Purchase Receipt\n+\t\t\tif to_create == \"Purchase Invoice From Purchase Receipt\":\n+\t\t\t\tpr_pi = purchase_receipt.make_purchase_invoice(d.get('name'))\n+\t\t\t\tpr_pi.flags.ignore_validate = True\n+\t\t\t\tpr_pi.insert(ignore_mandatory=True)\n+\n+\t\texcept Exception as e:\n+\t\t\tfrappe.msgprint(\"Error while creating {1} from {0}\".format(d.get('name'), to_create), title=\"Invoice Creation Failed\")\n"
}
|
[
{
"diff_hunk": "@@ -0,0 +1,128 @@\n+import json\n+\n+import frappe\n+\n+\[email protected]()\n+def transaction_processing(data, to_create):\n+\tdeserialized_data = json.loads(data)\n+\tlength_of_data = len(deserialized_data)\n+\n+\tif length_of_data > 10:\n+\t\t# frappe.msgprint(\"Started a background job to create {1} {0}\".format(to_create,length_of_data))\n+\t\tfrappe.enqueue(job, deserialized_data=deserialized_data, to_create=to_create)\n+\telse:\n+\t\tjob(deserialized_data, to_create)\n+\n+def job(deserialized_data, to_create):\n+\tfrom erpnext.accounts.doctype.payment_entry import payment_entry\n+\tfrom erpnext.accounts.doctype.purchase_invoice import purchase_invoice\n+\tfrom erpnext.accounts.doctype.sales_invoice import sales_invoice\n+\tfrom erpnext.buying.doctype.purchase_order import purchase_order\n+\tfrom erpnext.buying.doctype.supplier_quotation import supplier_quotation\n+\tfrom erpnext.selling.doctype.quotation import quotation\n+\tfrom erpnext.selling.doctype.sales_order import sales_order\n+\tfrom erpnext.stock.doctype.delivery_note import delivery_note\n+\tfrom erpnext.stock.doctype.purchase_receipt import purchase_receipt\n+\n+\ti = 0\n+\tfor d in deserialized_data:\n+\t\ttry:\n+\t\t\ti+=1\n+\n+\t\t\t# From Sales Order\n+\t\t\tif to_create == \"Sales Invoice From Sales Order\":",
"line": null,
"original_line": 34,
"original_start_line": null,
"path": "erpnext/utilities/bulk_transaction.py",
"start_line": null,
"text": "@user1:\nToo much code duplication here?\r\n\r\nPython allows treating functions as first class objects, just create a mapping like this instead: \r\n\r\n\r\n`Tuple[from_doctype, to_doctype] -> Function`\r\n\r\n\r\nIf a particular function needs non-standard or extra default arguments you can create a partial function: https://docs.python.org/3/library/functools.html#functools.partial"
}
] |
36b818c0194a18579cc75ac7cb2a981640de6308
|
diff --git a/cypress/integration/test_bulk_transaction_processing.js b/cypress/integration/test_bulk_transaction_processing.js
new file mode 100644
index 000000000000..428ec5100b53
--- /dev/null
+++ b/cypress/integration/test_bulk_transaction_processing.js
@@ -0,0 +1,44 @@
+describe("Bulk Transaction Processing", () => {
+ before(() => {
+ cy.login();
+ cy.visit("/app/website");
+ });
+
+ it("Creates To Sales Order", () => {
+ cy.visit("/app/sales-order");
+ cy.url().should("include", "/sales-order");
+ cy.window()
+ .its("frappe.csrf_token")
+ .then((csrf_token) => {
+ return cy
+ .request({
+ url: "/api/method/erpnext.tests.ui_test_bulk_transaction_processing.create_records",
+ method: "POST",
+ headers: {
+ Accept: "application/json",
+ "Content-Type": "application/json",
+ "X-Frappe-CSRF-Token": csrf_token,
+ },
+ timeout: 60000,
+ })
+ .then((res) => {
+ expect(res.status).eq(200);
+ });
+ });
+ cy.wait(5000);
+ cy.get(
+ ".list-row-head > .list-header-subject > .list-row-col > .list-check-all"
+ ).check({ force: true });
+ cy.wait(3000);
+ cy.get(".actions-btn-group > .btn-primary").click({ force: true });
+ cy.wait(3000);
+ cy.get(".dropdown-menu-right > .user-action > .dropdown-item")
+ .contains("Sales Invoice")
+ .click({ force: true });
+ cy.wait(3000);
+ cy.get(".modal-content > .modal-footer > .standard-actions")
+ .contains("Yes")
+ .click({ force: true });
+ cy.contains("Creation of Sales Invoice successful");
+ });
+});
diff --git a/erpnext/accounts/doctype/purchase_invoice/purchase_invoice_list.js b/erpnext/accounts/doctype/purchase_invoice/purchase_invoice_list.js
index f6ff83add8c5..82d00308db45 100644
--- a/erpnext/accounts/doctype/purchase_invoice/purchase_invoice_list.js
+++ b/erpnext/accounts/doctype/purchase_invoice/purchase_invoice_list.js
@@ -56,4 +56,14 @@ frappe.listview_settings["Purchase Invoice"] = {
];
}
},
+
+ onload: function(listview) {
+ listview.page.add_action_item(__("Purchase Receipt"), ()=>{
+ erpnext.bulk_transaction_processing.create(listview, "Purchase Invoice", "Purchase Receipt");
+ });
+
+ listview.page.add_action_item(__("Payment"), ()=>{
+ erpnext.bulk_transaction_processing.create(listview, "Purchase Invoice", "Payment");
+ });
+ }
};
diff --git a/erpnext/accounts/doctype/sales_invoice/sales_invoice_list.js b/erpnext/accounts/doctype/sales_invoice/sales_invoice_list.js
index 06e6f5118397..1130284ecc5a 100644
--- a/erpnext/accounts/doctype/sales_invoice/sales_invoice_list.js
+++ b/erpnext/accounts/doctype/sales_invoice/sales_invoice_list.js
@@ -21,5 +21,15 @@ frappe.listview_settings['Sales Invoice'] = {
};
return [__(doc.status), status_colors[doc.status], "status,=,"+doc.status];
},
- right_column: "grand_total"
+ right_column: "grand_total",
+
+ onload: function(listview) {
+ listview.page.add_action_item(__("Delivery Note"), ()=>{
+ erpnext.bulk_transaction_processing.create(listview, "Sales Invoice", "Delivery Note");
+ });
+
+ listview.page.add_action_item(__("Payment"), ()=>{
+ erpnext.bulk_transaction_processing.create(listview, "Sales Invoice", "Payment");
+ });
+ }
};
diff --git a/erpnext/bulk_transaction/__init__.py b/erpnext/bulk_transaction/__init__.py
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/erpnext/bulk_transaction/doctype/__init__.py b/erpnext/bulk_transaction/doctype/__init__.py
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/erpnext/bulk_transaction/doctype/bulk_transaction_log/__init__.py b/erpnext/bulk_transaction/doctype/bulk_transaction_log/__init__.py
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/erpnext/bulk_transaction/doctype/bulk_transaction_log/bulk_transaction_log.js b/erpnext/bulk_transaction/doctype/bulk_transaction_log/bulk_transaction_log.js
new file mode 100644
index 000000000000..a739cc373065
--- /dev/null
+++ b/erpnext/bulk_transaction/doctype/bulk_transaction_log/bulk_transaction_log.js
@@ -0,0 +1,34 @@
+// Copyright (c) 2021, Frappe Technologies Pvt. Ltd. and contributors
+// For license information, please see license.txt
+
+frappe.ui.form.on('Bulk Transaction Log', {
+
+ before_load: function(frm) {
+ query(frm);
+ },
+
+ refresh: function(frm) {
+ frm.disable_save();
+ frm.add_custom_button(__('Retry Failed Transactions'), ()=>{
+ frappe.confirm(__("Retry Failing Transactions ?"), ()=>{
+ query(frm);
+ }
+ );
+ });
+ }
+});
+
+function query(frm) {
+ frappe.call({
+ method: "erpnext.bulk_transaction.doctype.bulk_transaction_log.bulk_transaction_log.retry_failing_transaction",
+ args: {
+ log_date: frm.doc.log_date
+ }
+ }).then((r) => {
+ if (r.message) {
+ frm.remove_custom_button("Retry Failed Transactions");
+ } else {
+ frappe.show_alert(__("Retrying Failed Transactions"), 5);
+ }
+ });
+}
\ No newline at end of file
diff --git a/erpnext/bulk_transaction/doctype/bulk_transaction_log/bulk_transaction_log.json b/erpnext/bulk_transaction/doctype/bulk_transaction_log/bulk_transaction_log.json
new file mode 100644
index 000000000000..da42cf1bd4bd
--- /dev/null
+++ b/erpnext/bulk_transaction/doctype/bulk_transaction_log/bulk_transaction_log.json
@@ -0,0 +1,51 @@
+{
+ "actions": [],
+ "allow_rename": 1,
+ "creation": "2021-11-30 13:41:16.343827",
+ "doctype": "DocType",
+ "editable_grid": 1,
+ "engine": "InnoDB",
+ "field_order": [
+ "log_date",
+ "logger_data"
+ ],
+ "fields": [
+ {
+ "fieldname": "log_date",
+ "fieldtype": "Date",
+ "label": "Log Date",
+ "read_only": 1
+ },
+ {
+ "fieldname": "logger_data",
+ "fieldtype": "Table",
+ "label": "Logger Data",
+ "options": "Bulk Transaction Log Detail"
+ }
+ ],
+ "index_web_pages_for_search": 1,
+ "links": [],
+ "modified": "2022-02-03 17:23:02.935325",
+ "modified_by": "Administrator",
+ "module": "Bulk Transaction",
+ "name": "Bulk Transaction Log",
+ "owner": "Administrator",
+ "permissions": [
+ {
+ "create": 1,
+ "delete": 1,
+ "email": 1,
+ "export": 1,
+ "print": 1,
+ "read": 1,
+ "report": 1,
+ "role": "System Manager",
+ "share": 1,
+ "write": 1
+ }
+ ],
+ "sort_field": "modified",
+ "sort_order": "DESC",
+ "states": [],
+ "track_changes": 1
+}
\ No newline at end of file
diff --git a/erpnext/bulk_transaction/doctype/bulk_transaction_log/bulk_transaction_log.py b/erpnext/bulk_transaction/doctype/bulk_transaction_log/bulk_transaction_log.py
new file mode 100644
index 000000000000..de7cde5a6d37
--- /dev/null
+++ b/erpnext/bulk_transaction/doctype/bulk_transaction_log/bulk_transaction_log.py
@@ -0,0 +1,66 @@
+# Copyright (c) 2021, Frappe Technologies Pvt. Ltd. and contributors
+# For license information, please see license.txt
+
+from datetime import date
+
+import frappe
+from frappe.model.document import Document
+
+from erpnext.utilities.bulk_transaction import task, update_logger
+
+
+class BulkTransactionLog(Document):
+ pass
+
+
[email protected]()
+def retry_failing_transaction(log_date=None):
+ btp = frappe.qb.DocType("Bulk Transaction Log Detail")
+ data = (
+ frappe.qb.from_(btp)
+ .select(btp.transaction_name, btp.from_doctype, btp.to_doctype)
+ .distinct()
+ .where(btp.retried != 1)
+ .where(btp.transaction_status == "Failed")
+ .where(btp.date == log_date)
+ ).run(as_dict=True)
+
+ if data:
+ if not log_date:
+ log_date = str(date.today())
+ if len(data) > 10:
+ frappe.enqueue(job, queue="long", job_name="bulk_retry", data=data, log_date=log_date)
+ else:
+ job(data, log_date)
+ else:
+ return "No Failed Records"
+
+def job(data, log_date):
+ for d in data:
+ failed = []
+ try:
+ frappe.db.savepoint("before_creation_of_record")
+ task(d.transaction_name, d.from_doctype, d.to_doctype)
+ except Exception as e:
+ frappe.db.rollback(save_point="before_creation_of_record")
+ failed.append(e)
+ update_logger(
+ d.transaction_name,
+ e,
+ d.from_doctype,
+ d.to_doctype,
+ status="Failed",
+ log_date=log_date,
+ restarted=1
+ )
+
+ if not failed:
+ update_logger(
+ d.transaction_name,
+ None,
+ d.from_doctype,
+ d.to_doctype,
+ status="Success",
+ log_date=log_date,
+ restarted=1,
+ )
diff --git a/erpnext/bulk_transaction/doctype/bulk_transaction_log/test_bulk_transaction_log.py b/erpnext/bulk_transaction/doctype/bulk_transaction_log/test_bulk_transaction_log.py
new file mode 100644
index 000000000000..a78e697b6f9a
--- /dev/null
+++ b/erpnext/bulk_transaction/doctype/bulk_transaction_log/test_bulk_transaction_log.py
@@ -0,0 +1,81 @@
+# Copyright (c) 2021, Frappe Technologies Pvt. Ltd. and Contributors
+# See license.txt
+
+import unittest
+from datetime import date
+
+import frappe
+
+from erpnext.utilities.bulk_transaction import transaction_processing
+
+
+class TestBulkTransactionLog(unittest.TestCase):
+
+ def setUp(self):
+ create_company()
+ create_customer()
+ create_item()
+
+ def test_for_single_record(self):
+ so_name = create_so()
+ transaction_processing([{"name": so_name}], "Sales Order", "Sales Invoice")
+ data = frappe.db.get_list("Sales Invoice", filters = {"posting_date": date.today(), "customer": "Bulk Customer"}, fields=["*"])
+ if not data:
+ self.fail("No Sales Invoice Created !")
+
+ def test_entry_in_log(self):
+ so_name = create_so()
+ transaction_processing([{"name": so_name}], "Sales Order", "Sales Invoice")
+ doc = frappe.get_doc("Bulk Transaction Log", str(date.today()))
+ for d in doc.get("logger_data"):
+ if d.transaction_name == so_name:
+ self.assertEqual(d.transaction_name, so_name)
+ self.assertEqual(d.transaction_status, "Success")
+ self.assertEqual(d.from_doctype, "Sales Order")
+ self.assertEqual(d.to_doctype, "Sales Invoice")
+ self.assertEqual(d.retried, 0)
+
+
+
+def create_company():
+ if not frappe.db.exists('Company', '_Test Company'):
+ frappe.get_doc({
+ 'doctype': 'Company',
+ 'company_name': '_Test Company',
+ 'country': 'India',
+ 'default_currency': 'INR'
+ }).insert()
+
+def create_customer():
+ if not frappe.db.exists('Customer', 'Bulk Customer'):
+ frappe.get_doc({
+ 'doctype': 'Customer',
+ 'customer_name': 'Bulk Customer'
+ }).insert()
+
+def create_item():
+ if not frappe.db.exists("Item", "MK"):
+ frappe.get_doc({
+ "doctype": "Item",
+ "item_code": "MK",
+ "item_name": "Milk",
+ "description": "Milk",
+ "item_group": "Products"
+ }).insert()
+
+def create_so(intent=None):
+ so = frappe.new_doc("Sales Order")
+ so.customer = "Bulk Customer"
+ so.company = "_Test Company"
+ so.transaction_date = date.today()
+
+ so.set_warehouse = "Finished Goods - _TC"
+ so.append("items", {
+ "item_code": "MK",
+ "delivery_date": date.today(),
+ "qty": 10,
+ "rate": 80,
+ })
+ so.insert()
+ so.submit()
+ return so.name
\ No newline at end of file
diff --git a/erpnext/bulk_transaction/doctype/bulk_transaction_log_detail/__init__.py b/erpnext/bulk_transaction/doctype/bulk_transaction_log_detail/__init__.py
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/erpnext/bulk_transaction/doctype/bulk_transaction_log_detail/bulk_transaction_log_detail.json b/erpnext/bulk_transaction/doctype/bulk_transaction_log_detail/bulk_transaction_log_detail.json
new file mode 100644
index 000000000000..8262caa0209a
--- /dev/null
+++ b/erpnext/bulk_transaction/doctype/bulk_transaction_log_detail/bulk_transaction_log_detail.json
@@ -0,0 +1,86 @@
+{
+ "actions": [],
+ "allow_rename": 1,
+ "creation": "2021-11-30 13:38:30.926047",
+ "doctype": "DocType",
+ "editable_grid": 1,
+ "engine": "InnoDB",
+ "field_order": [
+ "transaction_name",
+ "date",
+ "time",
+ "transaction_status",
+ "error_description",
+ "from_doctype",
+ "to_doctype",
+ "retried"
+ ],
+ "fields": [
+ {
+ "fieldname": "transaction_name",
+ "fieldtype": "Dynamic Link",
+ "in_list_view": 1,
+ "label": "Name",
+ "options": "from_doctype"
+ },
+ {
+ "fieldname": "transaction_status",
+ "fieldtype": "Data",
+ "in_list_view": 1,
+ "label": "Status",
+ "read_only": 1
+ },
+ {
+ "fieldname": "error_description",
+ "fieldtype": "Long Text",
+ "label": "Error Description",
+ "read_only": 1
+ },
+ {
+ "fieldname": "from_doctype",
+ "fieldtype": "Link",
+ "label": "From Doctype",
+ "options": "DocType",
+ "read_only": 1
+ },
+ {
+ "fieldname": "to_doctype",
+ "fieldtype": "Link",
+ "label": "To Doctype",
+ "options": "DocType",
+ "read_only": 1
+ },
+ {
+ "fieldname": "date",
+ "fieldtype": "Date",
+ "in_list_view": 1,
+ "label": "Date ",
+ "read_only": 1
+ },
+ {
+ "fieldname": "time",
+ "fieldtype": "Time",
+ "label": "Time",
+ "read_only": 1
+ },
+ {
+ "fieldname": "retried",
+ "fieldtype": "Int",
+ "label": "Retried",
+ "read_only": 1
+ }
+ ],
+ "index_web_pages_for_search": 1,
+ "istable": 1,
+ "links": [],
+ "modified": "2022-02-03 19:57:31.650359",
+ "modified_by": "Administrator",
+ "module": "Bulk Transaction",
+ "name": "Bulk Transaction Log Detail",
+ "owner": "Administrator",
+ "permissions": [],
+ "sort_field": "modified",
+ "sort_order": "DESC",
+ "states": [],
+ "track_changes": 1
+}
\ No newline at end of file
diff --git a/erpnext/bulk_transaction/doctype/bulk_transaction_log_detail/bulk_transaction_log_detail.py b/erpnext/bulk_transaction/doctype/bulk_transaction_log_detail/bulk_transaction_log_detail.py
new file mode 100644
index 000000000000..67795b9d4901
--- /dev/null
+++ b/erpnext/bulk_transaction/doctype/bulk_transaction_log_detail/bulk_transaction_log_detail.py
@@ -0,0 +1,9 @@
+# Copyright (c) 2021, Frappe Technologies Pvt. Ltd. and contributors
+# For license information, please see license.txt
+
+# import frappe
+from frappe.model.document import Document
+
+
+class BulkTransactionLogDetail(Document):
+ pass
diff --git a/erpnext/buying/doctype/purchase_order/purchase_order_list.js b/erpnext/buying/doctype/purchase_order/purchase_order_list.js
index 8413eb65c3f1..d7907e4274b4 100644
--- a/erpnext/buying/doctype/purchase_order/purchase_order_list.js
+++ b/erpnext/buying/doctype/purchase_order/purchase_order_list.js
@@ -29,8 +29,22 @@ frappe.listview_settings['Purchase Order'] = {
listview.call_for_selected_items(method, { "status": "Closed" });
});
- listview.page.add_menu_item(__("Re-open"), function () {
+ listview.page.add_menu_item(__("Reopen"), function () {
listview.call_for_selected_items(method, { "status": "Submitted" });
});
+
+
+ listview.page.add_action_item(__("Purchase Invoice"), ()=>{
+ erpnext.bulk_transaction_processing.create(listview, "Purchase Order", "Purchase Invoice");
+ });
+
+ listview.page.add_action_item(__("Purchase Receipt"), ()=>{
+ erpnext.bulk_transaction_processing.create(listview, "Purchase Order", "Purchase Receipt");
+ });
+
+ listview.page.add_action_item(__("Advance Payment"), ()=>{
+ erpnext.bulk_transaction_processing.create(listview, "Purchase Order", "Advance Payment");
+ });
+
}
};
diff --git a/erpnext/buying/doctype/supplier_quotation/supplier_quotation.py b/erpnext/buying/doctype/supplier_quotation/supplier_quotation.py
index d65ab94a6d3a..171de7882dce 100644
--- a/erpnext/buying/doctype/supplier_quotation/supplier_quotation.py
+++ b/erpnext/buying/doctype/supplier_quotation/supplier_quotation.py
@@ -142,6 +142,26 @@ def update_item(obj, target, source_parent):
return doclist
[email protected]()
+def make_purchase_invoice(source_name, target_doc=None):
+ doc = get_mapped_doc("Supplier Quotation", source_name, {
+ "Supplier Quotation": {
+ "doctype": "Purchase Invoice",
+ "validation": {
+ "docstatus": ["=", 1],
+ }
+ },
+ "Supplier Quotation Item": {
+ "doctype": "Purchase Invoice Item"
+ },
+ "Purchase Taxes and Charges": {
+ "doctype": "Purchase Taxes and Charges"
+ }
+ }, target_doc)
+
+ return doc
+
+
@frappe.whitelist()
def make_quotation(source_name, target_doc=None):
doclist = get_mapped_doc("Supplier Quotation", source_name, {
diff --git a/erpnext/buying/doctype/supplier_quotation/supplier_quotation_list.js b/erpnext/buying/doctype/supplier_quotation/supplier_quotation_list.js
index 5ab6c980d00e..73685caa0b44 100644
--- a/erpnext/buying/doctype/supplier_quotation/supplier_quotation_list.js
+++ b/erpnext/buying/doctype/supplier_quotation/supplier_quotation_list.js
@@ -8,5 +8,15 @@ frappe.listview_settings['Supplier Quotation'] = {
} else if(doc.status==="Expired") {
return [__("Expired"), "gray", "status,=,Expired"];
}
+ },
+
+ onload: function(listview) {
+ listview.page.add_action_item(__("Purchase Order"), ()=>{
+ erpnext.bulk_transaction_processing.create(listview, "Supplier Quotation", "Purchase Order");
+ });
+
+ listview.page.add_action_item(__("Purchase Invoice"), ()=>{
+ erpnext.bulk_transaction_processing.create(listview, "Supplier Quotation", "Purchase Invoice");
+ });
}
};
diff --git a/erpnext/hooks.py b/erpnext/hooks.py
index 0e290384b4c1..d99f23ed64e7 100644
--- a/erpnext/hooks.py
+++ b/erpnext/hooks.py
@@ -341,7 +341,8 @@
"erpnext.hr.doctype.shift_type.shift_type.process_auto_attendance_for_all_shifts"
],
"hourly_long": [
- "erpnext.stock.doctype.repost_item_valuation.repost_item_valuation.repost_entries"
+ "erpnext.stock.doctype.repost_item_valuation.repost_item_valuation.repost_entries",
+ "erpnext.bulk_transaction.doctype.bulk_transaction_log.bulk_transaction_log.retry_failing_transaction"
],
"daily": [
"erpnext.stock.reorder_item.reorder_item",
diff --git a/erpnext/modules.txt b/erpnext/modules.txt
index c5705c176369..8c79ee5c9a83 100644
--- a/erpnext/modules.txt
+++ b/erpnext/modules.txt
@@ -21,4 +21,5 @@ Communication
Loan Management
Payroll
Telephony
+Bulk Transaction
E-commerce
diff --git a/erpnext/public/build.json b/erpnext/public/build.json
index 569910dd9df2..91a752c291d4 100644
--- a/erpnext/public/build.json
+++ b/erpnext/public/build.json
@@ -39,7 +39,8 @@
"public/js/utils/dimension_tree_filter.js",
"public/js/telephony.js",
"public/js/templates/call_link.html",
- "public/js/templates/node_card.html"
+ "public/js/templates/node_card.html",
+ "public/js/bulk_transaction_processing.js"
],
"js/item-dashboard.min.js": [
"stock/dashboard/item_dashboard.html",
diff --git a/erpnext/public/js/bulk_transaction_processing.js b/erpnext/public/js/bulk_transaction_processing.js
new file mode 100644
index 000000000000..101f50c64aaf
--- /dev/null
+++ b/erpnext/public/js/bulk_transaction_processing.js
@@ -0,0 +1,30 @@
+frappe.provide("erpnext.bulk_transaction_processing");
+
+$.extend(erpnext.bulk_transaction_processing, {
+ create: function(listview, from_doctype, to_doctype) {
+ let checked_items = listview.get_checked_items();
+ const doc_name = [];
+ checked_items.forEach((Item)=> {
+ if (Item.docstatus == 0) {
+ doc_name.push(Item.name);
+ }
+ });
+
+ let count_of_rows = checked_items.length;
+ frappe.confirm(__("Create {0} {1} ?", [count_of_rows, to_doctype]), ()=>{
+ if (doc_name.length == 0) {
+ frappe.call({
+ method: "erpnext.utilities.bulk_transaction.transaction_processing",
+ args: {data: checked_items, from_doctype: from_doctype, to_doctype: to_doctype}
+ }).then(()=> {
+
+ });
+ if (count_of_rows > 10) {
+ frappe.show_alert("Starting a background job to create {0} {1}", [count_of_rows, to_doctype]);
+ }
+ } else {
+ frappe.msgprint(__("Selected document must be in submitted state"));
+ }
+ });
+ }
+});
\ No newline at end of file
diff --git a/erpnext/public/js/erpnext.bundle.js b/erpnext/public/js/erpnext.bundle.js
index 5259bdcc765e..b3a68b386295 100644
--- a/erpnext/public/js/erpnext.bundle.js
+++ b/erpnext/public/js/erpnext.bundle.js
@@ -22,5 +22,6 @@ import "./call_popup/call_popup";
import "./utils/dimension_tree_filter";
import "./telephony";
import "./templates/call_link.html";
+import "./bulk_transaction_processing";
// import { sum } from 'frappe/public/utils/util.js'
diff --git a/erpnext/selling/doctype/quotation/quotation_list.js b/erpnext/selling/doctype/quotation/quotation_list.js
index b631685bd19b..4c8f9c4f84c7 100644
--- a/erpnext/selling/doctype/quotation/quotation_list.js
+++ b/erpnext/selling/doctype/quotation/quotation_list.js
@@ -12,6 +12,14 @@ frappe.listview_settings['Quotation'] = {
};
};
}
+
+ listview.page.add_action_item(__("Sales Order"), ()=>{
+ erpnext.bulk_transaction_processing.create(listview, "Quotation", "Sales Order");
+ });
+
+ listview.page.add_action_item(__("Sales Invoice"), ()=>{
+ erpnext.bulk_transaction_processing.create(listview, "Quotation", "Sales Invoice");
+ });
},
get_indicator: function(doc) {
diff --git a/erpnext/selling/doctype/sales_order/sales_order_list.js b/erpnext/selling/doctype/sales_order/sales_order_list.js
index 26d96d59f299..4691190d2a54 100644
--- a/erpnext/selling/doctype/sales_order/sales_order_list.js
+++ b/erpnext/selling/doctype/sales_order/sales_order_list.js
@@ -16,7 +16,7 @@ frappe.listview_settings['Sales Order'] = {
return [__("Overdue"), "red",
"per_delivered,<,100|delivery_date,<,Today|status,!=,Closed"];
} else if (flt(doc.grand_total) === 0) {
- // not delivered (zero-amount order)
+ // not delivered (zeroount order)
return [__("To Deliver"), "orange",
"per_delivered,<,100|grand_total,=,0|status,!=,Closed"];
} else if (flt(doc.per_billed, 6) < 100) {
@@ -48,5 +48,17 @@ frappe.listview_settings['Sales Order'] = {
listview.call_for_selected_items(method, {"status": "Submitted"});
});
+ listview.page.add_action_item(__("Sales Invoice"), ()=>{
+ erpnext.bulk_transaction_processing.create(listview, "Sales Order", "Sales Invoice");
+ });
+
+ listview.page.add_action_item(__("Delivery Note"), ()=>{
+ erpnext.bulk_transaction_processing.create(listview, "Sales Order", "Delivery Note");
+ });
+
+ listview.page.add_action_item(__("Advance Payment"), ()=>{
+ erpnext.bulk_transaction_processing.create(listview, "Sales Order", "Advance Payment");
+ });
+
}
};
diff --git a/erpnext/stock/doctype/delivery_note/delivery_note.py b/erpnext/stock/doctype/delivery_note/delivery_note.py
index d1e22440b962..a5a9396a7fa4 100644
--- a/erpnext/stock/doctype/delivery_note/delivery_note.py
+++ b/erpnext/stock/doctype/delivery_note/delivery_note.py
@@ -586,7 +586,18 @@ def make_packing_slip(source_name, target_doc=None):
"validation": {
"docstatus": ["=", 0]
}
+ },
+
+ "Delivery Note Item": {
+ "doctype": "Packing Slip Item",
+ "field_map": {
+ "item_code": "item_code",
+ "item_name": "item_name",
+ "description": "description",
+ "qty": "qty",
+ }
}
+
}, target_doc)
return doclist
diff --git a/erpnext/stock/doctype/delivery_note/delivery_note_list.js b/erpnext/stock/doctype/delivery_note/delivery_note_list.js
index 040289804733..9e6f3bc93217 100644
--- a/erpnext/stock/doctype/delivery_note/delivery_note_list.js
+++ b/erpnext/stock/doctype/delivery_note/delivery_note_list.js
@@ -14,7 +14,7 @@ frappe.listview_settings['Delivery Note'] = {
return [__("Completed"), "green", "per_billed,=,100"];
}
},
- onload: function (doclist) {
+ onload: function (listview) {
const action = () => {
const selected_docs = doclist.get_checked_items();
const docnames = doclist.get_checked_items(true);
@@ -54,6 +54,16 @@ frappe.listview_settings['Delivery Note'] = {
};
};
- doclist.page.add_actions_menu_item(__('Create Delivery Trip'), action, false);
+ // doclist.page.add_actions_menu_item(__('Create Delivery Trip'), action, false);
+
+ listview.page.add_action_item(__('Create Delivery Trip'), action);
+
+ listview.page.add_action_item(__("Sales Invoice"), ()=>{
+ erpnext.bulk_transaction_processing.create(listview, "Delivery Note", "Sales Invoice");
+ });
+
+ listview.page.add_action_item(__("Packaging Slip From Delivery Note"), ()=>{
+ erpnext.bulk_transaction_processing.create(listview, "Delivery Note", "Packing Slip");
+ });
}
};
diff --git a/erpnext/stock/doctype/purchase_receipt/purchase_receipt_list.js b/erpnext/stock/doctype/purchase_receipt/purchase_receipt_list.js
index 77711de93f7e..4029f0c127b7 100644
--- a/erpnext/stock/doctype/purchase_receipt/purchase_receipt_list.js
+++ b/erpnext/stock/doctype/purchase_receipt/purchase_receipt_list.js
@@ -13,5 +13,13 @@ frappe.listview_settings['Purchase Receipt'] = {
} else if (flt(doc.grand_total) === 0 || flt(doc.per_billed, 2) === 100) {
return [__("Completed"), "green", "per_billed,=,100"];
}
+ },
+
+ onload: function(listview) {
+
+ listview.page.add_action_item(__("Purchase Invoice"), ()=>{
+ erpnext.bulk_transaction_processing.create(listview, "Purchase Receipt", "Purchase Invoice");
+ });
}
+
};
diff --git a/erpnext/tests/ui_test_bulk_transaction_processing.py b/erpnext/tests/ui_test_bulk_transaction_processing.py
new file mode 100644
index 000000000000..d78689eb5b33
--- /dev/null
+++ b/erpnext/tests/ui_test_bulk_transaction_processing.py
@@ -0,0 +1,21 @@
+import frappe
+
+from erpnext.bulk_transaction.doctype.bulk_transaction_logger.test_bulk_transaction_logger import (
+ create_company,
+ create_customer,
+ create_item,
+ create_so,
+)
+
+
[email protected]()
+def create_records():
+ create_company()
+ create_customer()
+ create_item()
+
+ gd = frappe.get_doc("Global Defaults")
+ gd.set("default_company", "Test Bulk")
+ gd.save()
+ frappe.clear_cache()
+ create_so()
\ No newline at end of file
diff --git a/erpnext/utilities/bulk_transaction.py b/erpnext/utilities/bulk_transaction.py
new file mode 100644
index 000000000000..64e2ff421845
--- /dev/null
+++ b/erpnext/utilities/bulk_transaction.py
@@ -0,0 +1,201 @@
+import json
+from datetime import date, datetime
+
+import frappe
+from frappe import _
+
+
[email protected]()
+def transaction_processing(data, from_doctype, to_doctype):
+ if isinstance(data, str):
+ deserialized_data = json.loads(data)
+
+ else:
+ deserialized_data = data
+
+ length_of_data = len(deserialized_data)
+
+ if length_of_data > 10:
+ frappe.msgprint(
+ _("Started a background job to create {1} {0}").format(to_doctype, length_of_data)
+ )
+ frappe.enqueue(
+ job,
+ deserialized_data=deserialized_data,
+ from_doctype=from_doctype,
+ to_doctype=to_doctype,
+ )
+ else:
+ job(deserialized_data, from_doctype, to_doctype)
+
+
+def job(deserialized_data, from_doctype, to_doctype):
+ failed_history = []
+ i = 0
+ for d in deserialized_data:
+ failed = []
+
+ try:
+ i += 1
+ doc_name = d.get("name")
+ frappe.db.savepoint("before_creation_state")
+ task(doc_name, from_doctype, to_doctype)
+
+ except Exception as e:
+ frappe.db.rollback(save_point="before_creation_state")
+ failed_history.append(e)
+ failed.append(e)
+ update_logger(doc_name, e, from_doctype, to_doctype, status="Failed", log_date=str(date.today()))
+ if not failed:
+ update_logger(doc_name, None, from_doctype, to_doctype, status="Success", log_date=str(date.today()))
+
+ show_job_status(failed_history, deserialized_data, to_doctype)
+
+
+def task(doc_name, from_doctype, to_doctype):
+ from erpnext.accounts.doctype.payment_entry import payment_entry
+ from erpnext.accounts.doctype.purchase_invoice import purchase_invoice
+ from erpnext.accounts.doctype.sales_invoice import sales_invoice
+ from erpnext.buying.doctype.purchase_order import purchase_order
+ from erpnext.buying.doctype.supplier_quotation import supplier_quotation
+ from erpnext.selling.doctype.quotation import quotation
+ from erpnext.selling.doctype.sales_order import sales_order
+ from erpnext.stock.doctype.delivery_note import delivery_note
+ from erpnext.stock.doctype.purchase_receipt import purchase_receipt
+
+ mapper = {
+ "Sales Order": {
+ "Sales Invoice": sales_order.make_sales_invoice,
+ "Delivery Note": sales_order.make_delivery_note,
+ "Advance Payment": payment_entry.get_payment_entry,
+ },
+ "Sales Invoice": {
+ "Delivery Note": sales_invoice.make_delivery_note,
+ "Payment": payment_entry.get_payment_entry,
+ },
+ "Delivery Note": {
+ "Sales Invoice": delivery_note.make_sales_invoice,
+ "Packing Slip": delivery_note.make_packing_slip,
+ },
+ "Quotation": {
+ "Sales Order": quotation.make_sales_order,
+ "Sales Invoice": quotation.make_sales_invoice,
+ },
+ "Supplier Quotation": {
+ "Purchase Order": supplier_quotation.make_purchase_order,
+ "Purchase Invoice": supplier_quotation.make_purchase_invoice,
+ "Advance Payment": payment_entry.get_payment_entry,
+ },
+ "Purchase Order": {
+ "Purchase Invoice": purchase_order.make_purchase_invoice,
+ "Purchase Receipt": purchase_order.make_purchase_receipt,
+ },
+ "Purhcase Invoice": {
+ "Purchase Receipt": purchase_invoice.make_purchase_receipt,
+ "Payment": payment_entry.get_payment_entry,
+ },
+ "Purchase Receipt": {"Purchase Invoice": purchase_receipt.make_purchase_invoice},
+ }
+ if to_doctype in ['Advance Payment', 'Payment']:
+ obj = mapper[from_doctype][to_doctype](from_doctype, doc_name)
+ else:
+ obj = mapper[from_doctype][to_doctype](doc_name)
+
+ obj.flags.ignore_validate = True
+ obj.insert(ignore_mandatory=True)
+
+
+def check_logger_doc_exists(log_date):
+ return frappe.db.exists("Bulk Transaction Log", log_date)
+
+
+def get_logger_doc(log_date):
+ return frappe.get_doc("Bulk Transaction Log", log_date)
+
+
+def create_logger_doc():
+ log_doc = frappe.new_doc("Bulk Transaction Log")
+ log_doc.set_new_name(set_name=str(date.today()))
+ log_doc.log_date = date.today()
+
+ return log_doc
+
+
+def append_data_to_logger(log_doc, doc_name, error, from_doctype, to_doctype, status, restarted):
+ row = log_doc.append("logger_data", {})
+ row.transaction_name = doc_name
+ row.date = date.today()
+ now = datetime.now()
+ row.time = now.strftime("%H:%M:%S")
+ row.transaction_status = status
+ row.error_description = str(error)
+ row.from_doctype = from_doctype
+ row.to_doctype = to_doctype
+ row.retried = restarted
+
+
+def update_logger(doc_name, e, from_doctype, to_doctype, status, log_date=None, restarted=0):
+ if not check_logger_doc_exists(log_date):
+ log_doc = create_logger_doc()
+ append_data_to_logger(log_doc, doc_name, e, from_doctype, to_doctype, status, restarted)
+ log_doc.insert()
+ else:
+ log_doc = get_logger_doc(log_date)
+ if record_exists(log_doc, doc_name, status):
+ append_data_to_logger(
+ log_doc, doc_name, e, from_doctype, to_doctype, status, restarted
+ )
+ log_doc.save()
+
+
+def show_job_status(failed_history, deserialized_data, to_doctype):
+ if not failed_history:
+ frappe.msgprint(
+ _("Creation of {0} successful").format(to_doctype),
+ title="Successful",
+ indicator="green",
+ )
+
+ if len(failed_history) != 0 and len(failed_history) < len(deserialized_data):
+ frappe.msgprint(
+ _("""Creation of {0} partially successful.
+ Check <b><a href="/app/bulk-transaction-log">Bulk Transaction Log</a></b>""").format(
+ to_doctype
+ ),
+ title="Partially successful",
+ indicator="orange",
+ )
+
+ if len(failed_history) == len(deserialized_data):
+ frappe.msgprint(
+ _("""Creation of {0} failed.
+ Check <b><a href="/app/bulk-transaction-log">Bulk Transaction Log</a></b>""").format(
+ to_doctype
+ ),
+ title="Failed",
+ indicator="red",
+ )
+
+
+def record_exists(log_doc, doc_name, status):
+
+ record = mark_retrired_transaction(log_doc, doc_name)
+
+ if record and status == "Failed":
+ return False
+ elif record and status == "Success":
+ return True
+ else:
+ return True
+
+
+def mark_retrired_transaction(log_doc, doc_name):
+ record = 0
+ for d in log_doc.get("logger_data"):
+ if d.transaction_name == doc_name and d.transaction_status == "Failed":
+ d.retried = 1
+ record = record + 1
+
+ log_doc.save()
+
+ return record
\ No newline at end of file
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
}
|
|
frappe__erpnext-24879@a2e4564
|
frappe/erpnext
|
Python
| 24,879
|
fix: Loan Repayment entry cancellation on salary slip cancel
|
Closes: https://github.com/frappe/erpnext/issues/24543
|
2021-03-15T05:13:58Z
|
Canceling and deleting a draft salary slip should reset loan repayments-v13-beta
To replicate:
- Create an employee loan, disburse, then run loan interest accrual.
- Create a salary slip via payroll entry, or manually. The salary slip will pick the loan.
- Now, delete the salary slip.
- On creating the salary slip again, it won't pick the loan repayment.
|
[
{
"body": "To replicate:\r\n\r\n- Create an employee loan, disburse, then run loan interest accrual.\r\n- Create a salary slip via payroll entry, or manually. The salary slip will pick the loan.\r\n- Now, delete the salary slip.\r\n- On creating the salary slip again, it won't pick the load repayment.",
"number": 24543,
"title": "Canceling and deleting a draft salary slip should reset loan repayments-v13-beta"
}
] |
d77ea7c88bd114e8ca14ffc1f590b2febe35ed6b
|
{
"head_commit": "a2e45644644651d3e5d6be41f93d7d3087886ce2",
"head_commit_message": "fix: Prevent non sequential repayment entries",
"patch_to_review": "diff --git a/erpnext/loan_management/doctype/loan_repayment/loan_repayment.py b/erpnext/loan_management/doctype/loan_repayment/loan_repayment.py\nindex bac06c4e9e63..72e3050b54b7 100644\n--- a/erpnext/loan_management/doctype/loan_repayment/loan_repayment.py\n+++ b/erpnext/loan_management/doctype/loan_repayment/loan_repayment.py\n@@ -21,6 +21,7 @@ class LoanRepayment(AccountsController):\n \tdef validate(self):\n \t\tamounts = calculate_amounts(self.against_loan, self.posting_date)\n \t\tself.set_missing_values(amounts)\n+\t\tself.check_future_entries()\n \t\tself.validate_amount()\n \t\tself.allocate_amounts(amounts)\n \n@@ -30,6 +31,7 @@ def before_submit(self):\n \tdef on_submit(self):\n \t\tself.update_paid_amount()\n \t\tself.make_gl_entries()\n+\t\t#self.repost_future_loan_interest_accruals()\n \n \tdef on_cancel(self):\n \t\tself.mark_as_unpaid()\n@@ -63,6 +65,13 @@ def set_missing_values(self, amounts):\n \t\tif amounts.get('due_date'):\n \t\t\tself.due_date = amounts.get('due_date')\n \n+\tdef check_future_entries(self):\n+\t\tfuture_repayment_date = frappe.db.get_value(\"Loan Repayment\", {\"posting_date\": (\">\", self.posting_date),\n+\t\t\t\"docstatus\": 1}, 'posting_date')\n+\n+\t\tif future_repayment_date:\n+\t\t\tfrappe.throw(\"Repayment already made till date {0}\".format(getdate(future_repayment_date)))\n+\n \tdef validate_amount(self):\n \t\tprecision = cint(frappe.db.get_default(\"currency_precision\")) or 2\n \n@@ -265,6 +274,10 @@ def make_gl_entries(self, cancel=0, adv_adj=0):\n \t\tif gle_map:\n \t\t\tmake_gl_entries(gle_map, cancel=cancel, adv_adj=adv_adj, merge_entries=False)\n \n+\t# def repost_future_loan_interest_accruals(self):\n+\t# \tfuture_lias = frappe.db.get_all(\"Loan Interest Accrual\", {\"docstatus\": 1, \"posting_date\": (\">\", self.posting_date)})\n+\t# \tif future_lias:\n+\n def create_repayment_entry(loan, applicant, company, posting_date, loan_type,\n \tpayment_type, interest_payable, payable_principal_amount, amount_paid, penalty_amount=None):\n \n@@ -284,8 +297,7 @@ def create_repayment_entry(loan, applicant, company, posting_date, loan_type,\n \n \treturn lr\n \n-def get_accrued_interest_entries(against_loan):\n-\n+def get_accrued_interest_entries(against_loan, posting_date):\n \tunpaid_accrued_entries = frappe.db.sql(\n \t\t\"\"\"\n \t\t\tSELECT name, posting_date, interest_amount - paid_interest_amount as interest_amount,\n@@ -295,12 +307,13 @@ def get_accrued_interest_entries(against_loan):\n \t\t\t\t`tabLoan Interest Accrual`\n \t\t\tWHERE\n \t\t\t\tloan = %s\n+\t\t\tAND posting_date <= %s\n \t\t\tAND (interest_amount - paid_interest_amount > 0 OR\n \t\t\t\tpayable_principal_amount - paid_principal_amount > 0)\n \t\t\tAND\n \t\t\t\tdocstatus = 1\n \t\t\tORDER BY posting_date\n-\t\t\"\"\", (against_loan), as_dict=1)\n+\t\t\"\"\", (against_loan, posting_date), as_dict=1)\n \n \treturn unpaid_accrued_entries\n \ndiff --git a/erpnext/loan_management/doctype/salary_slip_loan/salary_slip_loan.json b/erpnext/loan_management/doctype/salary_slip_loan/salary_slip_loan.json\nindex 2f4fe2494564..3d0708121523 100644\n--- a/erpnext/loan_management/doctype/salary_slip_loan/salary_slip_loan.json\n+++ b/erpnext/loan_management/doctype/salary_slip_loan/salary_slip_loan.json\n@@ -70,7 +70,9 @@\n {\n \"fieldname\": \"loan_repayment_entry\",\n \"fieldtype\": \"Link\",\n+ \"hidden\": 1,\n \"label\": \"Loan Repayment Entry\",\n+ \"no_copy\": 1,\n \"options\": \"Loan Repayment\",\n \"read_only\": 1\n },\n@@ -83,9 +85,10 @@\n \"read_only\": 1\n 
}\n ],\n+ \"index_web_pages_for_search\": 1,\n \"istable\": 1,\n \"links\": [],\n- \"modified\": \"2020-04-16 13:17:04.798335\",\n+ \"modified\": \"2021-03-14 20:47:11.725818\",\n \"modified_by\": \"Administrator\",\n \"module\": \"Loan Management\",\n \"name\": \"Salary Slip Loan\",\ndiff --git a/erpnext/payroll/doctype/salary_slip/salary_slip.py b/erpnext/payroll/doctype/salary_slip/salary_slip.py\nindex 595d6974fd59..a7e53cc5aaf0 100644\n--- a/erpnext/payroll/doctype/salary_slip/salary_slip.py\n+++ b/erpnext/payroll/doctype/salary_slip/salary_slip.py\n@@ -1050,7 +1050,7 @@ def make_loan_repayment_entry(self):\n \t\t\trepayment_entry.save()\n \t\t\trepayment_entry.submit()\n \n-\t\t\tloan.loan_repayment_entry = repayment_entry.name\n+\t\t\tfrappe.db.set_value(\"Salary Slip Loan\", loan.name, \"loan_repayment_entry\", repayment_entry.name)\n \n \tdef cancel_loan_repayment_entry(self):\n \t\tfor loan in self.loans:\n"
}
|
[
{
"diff_hunk": "@@ -30,6 +31,7 @@ def before_submit(self):\n \tdef on_submit(self):\n \t\tself.update_paid_amount()\n \t\tself.make_gl_entries()\n+\t\t#self.repost_future_loan_interest_accruals()",
"line": null,
"original_line": 34,
"original_start_line": null,
"path": "erpnext/loan_management/doctype/loan_repayment/loan_repayment.py",
"start_line": null,
"text": "@user1:\nCommented code."
},
{
"diff_hunk": "@@ -265,6 +274,10 @@ def make_gl_entries(self, cancel=0, adv_adj=0):\n \t\tif gle_map:\n \t\t\tmake_gl_entries(gle_map, cancel=cancel, adv_adj=adv_adj, merge_entries=False)\n \n+\t# def repost_future_loan_interest_accruals(self):\n+\t# \tfuture_lias = frappe.db.get_all(\"Loan Interest Accrual\", {\"docstatus\": 1, \"posting_date\": (\">\", self.posting_date)})\n+\t# \tif future_lias:\n+",
"line": null,
"original_line": 280,
"original_start_line": 277,
"path": "erpnext/loan_management/doctype/loan_repayment/loan_repayment.py",
"start_line": null,
"text": "@user1:\nPlease remove commented code.\n\n@author:\nI just added this to check for test cases, this PR is not complete yet, will uncomment and complete those functions. Added a dont-merge lable"
}
] |
5af6aea9f9a94e8b77372002ddcd5180603ee079
|
diff --git a/erpnext/loan_management/doctype/loan_repayment/loan_repayment.py b/erpnext/loan_management/doctype/loan_repayment/loan_repayment.py
index a88e183eadae..5d57cedb4198 100644
--- a/erpnext/loan_management/doctype/loan_repayment/loan_repayment.py
+++ b/erpnext/loan_management/doctype/loan_repayment/loan_repayment.py
@@ -21,6 +21,7 @@ class LoanRepayment(AccountsController):
def validate(self):
amounts = calculate_amounts(self.against_loan, self.posting_date)
self.set_missing_values(amounts)
+ self.check_future_entries()
self.validate_amount()
self.allocate_amounts(amounts)
@@ -69,6 +70,13 @@ def set_missing_values(self, amounts):
if amounts.get('due_date'):
self.due_date = amounts.get('due_date')
+ def check_future_entries(self):
+ future_repayment_date = frappe.db.get_value("Loan Repayment", {"posting_date": (">", self.posting_date),
+ "docstatus": 1, "against_loan": self.against_loan}, 'posting_date')
+
+ if future_repayment_date:
+ frappe.throw("Repayment already made till date {0}".format(getdate(future_repayment_date)))
+
def validate_amount(self):
precision = cint(frappe.db.get_default("currency_precision")) or 2
@@ -307,7 +315,9 @@ def create_repayment_entry(loan, applicant, company, posting_date, loan_type,
return lr
-def get_accrued_interest_entries(against_loan):
+def get_accrued_interest_entries(against_loan, posting_date=None):
+ if not posting_date:
+ posting_date = getdate()
unpaid_accrued_entries = frappe.db.sql(
"""
@@ -318,12 +328,13 @@ def get_accrued_interest_entries(against_loan):
`tabLoan Interest Accrual`
WHERE
loan = %s
+ AND posting_date <= %s
AND (interest_amount - paid_interest_amount > 0 OR
payable_principal_amount - paid_principal_amount > 0)
AND
docstatus = 1
ORDER BY posting_date
- """, (against_loan), as_dict=1)
+ """, (against_loan, posting_date), as_dict=1)
return unpaid_accrued_entries
@@ -335,7 +346,7 @@ def get_amounts(amounts, against_loan, posting_date):
against_loan_doc = frappe.get_doc("Loan", against_loan)
loan_type_details = frappe.get_doc("Loan Type", against_loan_doc.loan_type)
- accrued_interest_entries = get_accrued_interest_entries(against_loan_doc.name)
+ accrued_interest_entries = get_accrued_interest_entries(against_loan_doc.name, posting_date)
pending_accrual_entries = {}
diff --git a/erpnext/loan_management/doctype/salary_slip_loan/salary_slip_loan.json b/erpnext/loan_management/doctype/salary_slip_loan/salary_slip_loan.json
index 2f4fe2494564..3d0708121523 100644
--- a/erpnext/loan_management/doctype/salary_slip_loan/salary_slip_loan.json
+++ b/erpnext/loan_management/doctype/salary_slip_loan/salary_slip_loan.json
@@ -70,7 +70,9 @@
{
"fieldname": "loan_repayment_entry",
"fieldtype": "Link",
+ "hidden": 1,
"label": "Loan Repayment Entry",
+ "no_copy": 1,
"options": "Loan Repayment",
"read_only": 1
},
@@ -83,9 +85,10 @@
"read_only": 1
}
],
+ "index_web_pages_for_search": 1,
"istable": 1,
"links": [],
- "modified": "2020-04-16 13:17:04.798335",
+ "modified": "2021-03-14 20:47:11.725818",
"modified_by": "Administrator",
"module": "Loan Management",
"name": "Salary Slip Loan",
diff --git a/erpnext/payroll/doctype/salary_slip/salary_slip.py b/erpnext/payroll/doctype/salary_slip/salary_slip.py
index aa9acd8bd098..b98732052089 100644
--- a/erpnext/payroll/doctype/salary_slip/salary_slip.py
+++ b/erpnext/payroll/doctype/salary_slip/salary_slip.py
@@ -1053,7 +1053,7 @@ def make_loan_repayment_entry(self):
repayment_entry.save()
repayment_entry.submit()
- loan.loan_repayment_entry = repayment_entry.name
+ frappe.db.set_value("Salary Slip Loan", loan.name, "loan_repayment_entry", repayment_entry.name)
def cancel_loan_repayment_entry(self):
for loan in self.loans:
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
|
oumi-ai__oumi-1678@48f4578
|
oumi-ai/oumi
|
Python
| 1,678
|
add debug logging capabilities to collators
|
# Description
<!--
Thank you for contributing to Oumi! Before sending your PR out for review, please take a quick read through this template.
When your PR is merged, its title will appear in our release notes. Make sure your title gives a clear description of your change!
After you've updated your title, please replace this section with a detailed description of your change. Include as much context as possible so your reviewers can easily understand *what* you're changing and *why*.
The more information you provide, the faster we can review your change!
-->
<!--↓↓↓↓↓↓↓↓↓↓ Describe your change below ↓↓↓↓↓↓↓↓↓↓-->
This PR adds a comprehensive debug-logging feature to Oumi’s data collators, giving developers clear, step-by-step visibility into how raw text becomes model-ready tokens. By setting `debug=True`, you’ll get both console output and an HTML report (organized by session) that shows:
- **Raw Examples**: Original dataset records before any processing
- **Formatted Text**: After applying prompts or templates
- **Token Breakdown**: Each token’s ID, text, and position
- **Collated Inputs**: Final batched tensors passed to the model
## Key Changes
1. **Debug Flag**
- Added a `debug: bool` parameter to all collator builders and classes.
2. **`debug_utils.py`**
- New utility module with reusable logging functions.
- `log_example_for_debugging()` records:
- **Raw example** (pre-tokenization)
- **Post-format example** (after any prompt/template)
- **Token list** (IDs, decoded strings, special tokens)
- **Final model inputs** (padded & batched tensors)
3. **Console & HTML Output**
- **Console**: Uses the existing `logger.debug()` interface.
- **HTML Reports**:
- Written to `debug_logs/<session_id>/` with a timestamp.
- A `latest.html` symlink always points to the newest report.
- Clean, sectioned layouts with preformatted JSON for easy browser inspection.
4. **Updated Collators**
- `TextCollatorWithPadding`
- `TextCompletionsCollatorWithPadding`
Both now accept `debug=True` and integrate with the new utilities.
## Usage
```python
from oumi.data_collators import TextCollatorWithPadding
collator = TextCollatorWithPadding(
tokenizer=my_tokenizer,
max_length=512,
debug=True # ← enables detailed logging & HTML report
)
```
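
For reference, a direct call to the new utility might look like the sketch below. This is only an illustrative sketch based on the signature added in this PR (`log_example_for_debugging(raw_example, formatted_example, tokenized_example, model_input)`); the example values are made up, and in normal use the collators assemble these arguments internally when `debug=True`.

```python
from oumi.utils.debug_utils import log_example_for_debugging

# Illustrative values only; the collators build these from the actual batch.
log_example_for_debugging(
    raw_example={"role": "user", "content": "Hello"},
    formatted_example="<|im_start|>user\nHello<|im_end|>\n",
    tokenized_example=[(1, "<|im_start|>"), (4093, "user"), (198, "\n")],
    model_input={"input_ids": [1, 4093, 198], "attention_mask": [1, 1, 1]},
)
```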
<!--↑↑↑↑↑↑↑↑↑↑ Describe your change above ↑↑↑↑↑↑↑↑↑↑-->
## Related issues
<!--
Make sure to list any relevant related issues to your change. More often than not this will be the single issue fixed by your PR.
-->
<!--↓↓↓↓↓↓↓↓↓↓ List your related issues below ↓↓↓↓↓↓↓↓↓↓-->
Fixes #1369
<!--↑↑↑↑↑↑↑↑↑↑ List your related issues above ↑↑↑↑↑↑↑↑↑↑-->
## Before submitting
- [ ] This PR only changes documentation. (You can ignore the following checks in that case)
- [X] Did you read the [contributor guideline](https://github.com/oumi-ai/oumi/blob/main/CONTRIBUTING.md) Pull Request guidelines?
- [X] Did you link the issue(s) related to this PR in the section above?
- [x] Did you add / update tests where needed?
## Reviewers
At least one review from a member of `oumi-ai/oumi-staff` is required.
<!-- Add `oumi-ai/oumi-staff` as a reviewer when your PR is ready for review.
You are also welcome to add individual members of `oumi-ai/oumi-staff` as reviewers.
If no one has reviewed your PR after several days, feel free to add a comment tagging specific reviewers.
-->
|
2025-05-09T20:50:10Z
|
[Feature] Output tokenized example for debugging during training
### Feature request
When running training, it'd be useful to log a single example from the dataset that has been tokenized for debugging.
For example:
1. Take the first example or batch in the training set (peek the top of the dataloader?)
2. Print all the following:
a. Raw Example
b. Formatted Example
c. Tokenized Example (list of tuples of <token_id, decoded_token>)
d. Model Input (tokens with attention mask and labels)
Ideally this should either be a debug log or its output explicitly controlled by a flag.
Likely, all parts except the model input can be done when loading the dataset. The model input will need to be done inside the collator(s).
### Motivation / references
Concrete example using `HuggingFaceTB/SmolLM2-1.7B-Instruct` tokenizer, with model setup to be trained on **completions only**:
Raw Example
```
[
{'role': 'system', 'content': 'This is a system instruction.'},
{'role': 'user', 'content': 'This is a user instruction'},
{'role': 'assistant', 'content': 'This is an assistant response.'},
]
```
Formatted Example
```
<|im_start|>system
This is a system instruction.<|im_end|>
<|im_start|>user
This is a user instruction<|im_end|>
<|im_start|>assistant
This is an assistant response.<|im_end|>
```
Tokenized Example
```
[(1, '<|im_start|>'), (9690, 'system'), (198, '\n'), (1348, 'This'), (314, ' is'), (253, ' a'), (817, ' system'), (5785, ' instruction'), (30, '.'), (2, '<|im_end|>'), (198, '\n'), (1, '<|im_start|>'), (4093, 'user'), (198, '\n'), (1348, 'This'), (314, ' is'), (253, ' a'), (2914, ' user'), (5785, ' instruction'), (2, '<|im_end|>'), (198, '\n'), (1, '<|im_start|>'), (520, 'ass'), (9531, 'istant'), (198, '\n'), (1348, 'This'), (314, ' is'), (354, ' an'), (11173, ' assistant'), (2426, ' response'), (30, '.'), (2, '<|im_end|>'), (198, '\n')]
```
Model Input Example
```
{
'input': [1, 9690, 198, 1348, 314, 253, 817, 5785, 30, 2, 198, 1, 4093, 198, 1348, 314, 253, 2914, 5785, 2, 198, 1, 520, 9531, 198, 1348, 314, 354, 11173, 2426, 30, 2, 198],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
'labels': [-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 1348, 314, 354, 11173, 2426, 30, 2, 198],
}
```
### Your contribution
Example code:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('HuggingFaceTB/SmolLM2-1.7B-Instruct')
chat = [
{'role': 'system', 'content': 'This is a system instruction.'},
{'role': 'user', 'content': 'This is a user instruction'},
{'role': 'assistant', 'content': 'This is an assistant response.'},
]
formatted = tokenizer.apply_chat_template(chat, tokenize=False)
print(formatted)
tokens = tokenizer.apply_chat_template(chat)
decoded = [tokenizer.decode(t) for t in tokens]
print(list(zip(tokens, decoded)))
```
|
in addition to text output (`print()`), it may also be useful to write the information into, e.g., a nicely-formatted HTML file
Hi @nikg4, I would like to work on this issue if it is still active!
@aniruddh-alt of course!
|
[
{
"body": "### Feature request\n\nWhen running training, it'd be useful to log a single example from the dataset that has been tokenized for debugging.\n\nFor example:\n1. Take the first example or batch in the training set (peek the top of the dataloader?)\n2. Print all the following:\na. Raw Example\nb. Formatted Example\nc. Tokenized Example (list of tuples of <token_id, decoded_token>)\nd. Model Input (tokens with attention mask and labels)\n\nIdeally this should either be a debug log or its output explicitly controlled by a flag.\n\nLikely, all parts except the model input can be done when loading the dataset. The model input will need to be done inside the collator(s).\n\n### Motivation / references\n\nConcrete example using `HuggingFaceTB/SmolLM2-1.7B-Instruct` tokenizer, with model setup to be trained on **completions only**:\n\nRaw Example\n```\n[\n {'role': 'system', 'content': 'This is a system instruction.'},\n {'role': 'user', 'content': 'This is a user instruction'},\n {'role': 'assistant', 'content': 'This is an assistant response.'},\n]\n```\n\nFormatted Example\n```\n<|im_start|>system\nThis is a system instruction.<|im_end|>\n<|im_start|>user\nThis is a user instruction<|im_end|>\n<|im_start|>assistant\nThis is an assistant response.<|im_end|>\n```\n\nTokenized Example\n```\n[(1, '<|im_start|>'), (9690, 'system'), (198, '\\n'), (1348, 'This'), (314, ' is'), (253, ' a'), (817, ' system'), (5785, ' instruction'), (30, '.'), (2, '<|im_end|>'), (198, '\\n'), (1, '<|im_start|>'), (4093, 'user'), (198, '\\n'), (1348, 'This'), (314, ' is'), (253, ' a'), (2914, ' user'), (5785, ' instruction'), (2, '<|im_end|>'), (198, '\\n'), (1, '<|im_start|>'), (520, 'ass'), (9531, 'istant'), (198, '\\n'), (1348, 'This'), (314, ' is'), (354, ' an'), (11173, ' assistant'), (2426, ' response'), (30, '.'), (2, '<|im_end|>'), (198, '\\n')]\n```\n\nModel Input Example\n```\n{\n 'input': [1, 9690, 198, 1348, 314, 253, 817, 5785, 30, 2, 198, 1, 4093, 198, 1348, 314, 253, 2914, 5785, 2, 198, 1, 520, 9531, 198, 1348, 314, 354, 11173, 2426, 30, 2, 198],\n 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n 'labels': [-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 1348, 314, 354, 11173, 2426, 30, 2, 198],\n}\n```\n\n### Your contribution\n\nExample code:\n```\nimport transformers\ntokenizer = transformers.AutoTokenizer.from_pretrained('HuggingFaceTB/SmolLM2-1.7B-Instruct')\n\nchat = [\n {'role': 'system', 'content': 'This is a system instruction.'},\n {'role': 'user', 'content': 'This is a user instruction'},\n {'role': 'assistant', 'content': 'This is an assistant response.'},\n]\n\nformatted = tokenizer.apply_chat_template(chat, tokenize=False)\nprint(formatted)\n\ntokens = tokenizer.apply_chat_template(chat)\ndecoded = [tokenizer.decode(t) for t in tokens]\nprint(list(zip(tokens, decoded)))\n```",
"number": 1369,
"title": "[Feature] Output tokenized example for debugging during training"
}
] |
91f6790e5b84d477ded019533717167dc2c03755
|
{
"head_commit": "48f457867c52da2286437aaff27e9e60f8971ecd",
"head_commit_message": "Merge branch 'main' into fix/1369-debug-token-logging",
"patch_to_review": "diff --git a/src/oumi/builders/collators.py b/src/oumi/builders/collators.py\nindex 592951a3c..46b3a92bc 100644\n--- a/src/oumi/builders/collators.py\n+++ b/src/oumi/builders/collators.py\n@@ -40,6 +40,7 @@ def build_data_collator(\n *,\n max_length: Optional[int],\n label_ignore_index: Optional[int] = constants.LABEL_IGNORE_INDEX,\n+ debug: bool = False,\n **kwargs,\n ) -> Callable:\n \"\"\"Builds a data collator based on the given collator name.\n@@ -62,6 +63,7 @@ def build_data_collator(\n PyTorch convention is to use -100 as the `ignore_index` label. Refer to\n the `ignore_index` parameter of `torch.nn.CrossEntropyLoss()`\n for more details.\n+ debug: If True, logs a single example for debugging purposes.\n **kwargs: Additional keyword arguments to pass to the collator constructor.\n \n Returns:\n@@ -91,16 +93,18 @@ def build_data_collator(\n + f\" tokenizer's model maximum length ({tokenizer.model_max_length})\"\n )\n \n+ collator = None\n if collator_name == \"text_with_padding\":\n- return TextCollatorWithPadding(\n+ collator = TextCollatorWithPadding(\n tokenizer=tokenizer,\n max_length=max_length,\n truncation=enable_truncation,\n label_ignore_index=label_ignore_index,\n+ debug=debug,\n **kwargs,\n )\n elif collator_name == \"vision_language_with_padding\":\n- return VisionLanguageCollatorWithPadding(\n+ collator = VisionLanguageCollatorWithPadding(\n tokenizer=tokenizer,\n max_length=max_length,\n truncation=enable_truncation,\n@@ -112,7 +116,7 @@ def build_data_collator(\n if not processor_name:\n raise ValueError(f\"Empty processor_name for '{collator_name}'\")\n processor_kwargs = kwargs.pop(\"processor_kwargs\", None)\n- return VisionLanguageSftCollator(\n+ collator = VisionLanguageSftCollator(\n tokenizer=tokenizer,\n processor_name=processor_name,\n processor_kwargs=processor_kwargs,\n@@ -122,17 +126,20 @@ def build_data_collator(\n **kwargs,\n )\n elif collator_name == \"text_completions_only_with_padding\":\n- return TextCompletionsCollatorWithPadding(\n+ collator = TextCompletionsCollatorWithPadding(\n tokenizer=tokenizer,\n instruction_prefix=\"<|start_header_id|>user<|end_header_id|>\\n\\n\",\n response_prefix=\"<|start_header_id|>assistant<|end_header_id|>\\n\\n\",\n+ debug=debug,\n )\n \n- raise ValueError(f\"Unknown data collator name: '{collator_name}'\")\n+ if collator is None:\n+ raise ValueError(f\"Unknown data collator name: '{collator_name}'\")\n+ return collator\n \n \n def build_collator_from_config(\n- config: TrainingConfig, tokenizer: Optional[BaseTokenizer]\n+ config: TrainingConfig, tokenizer: Optional[BaseTokenizer], debug: bool = False\n ) -> Optional[Callable]:\n \"\"\"Creates data collator if specified in config.\"\"\"\n train_split = config.data.get_split(DatasetSplit.TRAIN)\n@@ -190,5 +197,6 @@ def build_collator_from_config(\n tokenizer=tokenizer,\n max_length=config.model.model_max_length,\n label_ignore_index=label_ignore_index,\n+ debug=debug,\n **collator_kwargs,\n )\ndiff --git a/src/oumi/core/collators/text_collator_with_padding.py b/src/oumi/core/collators/text_collator_with_padding.py\nindex 671dac53b..6648699ca 100644\n--- a/src/oumi/core/collators/text_collator_with_padding.py\n+++ b/src/oumi/core/collators/text_collator_with_padding.py\n@@ -16,6 +16,7 @@\n from typing import Any, NamedTuple, Optional\n \n from oumi.core.tokenizers.base_tokenizer import BaseTokenizer\n+from oumi.utils.debug_utils import log_example_for_debugging\n from oumi.utils.logging import logger\n from oumi.utils.torch_utils import (\n 
create_ones_like,\n@@ -50,6 +51,7 @@ def __init__(\n truncation: bool = False,\n label_ignore_index: Optional[int] = None,\n max_variable_sized_dims: int = 1,\n+ debug: bool = False,\n ):\n \"\"\"Custom collator for text LLM training.\n \n@@ -65,6 +67,7 @@ def __init__(\n Normally, it's 1 (sequence length dimension), but can sometimes be higher\n e.g., 2 for \"cross_attention_mask\" for VLM-s with multi-image inputs.\n Negative value mean `Unlimited`.\n+ debug: Whether to log a debug example.\n \"\"\"\n self._max_length: Optional[int] = (\n int(max_length) if max_length is not None and max_length > 0 else None\n@@ -91,6 +94,10 @@ def __init__(\n self._max_input_ids_length: int = 0\n self._max_previously_logged_input_ids_length: int = 0\n self._max_variable_sized_dims: int = max_variable_sized_dims\n+ self._debug: bool = debug\n+ # Track if we've already logged an example\n+ self._has_logged_example: bool = False\n+ self._tokenizer = tokenizer # Store tokenizer for debugging\n \n def _collate_simple(\n self,\n@@ -223,6 +230,51 @@ def __call__(self, batch) -> dict[str, Any]:\n if labels_on:\n combined_batch[_LABELS_KEY] = collated_text_inputs[_LABELS_KEY]\n \n+ # If debug is on and we haven't logged an example yet, log the first example\n+ if self._debug and not self._has_logged_example and len(batch) > 0:\n+ first_input_ids = combined_batch[_INPUT_IDS_KEY][0]\n+ formatted_example = self._tokenizer.decode(\n+ first_input_ids, skip_special_tokens=False\n+ )\n+\n+ tokenized_example = [\n+ (\n+ int(tid.item() if hasattr(tid, \"item\") else tid),\n+ self._tokenizer.decode([tid])\n+ if hasattr(tid, \"item\")\n+ else self._tokenizer.decode(tid),\n+ )\n+ for tid in first_input_ids\n+ ]\n+\n+ model_input = {\n+ \"input_ids\": (\n+ first_input_ids.tolist()\n+ if hasattr(first_input_ids, \"tolist\")\n+ else first_input_ids\n+ ),\n+ \"attention_mask\": (\n+ combined_batch[_ATTENTION_MASK_KEY][0].tolist()\n+ if hasattr(combined_batch[_ATTENTION_MASK_KEY][0], \"tolist\")\n+ else combined_batch[_ATTENTION_MASK_KEY][0]\n+ ),\n+ }\n+\n+ if labels_on:\n+ model_input[\"labels\"] = (\n+ combined_batch[_LABELS_KEY][0].tolist()\n+ if hasattr(combined_batch[_LABELS_KEY][0], \"tolist\")\n+ else combined_batch[_LABELS_KEY][0]\n+ )\n+\n+ # Mark that we've logged an example to avoid logging again\n+ self._has_logged_example = True\n+ log_example_for_debugging(\n+ raw_example=batch[0],\n+ formatted_example=str(formatted_example),\n+ tokenized_example=tokenized_example,\n+ model_input=model_input,\n+ )\n return combined_batch\n \n def _update_max_lengths_and_log(self, *, max_input_ids_length: int):\ndiff --git a/src/oumi/core/collators/text_completions_collator_with_padding.py b/src/oumi/core/collators/text_completions_collator_with_padding.py\nindex f489a084c..b56e16f8b 100644\n--- a/src/oumi/core/collators/text_completions_collator_with_padding.py\n+++ b/src/oumi/core/collators/text_completions_collator_with_padding.py\n@@ -17,13 +17,18 @@\n import trl\n \n from oumi.core.tokenizers.base_tokenizer import BaseTokenizer\n+from oumi.utils.debug_utils import log_example_for_debugging\n \n _INPUT_IDS_KEY = \"input_ids\"\n \n \n class TextCompletionsCollatorWithPadding:\n def __init__(\n- self, tokenizer: BaseTokenizer, instruction_prefix: str, response_prefix: str\n+ self,\n+ tokenizer: BaseTokenizer,\n+ instruction_prefix: str,\n+ response_prefix: str,\n+ debug: bool = False,\n ):\n \"\"\"Custom collator for text LLM training.\n \n@@ -31,6 +36,7 @@ def __init__(\n tokenizer: The tokenizer used for encoding the data.\n 
instruction_prefix: The prefix marking the beginning of the user instruction.\n response_prefix: The prefix marking the beginning of the assistant response.\n+ debug: If True, enables debug mode for logging.\n \"\"\"\n self._default_collator = trl.DataCollatorForCompletionOnlyLM(\n tokenizer=tokenizer,\n@@ -41,6 +47,9 @@ def __init__(\n if not hasattr(tokenizer, \"pad_token_id\") or tokenizer.pad_token_id is None:\n raise RuntimeError(\"Tokenizer doesn't define `pad_token_id`.\")\n \n+ self._debug = debug\n+ self._has_logged_example = False\n+\n def _collate(self, inputs: list[Any]) -> dict[str, Any]:\n result = self._default_collator(inputs)\n return result\n@@ -64,4 +73,30 @@ def __call__(self, batch) -> dict[str, Any]:\n # Collate batch prompts.\n collated_text_inputs = self._collate(batch)\n \n+ if self._debug and not self._has_logged_example:\n+ # Log the first example for debugging\n+ raw_example = batch[0]\n+\n+ # Get the formatted text from the tokenizer's encoding\n+ formatted_example = self._default_collator.tokenizer.decode(\n+ raw_example[_INPUT_IDS_KEY], skip_special_tokens=False\n+ )\n+\n+ # Tokenize the formatted example\n+ tokenized_ids = raw_example[_INPUT_IDS_KEY]\n+ # Create tokenized example pairs\n+ tokenized_example = [\n+ (token_id, self._default_collator.tokenizer.decode([token_id]))\n+ for token_id in tokenized_ids\n+ ]\n+\n+ # Get model input (same as collated_text_inputs but for a single example)\n+ model_input = self._collate([raw_example])\n+\n+ # Log all components for debugging\n+ log_example_for_debugging(\n+ raw_example, formatted_example, tokenized_example, model_input\n+ )\n+ self._has_logged_example = True\n+\n return collated_text_inputs\ndiff --git a/src/oumi/utils/debug_utils.py b/src/oumi/utils/debug_utils.py\nnew file mode 100644\nindex 000000000..b498547bd\n--- /dev/null\n+++ b/src/oumi/utils/debug_utils.py\n@@ -0,0 +1,95 @@\n+import datetime\n+import json\n+import uuid\n+from pathlib import Path\n+from typing import Any\n+\n+from oumi.utils.logging import logger\n+\n+\n+def log_example_for_debugging(\n+ raw_example: Any,\n+ formatted_example: str,\n+ tokenized_example: list[tuple[int, str]],\n+ model_input: dict[str, Any],\n+) -> None:\n+ \"\"\"Logs an example of the data in each step for debugging purposes.\n+\n+ Args:\n+ raw_example: The raw example from the dataset.\n+ formatted_example: The formatted example after processing.\n+ tokenized_example: The tokenized example after tokenization.\n+ model_input: The final model input after collating.\n+ \"\"\"\n+ # Log to debug file\n+ logger.debug(\"Raw example: %s\", raw_example)\n+ logger.debug(\"Formatted example: %s\", formatted_example)\n+ logger.debug(\"Tokenized example: %s\", tokenized_example)\n+ logger.debug(\"Model input: %s\", model_input)\n+\n+ # Generate timestamp for the debug file\n+ timestamp = datetime.datetime.now().strftime(\"%Y-%m-%d_%H-%M-%S\")\n+ session_id = str(uuid.uuid4())[:6]\n+\n+ # Format data for HTML display\n+ def format_for_html(obj):\n+ try:\n+ return json.dumps(obj, indent=2, default=str)\n+ except Exception:\n+ return str(obj)\n+\n+ # Create simplified HTML content\n+ html_content = f\"\"\"<!DOCTYPE html>\n+<html>\n+<head>\n+ <meta charset=\"UTF-8\">\n+ <title>Debug Log {timestamp}</title>\n+ <style>\n+ body {{ font-family: sans-serif; margin: 20px; }}\n+ h1 {{ color: #333; }}\n+ .info {{ color: #666; margin-bottom: 20px; }}\n+ .section {{ margin-bottom: 30px; }}\n+ h2 {{ background: #eee; padding: 8px; }}\n+ pre {{ background: #f8f8f8; padding: 10px; overflow: 
auto;\n+ border: 1px solid #ddd; }}\n+ .copy-btn {{ float: right; padding: 3px 8px; background: #eee;\n+ border: 1px solid #ccc; cursor: pointer; }}\n+ </style>\n+</head>\n+<body>\n+ <h1>Oumi Debug Information</h1>\n+ <div class=\"section\">\n+ <h2>Raw Example</h2>\n+ <pre id=\"raw\">{format_for_html(raw_example)}</pre>\n+ </div>\n+\n+ <div class=\"section\">\n+ <h2>Formatted Example</h2>\n+ <pre id=\"formatted\">{formatted_example}</pre>\n+ </div>\n+\n+ <div class=\"section\">\n+ <h2>Tokenized Example</h2>\n+ <pre id=\"tokenized\">{format_for_html(tokenized_example)}</pre>\n+ </div>\n+\n+ <div class=\"section\">\n+ <h2>Model Input</h2>\n+ <pre id=\"model\">{format_for_html(model_input)}</pre>\n+ </div>\n+</body>\n+</html>\"\"\"\n+\n+ # Create output directory and write HTML files\n+ output_dir = \"debug_logs\"\n+ Path(output_dir).mkdir(parents=True, exist_ok=True)\n+\n+ # Write to timestamped file\n+ output_file = Path(output_dir) / f\"debug_logs_{timestamp}_{session_id}.html\"\n+ with open(output_file, \"w\") as f:\n+ f.write(html_content)\n+\n+ # Also update the latest.html file\n+ latest_file = Path(output_dir) / \"latest.html\"\n+ with open(latest_file, \"w\") as f:\n+ f.write(html_content)\n"
}
|
[
{
"diff_hunk": "@@ -0,0 +1,95 @@\n+import datetime\n+import json\n+import uuid\n+from pathlib import Path\n+from typing import Any\n+\n+from oumi.utils.logging import logger\n+\n+\n+def log_example_for_debugging(\n+ raw_example: Any,\n+ formatted_example: str,\n+ tokenized_example: list[tuple[int, str]],\n+ model_input: dict[str, Any],\n+) -> None:\n+ \"\"\"Logs an example of the data in each step for debugging purposes.\n+\n+ Args:\n+ raw_example: The raw example from the dataset.\n+ formatted_example: The formatted example after processing.\n+ tokenized_example: The tokenized example after tokenization.\n+ model_input: The final model input after collating.\n+ \"\"\"\n+ # Log to debug file\n+ logger.debug(\"Raw example: %s\", raw_example)\n+ logger.debug(\"Formatted example: %s\", formatted_example)\n+ logger.debug(\"Tokenized example: %s\", tokenized_example)\n+ logger.debug(\"Model input: %s\", model_input)\n+\n+ # Generate timestamp for the debug file\n+ timestamp = datetime.datetime.now().strftime(\"%Y-%m-%d_%H-%M-%S\")\n+ session_id = str(uuid.uuid4())[:6]\n+\n+ # Format data for HTML display\n+ def format_for_html(obj):\n+ try:\n+ return json.dumps(obj, indent=2, default=str)\n+ except Exception:\n+ return str(obj)",
"line": null,
"original_line": 39,
"original_start_line": 34,
"path": "src/oumi/utils/debug_utils.py",
"start_line": null,
"text": "@user1:\nSaving as HTML is likely overkill here. I think calling `logger.debug` is likely sufficient.\n\n@author:\nThank you for the comment! I will remove the HTML part from the function."
},
{
"diff_hunk": "@@ -91,16 +93,18 @@ def build_data_collator(\n + f\" tokenizer's model maximum length ({tokenizer.model_max_length})\"\n )\n \n+ collator = None\n if collator_name == \"text_with_padding\":\n- return TextCollatorWithPadding(\n+ collator = TextCollatorWithPadding(",
"line": null,
"original_line": 98,
"original_start_line": null,
"path": "src/oumi/builders/collators.py",
"start_line": null,
"text": "@user1:\nAre the changes to these lines necessary? If not, let's leave them as is. Early returns are less error prone than a single None check. \n\n@author:\nI have reverted back to early returns"
},
{
"diff_hunk": "@@ -64,4 +73,30 @@ def __call__(self, batch) -> dict[str, Any]:\n # Collate batch prompts.\n collated_text_inputs = self._collate(batch)\n \n+ if self._debug and not self._has_logged_example:\n+ # Log the first example for debugging\n+ raw_example = batch[0]",
"line": null,
"original_line": 78,
"original_start_line": null,
"path": "src/oumi/core/collators/text_completions_collator_with_padding.py",
"start_line": null,
"text": "@user1:\nCan this new code block be moved into a helper function? it's relatively complex/long\r\n\r\nadd unit tests for new functionality\n\n@author:\nThank you for the comment, I will create a helper function and add unit tests as well."
},
{
"diff_hunk": "@@ -223,6 +230,51 @@ def __call__(self, batch) -> dict[str, Any]:\n if labels_on:\n combined_batch[_LABELS_KEY] = collated_text_inputs[_LABELS_KEY]\n \n+ # If debug is on and we haven't logged an example yet, log the first example\n+ if self._debug and not self._has_logged_example and len(batch) > 0:\n+ first_input_ids = combined_batch[_INPUT_IDS_KEY][0]",
"line": null,
"original_line": 235,
"original_start_line": null,
"path": "src/oumi/core/collators/text_collator_with_padding.py",
"start_line": null,
"text": "@user1:\nCan this new code block be moved into a helper function? it's relatively complex/long\r\n\r\n+ add unit tests for new functionality \n\n@author:\nI have added helper function and unit tests"
}
] |
bb48a41ac9deaac0220874bf93e6a75d2cf12957
|
diff --git a/src/oumi/builders/collators.py b/src/oumi/builders/collators.py
index c373564cc9..32b974e7d4 100644
--- a/src/oumi/builders/collators.py
+++ b/src/oumi/builders/collators.py
@@ -40,6 +40,7 @@ def build_data_collator(
*,
max_length: Optional[int],
label_ignore_index: Optional[int] = constants.LABEL_IGNORE_INDEX,
+ debug: bool = False,
**kwargs,
) -> Callable:
"""Builds a data collator based on the given collator name.
@@ -62,6 +63,7 @@ def build_data_collator(
PyTorch convention is to use -100 as the `ignore_index` label. Refer to
the `ignore_index` parameter of `torch.nn.CrossEntropyLoss()`
for more details.
+ debug: If True, logs a single example for debugging purposes.
**kwargs: Additional keyword arguments to pass to the collator constructor.
Returns:
@@ -97,6 +99,7 @@ def build_data_collator(
max_length=max_length,
truncation=enable_truncation,
label_ignore_index=label_ignore_index,
+ debug=debug,
**kwargs,
)
elif collator_name == "vision_language_with_padding":
@@ -126,13 +129,13 @@ def build_data_collator(
tokenizer=tokenizer,
instruction_prefix="<|start_header_id|>user<|end_header_id|>\n\n",
response_prefix="<|start_header_id|>assistant<|end_header_id|>\n\n",
+ debug=debug,
)
-
raise ValueError(f"Unknown data collator name: '{collator_name}'")
def build_collator_from_config(
- config: TrainingConfig, tokenizer: Optional[BaseTokenizer]
+ config: TrainingConfig, tokenizer: Optional[BaseTokenizer], debug: bool = False
) -> Optional[Callable]:
"""Creates data collator if specified in config."""
train_split = config.data.get_split(DatasetSplit.TRAIN)
@@ -195,5 +198,6 @@ def build_collator_from_config(
tokenizer=tokenizer,
max_length=config.model.model_max_length,
label_ignore_index=label_ignore_index,
+ debug=debug,
**collator_kwargs,
)
diff --git a/src/oumi/core/collators/text_collator_with_padding.py b/src/oumi/core/collators/text_collator_with_padding.py
index 671dac53b7..2039a03cc3 100644
--- a/src/oumi/core/collators/text_collator_with_padding.py
+++ b/src/oumi/core/collators/text_collator_with_padding.py
@@ -16,6 +16,7 @@
from typing import Any, NamedTuple, Optional
from oumi.core.tokenizers.base_tokenizer import BaseTokenizer
+from oumi.utils.debug_utils import log_example_for_debugging
from oumi.utils.logging import logger
from oumi.utils.torch_utils import (
create_ones_like,
@@ -50,6 +51,7 @@ def __init__(
truncation: bool = False,
label_ignore_index: Optional[int] = None,
max_variable_sized_dims: int = 1,
+ debug: bool = False,
):
"""Custom collator for text LLM training.
@@ -65,6 +67,7 @@ def __init__(
Normally, it's 1 (sequence length dimension), but can sometimes be higher
e.g., 2 for "cross_attention_mask" for VLM-s with multi-image inputs.
Negative value mean `Unlimited`.
+ debug: Whether to log a debug example.
"""
self._max_length: Optional[int] = (
int(max_length) if max_length is not None and max_length > 0 else None
@@ -91,6 +94,10 @@ def __init__(
self._max_input_ids_length: int = 0
self._max_previously_logged_input_ids_length: int = 0
self._max_variable_sized_dims: int = max_variable_sized_dims
+ self._debug: bool = debug
+ # Track if we've already logged an example
+ self._has_logged_example: bool = False
+ self._tokenizer = tokenizer # Store tokenizer for debugging
def _collate_simple(
self,
@@ -223,8 +230,67 @@ def __call__(self, batch) -> dict[str, Any]:
if labels_on:
combined_batch[_LABELS_KEY] = collated_text_inputs[_LABELS_KEY]
+ # If debug is on and we haven't logged an example yet, log the first example
+ if self._debug and not self._has_logged_example and len(batch) > 0:
+ # Log an example of the data in the first step for debugging purposes.
+ self._log_debug_example(batch, combined_batch)
+
return combined_batch
+ def _log_debug_example(
+ self,
+ batch: list[dict[str, Any]],
+ combined_batch: dict[str, Any],
+ ) -> None:
+ """Logs a debug example if debug is enabled.
+
+ Args:
+ batch: The original batch of data.
+ combined_batch: The collated batch after processing.
+ """
+ first_input_ids = combined_batch[_INPUT_IDS_KEY][0]
+ formatted_example = self._tokenizer.decode(
+ first_input_ids, skip_special_tokens=False
+ )
+ # Decode raw text without special tokens for raw example
+ raw_text = self._tokenizer.decode(first_input_ids, skip_special_tokens=True)
+
+ tokenized_example = []
+ for tid in first_input_ids:
+ if hasattr(tid, "item"):
+ token_id = int(tid.item())
+ decoded_token = self._tokenizer.decode([tid])
+ else:
+ token_id = int(tid)
+ decoded_token = self._tokenizer.decode(tid)
+ tokenized_example.append((token_id, decoded_token))
+
+ model_input = {
+ "input_ids": (
+ first_input_ids.tolist()
+ if hasattr(first_input_ids, "tolist")
+ else first_input_ids
+ ),
+ "attention_mask": (
+ combined_batch[_ATTENTION_MASK_KEY][0].tolist()
+ if hasattr(combined_batch[_ATTENTION_MASK_KEY][0], "tolist")
+ else combined_batch[_ATTENTION_MASK_KEY][0]
+ ),
+ }
+
+ if _LABELS_KEY in combined_batch:
+ lbl = combined_batch[_LABELS_KEY][0]
+ model_input["labels"] = lbl.tolist() if hasattr(lbl, "tolist") else lbl
+
+ # Mark that we've logged an example to avoid logging again
+ self._has_logged_example = True
+ log_example_for_debugging(
+ raw_example=raw_text,
+ formatted_example=str(formatted_example),
+ tokenized_example=tokenized_example,
+ model_input=model_input,
+ )
+
def _update_max_lengths_and_log(self, *, max_input_ids_length: int):
"""Updates max length counters.
diff --git a/src/oumi/core/collators/text_completions_collator_with_padding.py b/src/oumi/core/collators/text_completions_collator_with_padding.py
index f489a084c5..dc6ff9d124 100644
--- a/src/oumi/core/collators/text_completions_collator_with_padding.py
+++ b/src/oumi/core/collators/text_completions_collator_with_padding.py
@@ -17,13 +17,18 @@
import trl
from oumi.core.tokenizers.base_tokenizer import BaseTokenizer
+from oumi.utils.debug_utils import log_example_for_debugging
_INPUT_IDS_KEY = "input_ids"
class TextCompletionsCollatorWithPadding:
def __init__(
- self, tokenizer: BaseTokenizer, instruction_prefix: str, response_prefix: str
+ self,
+ tokenizer: BaseTokenizer,
+ instruction_prefix: str,
+ response_prefix: str,
+ debug: bool = False,
):
"""Custom collator for text LLM training.
@@ -31,6 +36,7 @@ def __init__(
tokenizer: The tokenizer used for encoding the data.
instruction_prefix: The prefix marking the beginning of the user instruction.
response_prefix: The prefix marking the beginning of the assistant response.
+ debug: If True, enables debug mode for logging.
"""
self._default_collator = trl.DataCollatorForCompletionOnlyLM(
tokenizer=tokenizer,
@@ -41,6 +47,9 @@ def __init__(
if not hasattr(tokenizer, "pad_token_id") or tokenizer.pad_token_id is None:
raise RuntimeError("Tokenizer doesn't define `pad_token_id`.")
+ self._debug = debug
+ self._has_logged_example = False
+
def _collate(self, inputs: list[Any]) -> dict[str, Any]:
result = self._default_collator(inputs)
return result
@@ -64,4 +73,58 @@ def __call__(self, batch) -> dict[str, Any]:
# Collate batch prompts.
collated_text_inputs = self._collate(batch)
+ if self._debug and not self._has_logged_example:
+ # Log an example of the data in the first step for debugging purposes.
+ self._log_debug_example(batch, collated_text_inputs)
return collated_text_inputs
+
+ def _log_debug_example(
+ self, batch: list[dict[str, Any]], collated_text_inputs: dict[str, Any]
+ ) -> None:
+ """Logs an example of the data in each step for debugging purposes.
+
+ Args:
+ batch: The batch of examples to log.
+ collated_text_inputs: The collated inputs after processing.
+ """
+ raw_example = batch[0]
+ token_ids = raw_example[_INPUT_IDS_KEY]
+ # Raw text without special tokens
+ raw_text = self._default_collator.tokenizer.decode(
+ token_ids, skip_special_tokens=True
+ )
+ # Formatted example with special tokens
+ formatted_example = self._default_collator.tokenizer.decode(
+ token_ids, skip_special_tokens=False
+ )
+ tokenized_ids = raw_example[_INPUT_IDS_KEY]
+ tokenized_example = [
+ (token_id, self._default_collator.tokenizer.decode([token_id]))
+ for token_id in tokenized_ids
+ ]
+ self._has_logged_example = True
+
+ # Extract the first example from the batched tensors for cleaner debug output
+ def _to_py(x):
+ """Convert tensor-like objects to Python native types."""
+ if hasattr(x, "tolist"):
+ return x.tolist()
+ elif hasattr(x, "item"):
+ return x.item()
+ else:
+ return x
+
+ # Process the collated inputs to get a clean representation for debugging
+ model_input = {}
+ for key, value in collated_text_inputs.items():
+ # For batch tensors, extract just the first example
+ if hasattr(value, "dim") and value.dim() > 1:
+ model_input[key] = _to_py(value[0])
+ # For single tensors or other objects
+ else:
+ model_input[key] = _to_py(value)
+
+ # Log all components for debugging
+ log_example_for_debugging(
+ raw_text, formatted_example, tokenized_example, model_input
+ )
diff --git a/src/oumi/utils/debug_utils.py b/src/oumi/utils/debug_utils.py
new file mode 100644
index 0000000000..a9aa22fa3c
--- /dev/null
+++ b/src/oumi/utils/debug_utils.py
@@ -0,0 +1,38 @@
+# Copyright 2025 - Oumi
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from typing import Any
+
+from oumi.utils.logging import logger
+
+
+def log_example_for_debugging(
+ raw_example: Any,
+ formatted_example: str,
+ tokenized_example: list[tuple[int, str]],
+ model_input: dict[str, Any],
+) -> None:
+ """Logs an example of the data in each step for debugging purposes.
+
+ Args:
+ raw_example: The raw example from the dataset.
+ formatted_example: The formatted example after processing.
+ tokenized_example: The tokenized example after tokenization.
+ model_input: The final model input after collating.
+ """
+ # Log to debug file
+ logger.debug("Raw example: %s", raw_example)
+ logger.debug("Formatted example: %s", formatted_example)
+ logger.debug("Tokenized example: %s", tokenized_example)
+ logger.debug("Model input: %s", model_input)
diff --git a/tests/unit/core/collators/test_text_collator_with_padding.py b/tests/unit/core/collators/test_text_collator_with_padding.py
index a302bc5bbd..8eed50736e 100644
--- a/tests/unit/core/collators/test_text_collator_with_padding.py
+++ b/tests/unit/core/collators/test_text_collator_with_padding.py
@@ -9,6 +9,7 @@
from oumi.core.collators.text_collator_with_padding import TextCollatorWithPadding
from oumi.core.configs import ModelParams
from oumi.core.tokenizers.base_tokenizer import BaseTokenizer
+from oumi.utils import logging
@functools.cache # same as @cache added in Python 3.9
@@ -177,3 +178,60 @@ def test_success_label_ingnore_index():
dtype=np.int32,
)
)
+
+
+def test_debug_logging(caplog):
+ """Test that example debugging logs are correctly generated when debug=True."""
+ # Set the logging level to DEBUG for both caplog and the oumi logger
+ caplog.set_level("DEBUG")
+
+ # Get and configure the oumi logger to ensure debug messages are captured
+ oumi_logger = logging.get_logger("oumi")
+ oumi_logger.setLevel("DEBUG")
+ oumi_logger.propagate = True # Ensure propagation to root logger
+
+ tokenizer, _ = create_test_tokenizer()
+
+ # Create collator with debug=True
+ collator = TextCollatorWithPadding(tokenizer, max_length=None, debug=True)
+
+ # Test data
+ batch = [
+ {"input_ids": [101, 102, 103, 104], "labels": [101, 102, 103, 104]},
+ {"input_ids": [201, 202], "labels": [201, 202]},
+ ]
+
+ # Process the batch
+ _ = collator(batch)
+
+ # Check that debug logs were generated and verify their content
+ log_text = caplog.text
+
+ # Verify raw example (decoded without special tokens)
+ expected_raw_text = tokenizer.decode([101, 102, 103, 104], skip_special_tokens=True)
+ assert f"Raw example: {expected_raw_text}" in log_text
+
+ # Verify formatted example (decoded with special tokens)
+ expected_formatted_text = tokenizer.decode(
+ [101, 102, 103, 104], skip_special_tokens=False
+ )
+ assert f"Formatted example: {expected_formatted_text}" in log_text
+
+ # Verify tokenized example (list of tuples with token_id and decoded token)
+ expected_tokenized = [
+ (101, tokenizer.decode([101])),
+ (102, tokenizer.decode([102])),
+ (103, tokenizer.decode([103])),
+ (104, tokenizer.decode([104])),
+ ]
+ assert f"Tokenized example: {expected_tokenized}" in log_text
+
+ # Verify model input (the actual tensors converted to lists)
+ expected_input_ids = [101, 102, 103, 104]
+ expected_attention_mask = [1, 1, 1, 1]
+ expected_labels = [101, 102, 103, 104]
+
+ # Check that the model input contains the expected values
+ assert f"'input_ids': {expected_input_ids}" in log_text
+ assert f"'attention_mask': {expected_attention_mask}" in log_text
+ assert f"'labels': {expected_labels}" in log_text
diff --git a/tests/unit/core/collators/test_text_completions_collator_with_padding.py b/tests/unit/core/collators/test_text_completions_collator_with_padding.py
index abed3797a7..c79c524c59 100644
--- a/tests/unit/core/collators/test_text_completions_collator_with_padding.py
+++ b/tests/unit/core/collators/test_text_completions_collator_with_padding.py
@@ -11,6 +11,7 @@
)
from oumi.core.configs import ModelParams
from oumi.core.tokenizers.base_tokenizer import BaseTokenizer
+from oumi.utils import logging
@pytest.fixture
@@ -149,3 +150,91 @@ def test_success_basic():
assert np.all(
collated_batch["labels"].numpy() == np.array(expected_labels, dtype=np.int32)
)
+
+
+def test_debug_logging(caplog):
+ """Test that example debugging logs are correctly generated when debug=True."""
+ # Set the logging level to DEBUG for both caplog and the oumi logger
+ caplog.set_level("DEBUG")
+
+ # Get and configure the oumi logger to ensure debug messages are captured
+ oumi_logger = logging.get_logger("oumi")
+ oumi_logger.setLevel("DEBUG")
+ oumi_logger.propagate = True # Ensure propagation to root logger
+
+ tokenizer, pad_token_id = create_test_tokenizer()
+
+ instruction_prefix = "ignore this and after me"
+ response_prefix = "ignore this but not after me"
+
+ instruction_prefix_tokens = tokenizer.encode(
+ instruction_prefix, add_special_tokens=False
+ )
+ response_prefix_tokens = tokenizer.encode(response_prefix, add_special_tokens=False)
+
+ collator = TextCompletionsCollatorWithPadding(
+ tokenizer=tokenizer,
+ instruction_prefix=instruction_prefix,
+ response_prefix=response_prefix,
+ debug=True,
+ )
+ assert callable(collator)
+
+ batch = [
+ # Instructions with no response, all tokens are ignored
+ {"input_ids": instruction_prefix_tokens + [101] + response_prefix_tokens},
+ # Response with no instructions, only in-between tokens are used
+ {
+ "input_ids": (
+ response_prefix_tokens
+ + [201, 202, 203, 204]
+ + instruction_prefix_tokens
+ )
+ },
+ # No instructions or response, all tokens are ignored
+ {"input_ids": [301, 302]},
+ # Normal multi-turn conversation, only tokens after response are used
+ {
+ "input_ids": (
+ instruction_prefix_tokens
+ + [301, 302]
+ + response_prefix_tokens
+ + [303, 304]
+ + instruction_prefix_tokens
+ + [305, 306]
+ + response_prefix_tokens
+ + [307, 308]
+ )
+ },
+ ]
+
+ _ = collator(batch)
+
+ # Check that debug logs were generated and verify their content
+ log_text = caplog.text
+
+ # Get the first example's token IDs for verification
+ first_example_input_ids = batch[0]["input_ids"]
+
+ # Verify raw example (decoded without special tokens)
+ expected_raw_text = tokenizer.decode(
+ first_example_input_ids, skip_special_tokens=True
+ )
+ assert f"Raw example: {expected_raw_text}" in log_text
+
+ # Verify formatted example (decoded with special tokens)
+ expected_formatted_text = tokenizer.decode(
+ first_example_input_ids, skip_special_tokens=False
+ )
+ assert f"Formatted example: {expected_formatted_text}" in log_text
+
+ # Verify tokenized example (list of tuples with token_id and decoded token)
+ expected_tokenized = [
+ (token_id, tokenizer.decode([token_id])) for token_id in first_example_input_ids
+ ]
+ assert f"Tokenized example: {expected_tokenized}" in log_text
+
+ # Verify model input contains the expected structure
+ assert "'input_ids':" in log_text
+ assert "'attention_mask':" in log_text
+ assert "'labels':" in log_text
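For reference, a minimal usage sketch of the `debug` flag introduced by the patch above. This is not part of the patch; it assumes the patched `oumi` package and the `transformers` library are installed, and the tokenizer choice is only illustrative.

```python
# Hypothetical usage of the new debug flag (not part of the patch above).
from transformers import AutoTokenizer

from oumi.core.collators.text_collator_with_padding import TextCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative tokenizer choice
if tokenizer.pad_token_id is None:
    tokenizer.pad_token = tokenizer.eos_token  # ensure a pad token is defined

collator = TextCollatorWithPadding(tokenizer, max_length=None, debug=True)
batch = [
    {"input_ids": [101, 102, 103], "labels": [101, 102, 103]},
    {"input_ids": [201, 202], "labels": [201, 202]},
]
# The first call logs the raw, formatted, and tokenized example plus the model
# input once via logger.debug (visible when the oumi logger level is DEBUG).
collated = collator(batch)
```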
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
}
|
frappe__erpnext-8869@50be447
|
frappe/erpnext
|
Python
| 8,869
|
Issue 8842
|
Fixes #8842. I also added a test for the issue and for `encode_company_abbr`.
|
2017-05-16T13:21:11Z
|
Warehouse Renaming Issue
Hi,
There's an issue when trying to rename a warehouse to something that has a hyphen like 'New - Name'. Let's assume company abbreviation is ABC. The new ID is supposed to be New - Name - ABC but instead, the system names it as New - ABC
Kindly help resolve
Thanks
|
Hello @Olawale1
It seems like you are deleting the `- [abbreviation]` in the name field. Is there any special reason for doing that?
Hi @tundebabzy
Thanks for your response. I'm not deleting anything... the system is deleting the part of the name after the first hyphen as explained in my post. That's the bug being reported
Thanks
Ok. The rename gobbles up the last `- [abbreviation]` it finds. This is definitely a bug.
Thanks for the report. We'll fix ASAP
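To make the reported behavior concrete, here is a minimal standalone sketch (plain Python, no frappe; `ABC` is the hypothetical abbreviation from the report) contrasting the old join, which keeps only the first name part, with the fix applied in the patch below, which keeps every part:

```python
# Standalone illustration of the bug; mirrors encode_company_abbr's join logic
# under the assumption that names are split on " - ".
def encode_old(name: str, abbr: str) -> str:
    parts = name.split(" - ")
    if parts[-1].lower() != abbr.lower():
        parts.append(abbr)
    return " - ".join([parts[0], abbr])  # drops everything after the first " - "


def encode_fixed(name: str, abbr: str) -> str:
    parts = name.split(" - ")
    if parts[-1].lower() != abbr.lower():
        parts.append(abbr)
    return " - ".join(parts)  # keeps all name parts


print(encode_old("New - Name", "ABC"))    # New - ABC          (reported bug)
print(encode_fixed("New - Name", "ABC"))  # New - Name - ABC   (expected)
```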
|
[
{
"body": "Hi,\r\n\r\nThere's an issue when trying to rename a warehouse to something that has a hyphen like 'New - Name'. Let's assume company abbreviation is ABC. The new ID is supposed to be New - Name - ABC but instead, the system names it as New - ABC\r\n\r\nKindly help resolve\r\n\r\nThanks",
"number": 8842,
"title": "Warehouse Renaming Issue"
}
] |
3bd15fb19e2ab3eb7dda91b8eb4c0da595d6df24
|
{
"head_commit": "50be44783a20befcb2f8f841c9fac9d70ed8caa7",
"head_commit_message": "fixes failing test",
"patch_to_review": "diff --git a/__init__.py b/__init__.py\nnew file mode 100644\nindex 000000000000..e69de29bb2d1\ndiff --git a/erpnext/__init__.py b/erpnext/__init__.py\nindex 6f73741e8561..442050d4c9dd 100644\n--- a/erpnext/__init__.py\n+++ b/erpnext/__init__.py\n@@ -4,6 +4,7 @@\n \n __version__ = '8.0.26'\n \n+\n def get_default_company(user=None):\n \t'''Get default company for user'''\n \tfrom frappe.defaults import get_user_default_as_list\n@@ -19,12 +20,14 @@ def get_default_company(user=None):\n \n \treturn default_company\n \n+\n def get_default_currency():\n \t'''Returns the currency of the default company'''\n \tcompany = get_default_company()\n \tif company:\n \t\treturn frappe.db.get_value('Company', company, 'default_currency')\n \n+\n def get_company_currency(company):\n \t'''Returns the default company currency'''\n \tif not frappe.flags.company_currency:\n@@ -33,11 +36,13 @@ def get_company_currency(company):\n \t\tfrappe.flags.company_currency[company] = frappe.db.get_value('Company', company, 'default_currency')\n \treturn frappe.flags.company_currency[company]\n \n+\n def set_perpetual_inventory(enable=1):\n \taccounts_settings = frappe.get_doc(\"Accounts Settings\")\n \taccounts_settings.auto_accounting_for_stock = enable\n \taccounts_settings.save()\n \n+\n def encode_company_abbr(name, company):\n \t'''Returns name encoded with company abbreviation'''\n \tcompany_abbr = frappe.db.get_value(\"Company\", company, \"abbr\")\n@@ -46,4 +51,6 @@ def encode_company_abbr(name, company):\n \tif parts[-1].lower() != company_abbr.lower():\n \t\tparts.append(company_abbr)\n \n-\treturn \" - \".join([parts[0], company_abbr])\n+\treturn \" - \".join(parts)\n+\n+\ndiff --git a/erpnext/stock/doctype/warehouse/test_warehouse.py b/erpnext/stock/doctype/warehouse/test_warehouse.py\nindex ec64bdd3bdcf..31d1926cfa35 100644\n--- a/erpnext/stock/doctype/warehouse/test_warehouse.py\n+++ b/erpnext/stock/doctype/warehouse/test_warehouse.py\n@@ -59,6 +59,15 @@ def test_warehouse_renaming(self):\n \t\tself.assertTrue(frappe.db.get_value(\"Account\",\n \t\t\tfilters={\"warehouse\": \"Test Warehouse for Renaming 3 - _TC\"}))\n \n+\t\t# Another rename with multiple dashes\n+\t\tif frappe.db.exists(\"Warehouse\", \"Test - Warehouse - Company - _TC\"):\n+\t\t\tfrappe.delete_doc(\"Warehouse\", \"Test - Warehouse - Company - _TC\")\n+\t\trename_doc(\"Warehouse\", \"Test Warehouse for Renaming 3 - _TC\", \"Test - Warehouse - Company\")\n+\n+\t\tself.assertTrue(frappe.db.exists(\"Account\", \"Test - Warehouse - Company - _TC\"))\n+\t\tself.assertTrue(frappe.db.get_value(\"Account\", filters={\"warehouse\": \"Test - Warehouse - Company - _TC\"}))\n+\t\tself.assertFalse(frappe.db.get_value(\"Account\", filters={\"warehouse\": \"Test Warehouse for Renaming 3 - _TC\"}))\n+\n \tdef test_warehouse_merging(self):\n \t\tset_perpetual_inventory(1)\n \ndiff --git a/erpnext/tests/test_init.py b/erpnext/tests/test_init.py\nnew file mode 100644\nindex 000000000000..2f15f02c168c\n--- /dev/null\n+++ b/erpnext/tests/test_init.py\n@@ -0,0 +1,41 @@\n+import unittest\n+\n+import frappe\n+from erpnext import encode_company_abbr\n+from six.moves import range\n+\n+test_records = frappe.get_test_records('Company')\n+\n+\n+class TestInit(unittest.TestCase):\n+\n+\tdef test_encode_company_abbr(self):\n+\t\tcompany = frappe.new_doc(\"Company\")\n+\t\tcompany.company_name = \"New from Existing Company For Test\"\n+\t\tcompany.abbr = \"NFECT\"\n+\t\tcompany.default_currency = \"INR\"\n+\t\tcompany.save()\n+\n+\t\tabbr = 
company.abbr\n+\n+\t\tnames = [\n+\t\t\t\"Warehouse Name\", \"ERPNext Foundation India\", \"Gold - Member - {a}\".format(a=abbr),\n+\t\t \" - {a}\".format(a=abbr), \"ERPNext - Foundation - India\",\n+\t\t \"ERPNext Foundation India - {a}\".format(a=abbr),\n+\t\t \"No-Space-{a}\".format(a=abbr), \"- Warehouse\"\n+\t\t]\n+\n+\t\texpected_names = [\n+\t\t\t\"Warehouse Name - {a}\".format(a=abbr), \"ERPNext Foundation India - {a}\".format(a=abbr),\n+\t\t\t\"Gold - Member - {a}\".format(a=abbr), \" - {a}\".format(a=abbr),\n+\t\t\t\"ERPNext - Foundation - India - {a}\".format(a=abbr),\n+\t\t\t\"ERPNext Foundation India - {a}\".format(a=abbr), \"No-Space-{a} - {a}\".format(a=abbr),\n+\t\t\t\"- Warehouse - {a}\".format(a=abbr)\n+\t\t]\n+\n+\t\tfor i in range(len(names)):\n+\t\t\tenc_name = encode_company_abbr(names[i], company.name)\n+\t\t\tself.assertTrue(\n+\t\t\t\tenc_name == expected_names[i],\n+\t\t\t \"{enc} is not same as {exp}\".format(enc=enc_name, exp=expected_names[i])\n+\t\t\t)\n\\ No newline at end of file\ndiff --git a/requirements.txt b/requirements.txt\nindex 0e81b1581d20..98707d09f048 100644\n--- a/requirements.txt\n+++ b/requirements.txt\n@@ -1,2 +1,3 @@\n frappe\n unidecode\n+six\n"
}
|
[
{
"diff_hunk": "@@ -1,2 +1,3 @@\n frappe\n unidecode\n+six",
"line": null,
"original_line": 3,
"original_start_line": null,
"path": "requirements.txt",
"start_line": null,
"text": "@user1:\nwhy another library? why not using python's inbuilt `range` or `xrange`?"
}
] |
09134d40c79f2a03a884a68434a2e39bf4f2ed8a
|
diff --git a/__init__.py b/__init__.py
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/erpnext/__init__.py b/erpnext/__init__.py
index 6f73741e8561..442050d4c9dd 100644
--- a/erpnext/__init__.py
+++ b/erpnext/__init__.py
@@ -4,6 +4,7 @@
__version__ = '8.0.26'
+
def get_default_company(user=None):
'''Get default company for user'''
from frappe.defaults import get_user_default_as_list
@@ -19,12 +20,14 @@ def get_default_company(user=None):
return default_company
+
def get_default_currency():
'''Returns the currency of the default company'''
company = get_default_company()
if company:
return frappe.db.get_value('Company', company, 'default_currency')
+
def get_company_currency(company):
'''Returns the default company currency'''
if not frappe.flags.company_currency:
@@ -33,11 +36,13 @@ def get_company_currency(company):
frappe.flags.company_currency[company] = frappe.db.get_value('Company', company, 'default_currency')
return frappe.flags.company_currency[company]
+
def set_perpetual_inventory(enable=1):
accounts_settings = frappe.get_doc("Accounts Settings")
accounts_settings.auto_accounting_for_stock = enable
accounts_settings.save()
+
def encode_company_abbr(name, company):
'''Returns name encoded with company abbreviation'''
company_abbr = frappe.db.get_value("Company", company, "abbr")
@@ -46,4 +51,6 @@ def encode_company_abbr(name, company):
if parts[-1].lower() != company_abbr.lower():
parts.append(company_abbr)
- return " - ".join([parts[0], company_abbr])
+ return " - ".join(parts)
+
+
diff --git a/erpnext/stock/doctype/warehouse/test_warehouse.py b/erpnext/stock/doctype/warehouse/test_warehouse.py
index ec64bdd3bdcf..31d1926cfa35 100644
--- a/erpnext/stock/doctype/warehouse/test_warehouse.py
+++ b/erpnext/stock/doctype/warehouse/test_warehouse.py
@@ -59,6 +59,15 @@ def test_warehouse_renaming(self):
self.assertTrue(frappe.db.get_value("Account",
filters={"warehouse": "Test Warehouse for Renaming 3 - _TC"}))
+ # Another rename with multiple dashes
+ if frappe.db.exists("Warehouse", "Test - Warehouse - Company - _TC"):
+ frappe.delete_doc("Warehouse", "Test - Warehouse - Company - _TC")
+ rename_doc("Warehouse", "Test Warehouse for Renaming 3 - _TC", "Test - Warehouse - Company")
+
+ self.assertTrue(frappe.db.exists("Account", "Test - Warehouse - Company - _TC"))
+ self.assertTrue(frappe.db.get_value("Account", filters={"warehouse": "Test - Warehouse - Company - _TC"}))
+ self.assertFalse(frappe.db.get_value("Account", filters={"warehouse": "Test Warehouse for Renaming 3 - _TC"}))
+
def test_warehouse_merging(self):
set_perpetual_inventory(1)
diff --git a/erpnext/tests/test_init.py b/erpnext/tests/test_init.py
new file mode 100644
index 000000000000..2baea97838ec
--- /dev/null
+++ b/erpnext/tests/test_init.py
@@ -0,0 +1,40 @@
+import unittest
+
+import frappe
+from erpnext import encode_company_abbr
+from six.moves import range
+
+test_records = frappe.get_test_records('Company')
+
+
+class TestInit(unittest.TestCase):
+ def test_encode_company_abbr(self):
+ company = frappe.new_doc("Company")
+ company.company_name = "New from Existing Company For Test"
+ company.abbr = "NFECT"
+ company.default_currency = "INR"
+ company.save()
+
+ abbr = company.abbr
+
+ names = [
+ "Warehouse Name", "ERPNext Foundation India", "Gold - Member - {a}".format(a=abbr),
+ " - {a}".format(a=abbr), "ERPNext - Foundation - India",
+ "ERPNext Foundation India - {a}".format(a=abbr),
+ "No-Space-{a}".format(a=abbr), "- Warehouse"
+ ]
+
+ expected_names = [
+ "Warehouse Name - {a}".format(a=abbr), "ERPNext Foundation India - {a}".format(a=abbr),
+ "Gold - Member - {a}".format(a=abbr), " - {a}".format(a=abbr),
+ "ERPNext - Foundation - India - {a}".format(a=abbr),
+ "ERPNext Foundation India - {a}".format(a=abbr), "No-Space-{a} - {a}".format(a=abbr),
+ "- Warehouse - {a}".format(a=abbr)
+ ]
+
+ for i in range(len(names)):
+ enc_name = encode_company_abbr(names[i], company.name)
+ self.assertTrue(
+ enc_name == expected_names[i],
+ "{enc} is not same as {exp}".format(enc=enc_name, exp=expected_names[i])
+ )
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
oumi-ai__oumi-1406@990a9b7
|
oumi-ai/oumi
|
Python
| 1,406
|
Add a Slurm cluster and cloud to the oumi launcher.
|
# Description
<!--
Thank you for contributing to Oumi! Before sending your PR out for review, please take a quick read through this template.
When your PR is merged, its title will appear in our release notes. Make sure your title gives a clear description of your change!
After you've updated your title, please replace this section with a detailed description of your change. Include as much context as possible so your reviewers can easily understand *what* you're changing and *why*.
The more information you provide, the faster we can review your change!
-->
<!--↓↓↓↓↓↓↓↓↓↓ Describe your change below ↓↓↓↓↓↓↓↓↓↓-->
This PR adds a cluster and cloud implementation for slurm.
In oumi, we'll consider a "cluster" a logical pair of a `hostname` and a `user`, ex `taenin@myslurmhost`.
This means that running jobs as a second user on the same host would be represented with a second cluster in oumi. Ex: `janedoe@myslurmhost`
Users can set the `OUMI_SLURM_CONNECTIONS` env var to automatically connect to specific slurm hosts in the oumi CLI. This should be a comma-separated list of <user>@<host> pairs.
Ex:
```
export OUMI_SLURM_CONNECTIONS="taenin@myslurmhost,janedoe@myslurmhost"
```
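
For illustration, a standalone sketch of how such a comma-separated connection string can be parsed into `(user, hostname)` pairs. It mirrors the `_parse_cluster_name` helper added in this PR but is not the oumi implementation itself; malformed entries are simply skipped.

```python
# Hypothetical helper mirroring the PR's connection parsing (not oumi code).
import os
import re

_CONNECTION_RE = re.compile(r"^([a-zA-Z0-9._-]+)@([a-zA-Z0-9._-]+)$")


def parse_connections(value: str) -> list[tuple[str, str]]:
    """Returns (user, hostname) pairs, silently skipping malformed entries."""
    pairs = []
    for entry in (part.strip() for part in value.split(",") if part.strip()):
        match = _CONNECTION_RE.match(entry)
        if match:
            pairs.append((match.group(1), match.group(2)))
    return pairs


os.environ["OUMI_SLURM_CONNECTIONS"] = "taenin@myslurmhost,janedoe@myslurmhost"
print(parse_connections(os.environ["OUMI_SLURM_CONNECTIONS"]))
# [('taenin', 'myslurmhost'), ('janedoe', 'myslurmhost')]
```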
<!--↑↑↑↑↑↑↑↑↑↑ Describe your change above ↑↑↑↑↑↑↑↑↑↑-->
## Related issues
<!--
Make sure to list any relevant related issues to your change. More often than not this will be the single issue fixed by your PR.
-->
<!--↓↓↓↓↓↓↓↓↓↓ List your related issues below ↓↓↓↓↓↓↓↓↓↓-->
Fixes #1382
<!--↑↑↑↑↑↑↑↑↑↑ List your related issues above ↑↑↑↑↑↑↑↑↑↑-->
## Before submitting
- [ ] This PR only changes documentation. (You can ignore the following checks in that case)
- [x] Did you read the [contributor guideline](https://github.com/oumi-ai/oumi/blob/main/CONTRIBUTING.md) Pull Request guidelines?
- [x] Did you link the issue(s) related to this PR in the section above?
- [x] Did you add / update tests where needed?
## Reviewers
At least one review from a member of `oumi-ai/oumi-staff` is required.
<!-- Add `oumi-ai/oumi-staff` as a reviewer when your PR is ready for review.
You are also welcome to add individual members of `oumi-ai/oumi-staff` as reviewers.
If no one has reviewed your PR after several days, feel free to add a comment tagging specific reviewers.
-->
|
2025-02-07T02:47:24Z
|
[Feature] Add Slurm support in Oumi Launch
### Feature request
We should add support for sshing into a slurm head node and kicking off jobs. This is great for local and dedicated clusters.
### Motivation / references
This greatly simplifies workflows with Oumi when working on local clusters that are already running slurm.
### Your contribution
I will contribute this feature.
|
[
{
"body": "### Feature request\n\nWe should add support for sshing into a slurm head node and kicking off jobs. This is great for local and dedicated clusters.\n\n### Motivation / references\n\nThis greatly simplifies workflows with Oumi when working on local clusters that are already running slurm.\n\n### Your contribution\n\nI will contribute this feature.",
"number": 1382,
"title": "[Feature] Add Slurm support in Oumi Launch"
}
] |
407d02e3c0e5ab4dc3e3ce3c70daf47fa5b8be9a
|
{
"head_commit": "990a9b7078f2b431ed0f1ca4ee91b869154bd779",
"head_commit_message": "Merge branch 'main' into taenin/slurm_cluster",
"patch_to_review": "diff --git a/src/oumi/cli/env.py b/src/oumi/cli/env.py\nindex 32dbd2524..c4e93e766 100644\n--- a/src/oumi/cli/env.py\n+++ b/src/oumi/cli/env.py\n@@ -80,6 +80,7 @@ def env():\n \"LOCAL_RANK\",\n \"LOCAL_WORLD_SIZE\",\n \"OUMI_EXTRA_DEPS_FILE\",\n+ \"OUMI_SLURM_CONNECTIONS\",\n \"OUMI_USE_SPOT_VM\",\n \"RANK\",\n \"WORLD_SIZE\",\ndiff --git a/src/oumi/launcher/clients/slurm_client.py b/src/oumi/launcher/clients/slurm_client.py\nindex 4ee2fe3f6..a13e45965 100644\n--- a/src/oumi/launcher/clients/slurm_client.py\n+++ b/src/oumi/launcher/clients/slurm_client.py\n@@ -320,7 +320,11 @@ def list_jobs(self) -> list[JobStatus]:\n A list of JobStatus.\n \"\"\"\n response_format = \"JobId%-30,JobName%30,User%30,State%30,Reason%30\"\n- command = f\"sacct --user={self._user} --format='{response_format}'\"\n+ # Forcibly list all jobs since Jan 1, 2025.\n+ command = (\n+ f\"sacct --user={self._user} --format='{response_format}' -X \"\n+ \"--starttime 2025-01-01\"\n+ )\n result = self.run_commands([command])\n if result.exit_code != 0:\n raise RuntimeError(f\"Failed to list jobs. stderr: {result.stderr}\")\ndiff --git a/src/oumi/launcher/clouds/__init__.py b/src/oumi/launcher/clouds/__init__.py\nindex ac4f3096d..cb4efc0ae 100644\n--- a/src/oumi/launcher/clouds/__init__.py\n+++ b/src/oumi/launcher/clouds/__init__.py\n@@ -35,6 +35,7 @@\n from oumi.launcher.clouds.local_cloud import LocalCloud\n from oumi.launcher.clouds.polaris_cloud import PolarisCloud\n from oumi.launcher.clouds.sky_cloud import SkyCloud\n+from oumi.launcher.clouds.slurm_cloud import SlurmCloud\n from oumi.utils import logging\n \n logging.configure_dependency_warnings()\n@@ -44,4 +45,5 @@\n \"LocalCloud\",\n \"PolarisCloud\",\n \"SkyCloud\",\n+ \"SlurmCloud\",\n ]\ndiff --git a/src/oumi/launcher/clouds/slurm_cloud.py b/src/oumi/launcher/clouds/slurm_cloud.py\nnew file mode 100644\nindex 000000000..e4a17fa93\n--- /dev/null\n+++ b/src/oumi/launcher/clouds/slurm_cloud.py\n@@ -0,0 +1,158 @@\n+# Copyright 2025 - Oumi\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+\n+import os\n+import re\n+from dataclasses import dataclass\n+from typing import Optional\n+\n+from oumi.core.configs import JobConfig\n+from oumi.core.launcher import BaseCloud, BaseCluster, JobStatus\n+from oumi.core.registry import register_cloud_builder\n+from oumi.launcher.clients.slurm_client import SlurmClient\n+from oumi.launcher.clusters.slurm_cluster import SlurmCluster\n+from oumi.utils.logging import logger\n+\n+_OUMI_SLURM_CONNECTIONS = \"OUMI_SLURM_CONNECTIONS\"\n+\n+\n+@dataclass\n+class _ConnectionInfo:\n+ \"\"\"Dataclass to hold information about a connection.\"\"\"\n+\n+ hostname: str\n+ user: str\n+\n+ def name(self):\n+ return f\"{self.user}@{self.hostname}\"\n+\n+\n+def _parse_cluster_name(name: str) -> _ConnectionInfo:\n+ \"\"\"Parses the cluster name into queue and user components.\n+\n+ Args:\n+ name: The name of the cluster.\n+\n+ Returns:\n+ _ConnectionInfo: The parsed cluster information.\n+ \"\"\"\n+ # Expected format: 
<user>@<hostname>\n+ connection_regex = r\"^([a-zA-Z0-9\\.\\-\\_]+)\\@([a-zA-Z0-9\\.\\-\\_]+)\"\n+ match = re.match(connection_regex, name)\n+ if not match:\n+ raise ValueError(\n+ f\"Invalid cluster name: {name}. Must be in the format 'user@hostname'.\"\n+ )\n+ return _ConnectionInfo(hostname=match.group(2), user=match.group(1))\n+\n+\n+def _get_slurm_connections() -> list[_ConnectionInfo]:\n+ \"\"\"Gets Slurm connections from the OUMI_SLURM_CONNECTIONS environment variable.\"\"\"\n+ connections_str = os.getenv(_OUMI_SLURM_CONNECTIONS, \"\")\n+ if not connections_str:\n+ return []\n+ valid_connections = []\n+\n+ for connection in [h.strip() for h in connections_str.split(\",\")]:\n+ try:\n+ valid_connections.append(_parse_cluster_name(connection))\n+ except ValueError:\n+ logger.warning(f\"Invalid Slurm connection string: {connection}. Skipping.\")\n+ return valid_connections\n+\n+\n+class SlurmCloud(BaseCloud):\n+ \"\"\"A resource pool for managing the Slurm ALCF job queues.\"\"\"\n+\n+ def __init__(self):\n+ \"\"\"Initializes a new instance of the SlurmCloud class.\"\"\"\n+ # A mapping from cluster names to Slurm Cluster instances.\n+ self._clusters = {}\n+\n+ # Initialize default connections.\n+ self.initialize_clusters()\n+\n+ def _get_or_create_cluster(self, name: str) -> SlurmCluster:\n+ \"\"\"Gets the cluster with the specified name, or creates one if it doesn't exist.\n+\n+ Args:\n+ name: The name of the cluster.\n+\n+ Returns:\n+ SlurmCluster: The cluster instance.\n+ \"\"\"\n+ if name not in self._clusters:\n+ cluster_info = _parse_cluster_name(name)\n+ self._clusters[name] = SlurmCluster(\n+ name,\n+ SlurmClient(\n+ user=cluster_info.user,\n+ slurm_host=cluster_info.hostname,\n+ cluster_name=cluster_info.name(),\n+ ),\n+ )\n+ return self._clusters[name]\n+\n+ def initialize_clusters(self) -> list[BaseCluster]:\n+ \"\"\"Initializes clusters for the specified user for all Slurm queues.\n+\n+ Returns:\n+ List[SlurmCluster]: The list of initialized clusters.\n+ \"\"\"\n+ connections = _get_slurm_connections()\n+ clusters = []\n+ for c in connections:\n+ cluster = self._get_or_create_cluster(c.name())\n+ clusters.append(cluster)\n+ return clusters\n+\n+ def up_cluster(self, job: JobConfig, name: Optional[str], **kwargs) -> JobStatus:\n+ \"\"\"Creates a cluster and starts the provided Job.\"\"\"\n+ if not job.user:\n+ raise ValueError(\"User must be provided in the job config.\")\n+ if name:\n+ cluster_info = _parse_cluster_name(name)\n+ if cluster_info.user != job.user:\n+ raise ValueError(\n+ f\"Invalid cluster name: `{name}`. \"\n+ f\"User must match the provided job user: `{job.user}`.\"\n+ )\n+ else:\n+ raise ValueError(\n+ \"A cluster name must be provided for Slurm. 
\"\n+ \"Cluster names are of the form 'user@hostname'.\"\n+ )\n+ cluster = self._get_or_create_cluster(cluster_info.name())\n+ job_status = cluster.run_job(job)\n+ if not job_status:\n+ raise RuntimeError(\"Failed to start job.\")\n+ return job_status\n+\n+ def get_cluster(self, name) -> Optional[BaseCluster]:\n+ \"\"\"Gets the cluster with the specified name, or None if not found.\"\"\"\n+ clusters = self.list_clusters()\n+ for cluster in clusters:\n+ if cluster.name() == name:\n+ return cluster\n+ return None\n+\n+ def list_clusters(self) -> list[BaseCluster]:\n+ \"\"\"Lists the active clusters on this cloud.\"\"\"\n+ return list(self._clusters.values())\n+\n+\n+@register_cloud_builder(\"slurm\")\n+def slurm_cloud_builder() -> SlurmCloud:\n+ \"\"\"Builds a SlurmCloud instance.\"\"\"\n+ return SlurmCloud()\ndiff --git a/src/oumi/launcher/clusters/slurm_cluster.py b/src/oumi/launcher/clusters/slurm_cluster.py\nnew file mode 100644\nindex 000000000..2e61543fe\n--- /dev/null\n+++ b/src/oumi/launcher/clusters/slurm_cluster.py\n@@ -0,0 +1,236 @@\n+# Copyright 2025 - Oumi\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+\n+import time\n+import uuid\n+from datetime import datetime\n+from functools import reduce\n+from pathlib import Path\n+from typing import Any, Optional\n+\n+from oumi.core.configs import JobConfig\n+from oumi.core.launcher import BaseCluster, JobStatus\n+from oumi.launcher.clients.slurm_client import SlurmClient\n+from oumi.utils.logging import logger\n+\n+\n+def _format_date(date: datetime) -> str:\n+ \"\"\"Formats the provided date as a string.\n+\n+ Args:\n+ date: The date to format.\n+\n+ Returns:\n+ The formatted date.\n+ \"\"\"\n+ return date.strftime(\"%Y%m%d_%H%M%S%f\")\n+\n+\n+def _last_sbatch_line(script: list[str]) -> int:\n+ \"\"\"Finds the last SBATCH instruction line in the script.\n+\n+ Args:\n+ script: The lines of the script.\n+\n+ Returns:\n+ The index of the last SBATCH instruction line. -1 if not found.\n+ \"\"\"\n+ return reduce(\n+ lambda acc, val: val[0] if val[1].startswith(\"#SBATCH\") else acc,\n+ enumerate(script),\n+ -1,\n+ )\n+\n+\n+def _create_job_script(job: JobConfig) -> str:\n+ \"\"\"Creates a job script for the specified job.\n+\n+ Args:\n+ job: The job to create a script for.\n+\n+ Returns:\n+ The script as a string.\n+ \"\"\"\n+ setup_lines = [] if not job.setup else job.setup.strip().split(\"\\n\")\n+ run_lines = job.run.strip().split(\"\\n\")\n+ # Find the last SBATCH instruction line.\n+ last_run_sbatch = _last_sbatch_line(run_lines) + 1\n+ last_setup_sbatch = _last_sbatch_line(setup_lines) + 1\n+ # Inject environment variables into the script after SBATCH instructions.\n+ env_lines = [f\"export {key}={value}\" for key, value in job.envs.items()]\n+ # Pad the environment variables with newlines.\n+ env_lines = [\"\"] + env_lines + [\"\"] if env_lines else []\n+ # Generate the job script.\n+ # The script should have the following structure:\n+ # 1. SBATCH instructions from Setup and Run commands (in that order).\n+ # 2. 
Environment variables.\n+ # 3. Setup commands.\n+ # 4. Run commands.\n+ output_lines = (\n+ setup_lines[:last_setup_sbatch]\n+ + run_lines[:last_run_sbatch]\n+ + env_lines\n+ + setup_lines[last_setup_sbatch:]\n+ + run_lines[last_run_sbatch:]\n+ )\n+ # Always start the script with #!/bin/bash.\n+ script_prefix = \"#!/bin/bash\"\n+ if len(output_lines) > 0:\n+ if not output_lines[0].startswith(\"script_prefix\"):\n+ output_lines.insert(0, script_prefix)\n+ # Join each line. Always end the script with a new line.\n+ return \"\\n\".join(output_lines) + \"\\n\"\n+\n+\n+def _validate_job_config(job: JobConfig) -> None:\n+ \"\"\"Validates the provided job configuration.\n+\n+ Args:\n+ job: The job to validate.\n+ \"\"\"\n+ if not job.user:\n+ raise ValueError(\"User must be provided for Slurm jobs.\")\n+ if not job.working_dir:\n+ raise ValueError(\"Working directory must be provided for Slurm jobs.\")\n+ if not job.run:\n+ raise ValueError(\"Run script must be provided for Slurm jobs.\")\n+ if job.num_nodes < 1:\n+ raise ValueError(\"Number of nodes must be at least 1.\")\n+ if job.resources.cloud != \"slurm\":\n+ raise ValueError(\n+ f\"`Resources.cloud` must be `slurm`. \"\n+ f\"Unsupported cloud: {job.resources.cloud}\"\n+ )\n+ # Warn that other resource parameters are unused for Slurm.\n+ if job.resources.region:\n+ logger.warning(\"Region is unused for Slurm jobs.\")\n+ if job.resources.zone:\n+ logger.warning(\"Zone is unused for Slurm jobs.\")\n+ if job.resources.accelerators:\n+ logger.warning(\"Accelerators are unused for Slurm jobs.\")\n+ if job.resources.cpus:\n+ logger.warning(\"CPUs are unused for Slurm jobs.\")\n+ if job.resources.memory:\n+ logger.warning(\"Memory is unused for Slurm jobs.\")\n+ if job.resources.instance_type:\n+ logger.warning(\"Instance type is unused for Slurm jobs.\")\n+ if job.resources.disk_size:\n+ logger.warning(\"Disk size is unused for Slurm jobs.\")\n+ if job.resources.instance_type:\n+ logger.warning(\"Instance type is unused for Slurm jobs.\")\n+ # Warn that storage mounts are currently unsupported.\n+ if len(job.storage_mounts.items()) > 0:\n+ logger.warning(\"Storage mounts are currently unsupported for Slurm jobs.\")\n+\n+\n+class SlurmCluster(BaseCluster):\n+ \"\"\"A cluster implementation backed by a Slurm scheduler.\"\"\"\n+\n+ def __init__(self, name: str, client: SlurmClient) -> None:\n+ \"\"\"Initializes a new instance of the SlurmCluster class.\"\"\"\n+ self._name = name\n+ self._client = client\n+\n+ def __eq__(self, other: Any) -> bool:\n+ \"\"\"Checks if two SlurmClusters are equal.\"\"\"\n+ if not isinstance(other, SlurmCluster):\n+ return False\n+ return self.name() == other.name()\n+\n+ def name(self) -> str:\n+ \"\"\"Gets the name of the cluster.\"\"\"\n+ return self._name\n+\n+ def get_job(self, job_id: str) -> Optional[JobStatus]:\n+ \"\"\"Gets the jobs on this cluster if it exists, else returns None.\"\"\"\n+ for job in self.get_jobs():\n+ if job.id == job_id:\n+ return job\n+ return None\n+\n+ def get_jobs(self) -> list[JobStatus]:\n+ \"\"\"Lists the jobs on this cluster.\"\"\"\n+ jobs = self._client.list_jobs()\n+ for job in jobs:\n+ job.cluster = self._name\n+ return jobs\n+\n+ def cancel_job(self, job_id: str) -> JobStatus:\n+ \"\"\"Cancels the specified job on this cluster.\"\"\"\n+ self._client.cancel(job_id)\n+ job = self.get_job(job_id)\n+ if job is None:\n+ raise RuntimeError(f\"Job {job_id} not found.\")\n+ return job\n+\n+ def run_job(self, job: JobConfig) -> JobStatus:\n+ \"\"\"Runs the specified job on this 
cluster.\n+\n+ For Slurm this method consists of 5 parts:\n+\n+ 1. Copy the working directory to ~/oumi_launcher/$JOB_NAME.\n+ 2. Check if there is a conda installation at /home/$USER/miniconda3/envs/oumi.\n+ If not, install it.\n+ 3. Copy all file mounts.\n+ 4. Create a job script with all env vars, setup, and run commands.\n+ 5. CD into the working directory and submit the job.\n+\n+ Args:\n+ job: The job to run.\n+\n+ Returns:\n+ JobStatus: The job status.\n+ \"\"\"\n+ _validate_job_config(job)\n+ job_name = job.name or uuid.uuid1().hex\n+ submission_time = _format_date(datetime.now())\n+ remote_working_dir = Path(f\"~/oumi_launcher/{submission_time}\")\n+ # Copy the working directory to ~/oumi_launcher/...\n+ self._client.put_recursive(job.working_dir, str(remote_working_dir))\n+ # Copy all file mounts.\n+ for remote_path, local_path in job.file_mounts.items():\n+ self._client.put_recursive(local_path, remote_path)\n+ # Create the job script by merging envs, setup, and run commands.\n+ job_script = _create_job_script(job)\n+ script_path = remote_working_dir / \"oumi_job.sh\"\n+ self._client.put(job_script, str(script_path))\n+ # Set the proper CHMOD permissions.\n+ self._client.run_commands([f\"chmod +x {script_path}\"])\n+ # Submit the job.\n+ job_id = self._client.submit_job(\n+ str(script_path),\n+ str(remote_working_dir),\n+ job.num_nodes,\n+ job_name,\n+ )\n+ max_retries = 3\n+ wait_time = 5\n+ for _ in range(max_retries):\n+ job_status = self.get_job(job_id)\n+ if job_status is not None:\n+ return job_status\n+ logger.info(f\"Job {job_id} not found. Retrying in {wait_time} seconds.\")\n+ time.sleep(wait_time)\n+ job_status = self.get_job(job_id)\n+ if job_status is None:\n+ raise RuntimeError(f\"Job {job_id} not found after submission.\")\n+ return job_status\n+\n+ def stop(self) -> None:\n+ \"\"\"This is a no-op for Slurm clusters.\"\"\"\n+ pass\n+\n+ def down(self) -> None:\n+ \"\"\"This is a no-op for Slurm clusters.\"\"\"\n+ pass\ndiff --git a/tests/unit/launcher/clients/test_slurm_client.py b/tests/unit/launcher/clients/test_slurm_client.py\nindex 14d3071f3..cec4c8f97 100644\n--- a/tests/unit/launcher/clients/test_slurm_client.py\n+++ b/tests/unit/launcher/clients/test_slurm_client.py\n@@ -10,7 +10,8 @@\n \n _CTRL_PATH: str = \"-S ~/.ssh/control-%h-%p-%r\"\n _SACCT_CMD = (\n- \"sacct --user=user --format='JobId%-30,JobName%30,User%30,State%30,Reason%30'\"\n+ \"sacct --user=user --format='JobId%-30,JobName%30,User%30,State%30,Reason%30' \"\n+ \"-X --starttime 2025-01-01\"\n )\n \n \ndiff --git a/tests/unit/launcher/clouds/test_slurm_cloud.py b/tests/unit/launcher/clouds/test_slurm_cloud.py\nnew file mode 100644\nindex 000000000..a85fbe310\n--- /dev/null\n+++ b/tests/unit/launcher/clouds/test_slurm_cloud.py\n@@ -0,0 +1,237 @@\n+from unittest.mock import Mock, call, patch\n+\n+import pytest\n+\n+from oumi.core.configs import JobConfig, JobResources, StorageMount\n+from oumi.core.launcher import JobStatus\n+from oumi.core.registry import REGISTRY, RegistryType\n+from oumi.launcher.clients.slurm_client import SlurmClient\n+from oumi.launcher.clouds.slurm_cloud import SlurmCloud\n+from oumi.launcher.clusters.slurm_cluster import SlurmCluster\n+\n+\n+#\n+# Fixtures\n+#\[email protected]\n+def mock_slurm_client():\n+ with patch(\"oumi.launcher.clouds.slurm_cloud.SlurmClient\") as client:\n+ yield client\n+\n+\[email protected]\n+def mock_slurm_cluster():\n+ with patch(\"oumi.launcher.clouds.slurm_cloud.SlurmCluster\") as cluster:\n+ yield cluster\n+\n+\[email protected]\n+def 
mock_os():\n+ with patch(\"oumi.launcher.clouds.slurm_cloud.os\") as os_mock:\n+ os_mock.getenv.return_value = \"\"\n+ yield os_mock\n+\n+\n+def _get_default_job(cloud: str) -> JobConfig:\n+ resources = JobResources(\n+ cloud=cloud,\n+ region=\"us-central1\",\n+ zone=None,\n+ accelerators=\"A100-80GB\",\n+ cpus=\"4\",\n+ memory=\"64\",\n+ instance_type=None,\n+ use_spot=True,\n+ disk_size=512,\n+ disk_tier=\"low\",\n+ )\n+ return JobConfig(\n+ name=\"myjob\",\n+ user=\"user\",\n+ working_dir=\"./\",\n+ num_nodes=2,\n+ resources=resources,\n+ envs={\"var1\": \"val1\"},\n+ file_mounts={},\n+ storage_mounts={\n+ \"~/home/remote/path/gcs/\": StorageMount(\n+ source=\"gs://mybucket/\", store=\"gcs\"\n+ )\n+ },\n+ setup=\"pip install -r requirements.txt\",\n+ run=\"./hello_world.sh\",\n+ )\n+\n+\n+#\n+# Tests\n+#\n+def test_slurm_cloud_up_cluster(mock_slurm_client, mock_os, mock_slurm_cluster):\n+ cloud = SlurmCloud()\n+ mock_client = Mock(spec=SlurmClient)\n+ mock_slurm_client.side_effect = [mock_client]\n+ mock_cluster = Mock(spec=SlurmCluster)\n+ mock_slurm_cluster.side_effect = [mock_cluster]\n+ expected_job_status = JobStatus(\n+ id=\"job_id\",\n+ cluster=\"user@somehost\",\n+ name=\"foo\",\n+ status=\"running\",\n+ metadata=\"bar\",\n+ done=False,\n+ )\n+ mock_cluster.run_job.return_value = expected_job_status\n+ job = _get_default_job(\"slurm\")\n+ job_status = cloud.up_cluster(job, \"user@somehost\")\n+ mock_slurm_client.assert_called_once_with(\n+ user=\"user\", slurm_host=\"somehost\", cluster_name=\"user@somehost\"\n+ )\n+ mock_cluster.run_job.assert_called_once_with(job)\n+ assert job_status == expected_job_status\n+\n+\n+def test_slurm_cloud_up_cluster_fails_mismatched_user(\n+ mock_slurm_client, mock_os, mock_slurm_cluster\n+):\n+ cloud = SlurmCloud()\n+ with pytest.raises(\n+ ValueError,\n+ match=(\n+ \"Invalid cluster name: `user1@somehost`. 
\"\n+ \"User must match the provided job user: `user`.\"\n+ ),\n+ ):\n+ _ = cloud.up_cluster(_get_default_job(\"slurm\"), \"user1@somehost\")\n+\n+\n+def test_slurm_cloud_init_with_connections(mock_os, mock_slurm_client):\n+ mock_client = Mock(spec=SlurmClient)\n+ mock_os.getenv.return_value = \"user1@host1,user2@host2\"\n+ mock_slurm_client.side_effect = [mock_client, mock_client]\n+ cloud = SlurmCloud()\n+ cluster_names = [cluster.name() for cluster in cloud.list_clusters()]\n+ cluster_names.sort()\n+ assert cluster_names == [\n+ \"user1@host1\",\n+ \"user2@host2\",\n+ ]\n+ mock_slurm_client.assert_has_calls(\n+ [\n+ call(user=\"user1\", slurm_host=\"host1\", cluster_name=\"user1@host1\"),\n+ call(user=\"user2\", slurm_host=\"host2\", cluster_name=\"user2@host2\"),\n+ ]\n+ )\n+\n+\n+def test_slurm_cloud_init_skips_malformed_connections(mock_os, mock_slurm_client):\n+ mock_client = Mock(spec=SlurmClient)\n+ mock_os.getenv.return_value = \"user1@host1,foob@@@ar, user2@host2 , user3@host3\"\n+ mock_slurm_client.side_effect = [mock_client, mock_client, mock_client]\n+ cloud = SlurmCloud()\n+ cluster_names = [cluster.name() for cluster in cloud.list_clusters()]\n+ cluster_names.sort()\n+ assert cluster_names == [\n+ \"user1@host1\",\n+ \"user2@host2\",\n+ \"user3@host3\",\n+ ]\n+ mock_slurm_client.assert_has_calls(\n+ [\n+ call(user=\"user1\", slurm_host=\"host1\", cluster_name=\"user1@host1\"),\n+ call(user=\"user2\", slurm_host=\"host2\", cluster_name=\"user2@host2\"),\n+ call(user=\"user3\", slurm_host=\"host3\", cluster_name=\"user3@host3\"),\n+ ]\n+ )\n+\n+\n+def test_slurm_cloud_initialize_cluster(mock_os, mock_slurm_client):\n+ cloud = SlurmCloud()\n+ mock_client = Mock(spec=SlurmClient)\n+ mock_os.getenv.return_value = \"user1@host1,user2@host2,user3@host3\"\n+ mock_slurm_client.side_effect = [mock_client, mock_client, mock_client]\n+ clusters = cloud.initialize_clusters()\n+ clusters2 = cloud.initialize_clusters()\n+ mock_slurm_client.assert_has_calls(\n+ [\n+ call(user=\"user1\", slurm_host=\"host1\", cluster_name=\"user1@host1\"),\n+ call(user=\"user2\", slurm_host=\"host2\", cluster_name=\"user2@host2\"),\n+ call(user=\"user3\", slurm_host=\"host3\", cluster_name=\"user3@host3\"),\n+ ]\n+ )\n+ cluster_names = [cluster.name() for cluster in clusters]\n+ cluster_names.sort()\n+ assert cluster_names == [\n+ \"user1@host1\",\n+ \"user2@host2\",\n+ \"user3@host3\",\n+ ]\n+ # Verify that the second initialization returns the same clusters.\n+ assert clusters == clusters2\n+\n+\n+def test_slurm_cloud_list_clusters(mock_os, mock_slurm_client):\n+ cloud = SlurmCloud()\n+ mock_client = Mock(spec=SlurmClient)\n+ mock_os.getenv.return_value = \"user1@host1,user2@host2,user3@host3\"\n+ mock_slurm_client.side_effect = [mock_client, mock_client, mock_client]\n+ assert [] == cloud.list_clusters()\n+ clusters = cloud.initialize_clusters()\n+ mock_slurm_client.assert_has_calls(\n+ [\n+ call(user=\"user1\", slurm_host=\"host1\", cluster_name=\"user1@host1\"),\n+ call(user=\"user2\", slurm_host=\"host2\", cluster_name=\"user2@host2\"),\n+ call(user=\"user3\", slurm_host=\"host3\", cluster_name=\"user3@host3\"),\n+ ]\n+ )\n+ clusters = cloud.list_clusters()\n+ expected_clusters = [\n+ \"user1@host1\",\n+ \"user2@host2\",\n+ \"user3@host3\",\n+ ]\n+ cluster_names = [cluster.name() for cluster in clusters]\n+ cluster_names.sort()\n+ assert cluster_names == expected_clusters\n+\n+\n+def test_slurm_cloud_get_cluster_empty(mock_os, mock_slurm_client):\n+ cloud = SlurmCloud()\n+ # Check that there are no 
initial clusters.\n+ assert cloud.get_cluster(\"debug.user\") is None\n+\n+\n+def test_slurm_cloud_get_cluster_success(mock_os, mock_slurm_client):\n+ cloud = SlurmCloud()\n+ mock_client = Mock(spec=SlurmClient)\n+ mock_os.getenv.return_value = \"user1@host1,user2@host2,user3@host3\"\n+ mock_slurm_client.side_effect = [mock_client, mock_client, mock_client]\n+ assert [] == cloud.list_clusters()\n+ _ = cloud.initialize_clusters()\n+ mock_slurm_client.assert_has_calls(\n+ [\n+ call(user=\"user1\", slurm_host=\"host1\", cluster_name=\"user1@host1\"),\n+ call(user=\"user2\", slurm_host=\"host2\", cluster_name=\"user2@host2\"),\n+ call(user=\"user3\", slurm_host=\"host3\", cluster_name=\"user3@host3\"),\n+ ]\n+ )\n+ expected_clusters = [\n+ \"user1@host1\",\n+ \"user2@host2\",\n+ \"user3@host3\",\n+ ]\n+ for name in expected_clusters:\n+ cluster = cloud.get_cluster(name)\n+ assert cluster is not None\n+ assert cluster.name() == name\n+\n+\n+def test_slurm_cloud_get_cluster_fails(mock_os, mock_slurm_client):\n+ cloud = SlurmCloud()\n+ mock_client = Mock(spec=SlurmClient)\n+ mock_slurm_client.side_effect = [mock_client, mock_client]\n+ cloud.initialize_clusters()\n+ assert cloud.get_cluster(\"nonexistent\") is None\n+\n+\n+def test_slurm_cloud_builder_registered():\n+ assert REGISTRY.contains(\"slurm\", RegistryType.CLOUD)\ndiff --git a/tests/unit/launcher/clusters/test_slurm_cluster.py b/tests/unit/launcher/clusters/test_slurm_cluster.py\nnew file mode 100644\nindex 000000000..c125365a1\n--- /dev/null\n+++ b/tests/unit/launcher/clusters/test_slurm_cluster.py\n@@ -0,0 +1,714 @@\n+from datetime import datetime\n+from unittest.mock import Mock, call, patch\n+\n+import pytest\n+\n+from oumi.core.configs import JobConfig, JobResources, StorageMount\n+from oumi.core.launcher import JobStatus\n+from oumi.launcher.clients.slurm_client import SlurmClient\n+from oumi.launcher.clusters.slurm_cluster import SlurmCluster\n+\n+\n+#\n+# Fixtures\n+#\[email protected]\n+def mock_slurm_client():\n+ yield Mock(spec=SlurmClient)\n+\n+\[email protected]\n+def mock_time():\n+ with patch(\"oumi.launcher.clusters.slurm_cluster.time\") as mock_t:\n+ yield mock_t\n+\n+\[email protected]\n+def mock_datetime():\n+ with patch(\"oumi.launcher.clusters.slurm_cluster.datetime\") as mock_dt:\n+ mock_dt.now.return_value = datetime(2024, 10, 9, 13, 4, 24, 513094)\n+ yield mock_dt\n+\n+\n+def _get_default_job(cloud: str) -> JobConfig:\n+ resources = JobResources(\n+ cloud=cloud,\n+ region=\"us-central1\",\n+ zone=None,\n+ accelerators=\"A100-80GB\",\n+ cpus=\"4\",\n+ memory=\"64\",\n+ instance_type=None,\n+ use_spot=True,\n+ disk_size=512,\n+ disk_tier=\"low\",\n+ )\n+ return JobConfig(\n+ name=\"myjob\",\n+ user=\"user\",\n+ working_dir=\"./\",\n+ num_nodes=2,\n+ resources=resources,\n+ envs={\"var1\": \"val1\"},\n+ file_mounts={\n+ \"~/home/remote/path.bar\": \"~/local/path.bar\",\n+ \"~/home/remote/path2.txt\": \"~/local/path2.txt\",\n+ },\n+ storage_mounts={\n+ \"~/home/remote/path/gcs/\": StorageMount(\n+ source=\"gs://mybucket/\", store=\"gcs\"\n+ )\n+ },\n+ setup=(\n+ \"#SBATCH --gpus-per-task=8 \\n#SBATCH --cpus-per-task=4\\n\"\n+ \"pip install -r requirements.txt\"\n+ ),\n+ run=\"./hello_world.sh\",\n+ )\n+\n+\n+#\n+# Tests\n+#\n+def test_slurm_cluster_name(mock_datetime, mock_slurm_client):\n+ cluster = SlurmCluster(\"demand.einstein\", mock_slurm_client)\n+ assert cluster.name() == \"demand.einstein\"\n+\n+ cluster = SlurmCluster(\"debug.einstein\", mock_slurm_client)\n+ assert cluster.name() == 
\"debug.einstein\"\n+\n+ cluster = SlurmCluster(\"debug-scaling.einstein\", mock_slurm_client)\n+ assert cluster.name() == \"debug-scaling.einstein\"\n+\n+ cluster = SlurmCluster(\"preemptable.einstein\", mock_slurm_client)\n+ assert cluster.name() == \"preemptable.einstein\"\n+\n+ cluster = SlurmCluster(\"prod.einstein\", mock_slurm_client)\n+ assert cluster.name() == \"prod.einstein\"\n+\n+\n+def test_slurm_cluster_get_job_valid_id(mock_datetime, mock_slurm_client):\n+ cluster = SlurmCluster(\"debug@host\", mock_slurm_client)\n+ mock_slurm_client.list_jobs.return_value = [\n+ JobStatus(\n+ id=\"myjob\",\n+ name=\"some name\",\n+ status=\"running\",\n+ metadata=\"\",\n+ cluster=\"mycluster\",\n+ done=False,\n+ ),\n+ JobStatus(\n+ id=\"job2\",\n+ name=\"some\",\n+ status=\"running\",\n+ metadata=\"\",\n+ cluster=\"mycluster\",\n+ done=False,\n+ ),\n+ JobStatus(\n+ id=\"final job\",\n+ name=\"name3\",\n+ status=\"running\",\n+ metadata=\"\",\n+ cluster=\"mycluster\",\n+ done=False,\n+ ),\n+ ]\n+ job = cluster.get_job(\"myjob\")\n+ mock_slurm_client.list_jobs.assert_called_once_with()\n+ assert job is not None\n+ assert job.id == \"myjob\"\n+ assert job.cluster == \"debug@host\"\n+\n+\n+def test_slurm_cluster_get_job_invalid_id_empty(mock_datetime, mock_slurm_client):\n+ cluster = SlurmCluster(\"debug@host\", mock_slurm_client)\n+ mock_slurm_client.list_jobs.return_value = []\n+ job = cluster.get_job(\"myjob\")\n+ mock_slurm_client.list_jobs.assert_called_once_with()\n+ assert job is None\n+\n+\n+def test_slurm_cluster_get_job_invalid_id_nonempty(mock_datetime, mock_slurm_client):\n+ cluster = SlurmCluster(\"debug@host\", mock_slurm_client)\n+ mock_slurm_client.list_jobs.return_value = [\n+ JobStatus(\n+ id=\"myjob\",\n+ name=\"some name\",\n+ status=\"running\",\n+ metadata=\"\",\n+ cluster=\"mycluster\",\n+ done=False,\n+ ),\n+ JobStatus(\n+ id=\"job2\",\n+ name=\"some\",\n+ status=\"running\",\n+ metadata=\"\",\n+ cluster=\"mycluster\",\n+ done=False,\n+ ),\n+ JobStatus(\n+ id=\"final job\",\n+ name=\"name3\",\n+ status=\"running\",\n+ metadata=\"\",\n+ cluster=\"mycluster\",\n+ done=False,\n+ ),\n+ ]\n+ job = cluster.get_job(\"wrong job\")\n+ mock_slurm_client.list_jobs.assert_called_once_with()\n+ assert job is None\n+\n+\n+def test_slurm_cluster_get_jobs_nonempty(mock_datetime, mock_slurm_client):\n+ cluster = SlurmCluster(\"debug@host\", mock_slurm_client)\n+ mock_slurm_client.list_jobs.return_value = [\n+ JobStatus(\n+ id=\"myjob\",\n+ name=\"some name\",\n+ status=\"running\",\n+ metadata=\"\",\n+ cluster=\"mycluster\",\n+ done=False,\n+ ),\n+ JobStatus(\n+ id=\"job2\",\n+ name=\"some\",\n+ status=\"running\",\n+ metadata=\"\",\n+ cluster=\"mycluster\",\n+ done=False,\n+ ),\n+ JobStatus(\n+ id=\"final job\",\n+ name=\"name3\",\n+ status=\"running\",\n+ metadata=\"\",\n+ cluster=\"mycluster\",\n+ done=False,\n+ ),\n+ ]\n+ jobs = cluster.get_jobs()\n+ mock_slurm_client.list_jobs.assert_called_once_with()\n+ expected_jobs = [\n+ JobStatus(\n+ id=\"myjob\",\n+ name=\"some name\",\n+ status=\"running\",\n+ metadata=\"\",\n+ cluster=\"debug@host\",\n+ done=False,\n+ ),\n+ JobStatus(\n+ id=\"job2\",\n+ name=\"some\",\n+ status=\"running\",\n+ metadata=\"\",\n+ cluster=\"debug@host\",\n+ done=False,\n+ ),\n+ JobStatus(\n+ id=\"final job\",\n+ name=\"name3\",\n+ status=\"running\",\n+ metadata=\"\",\n+ cluster=\"debug@host\",\n+ done=False,\n+ ),\n+ ]\n+ assert jobs == expected_jobs\n+\n+\n+def test_slurm_cluster_get_jobs_empty(mock_datetime, mock_slurm_client):\n+ cluster = 
SlurmCluster(\"debug@host\", mock_slurm_client)\n+ mock_slurm_client.list_jobs.return_value = []\n+ jobs = cluster.get_jobs()\n+ mock_slurm_client.list_jobs.assert_called_once_with()\n+ expected_jobs = []\n+ assert jobs == expected_jobs\n+\n+\n+def test_slurm_cluster_cancel_job(mock_datetime, mock_slurm_client):\n+ cluster = SlurmCluster(\"prod@host\", mock_slurm_client)\n+ mock_slurm_client.list_jobs.return_value = [\n+ JobStatus(\n+ id=\"myjob\",\n+ name=\"some name\",\n+ status=\"running\",\n+ metadata=\"\",\n+ cluster=\"debug@host\",\n+ done=False,\n+ ),\n+ JobStatus(\n+ id=\"job2\",\n+ name=\"some\",\n+ status=\"running\",\n+ metadata=\"\",\n+ cluster=\"debug@host\",\n+ done=False,\n+ ),\n+ JobStatus(\n+ id=\"final job\",\n+ name=\"name3\",\n+ status=\"running\",\n+ metadata=\"\",\n+ cluster=\"debug@host\",\n+ done=False,\n+ ),\n+ ]\n+ job_status = cluster.cancel_job(\"job2\")\n+ expected_status = JobStatus(\n+ id=\"job2\",\n+ name=\"some\",\n+ status=\"running\",\n+ metadata=\"\",\n+ cluster=\"prod@host\",\n+ done=False,\n+ )\n+ mock_slurm_client.cancel.assert_called_once_with(\n+ \"job2\",\n+ )\n+ assert job_status == expected_status\n+\n+\n+def test_slurm_cluster_cancel_job_fails(mock_datetime, mock_slurm_client):\n+ cluster = SlurmCluster(\"prod@host\", mock_slurm_client)\n+ mock_slurm_client.list_jobs.return_value = [\n+ JobStatus(\n+ id=\"job2\",\n+ name=\"some\",\n+ status=\"running\",\n+ metadata=\"\",\n+ cluster=\"debug@host\",\n+ done=False,\n+ ),\n+ ]\n+ with pytest.raises(RuntimeError):\n+ _ = cluster.cancel_job(\"myjobid\")\n+\n+\n+def test_slurm_cluster_run_job(mock_datetime, mock_slurm_client):\n+ cluster = SlurmCluster(\"debug@host\", mock_slurm_client)\n+ mock_successful_cmd = Mock()\n+ mock_successful_cmd.exit_code = 0\n+ mock_slurm_client.run_commands.return_value = mock_successful_cmd\n+ mock_slurm_client.submit_job.return_value = \"1234\"\n+ mock_slurm_client.list_jobs.return_value = [\n+ JobStatus(\n+ id=\"1234\",\n+ name=\"some name\",\n+ status=\"RUNNING\",\n+ metadata=\"\",\n+ cluster=\"mycluster\",\n+ done=False,\n+ )\n+ ]\n+ expected_status = JobStatus(\n+ id=\"1234\",\n+ name=\"some name\",\n+ status=\"RUNNING\",\n+ metadata=\"\",\n+ cluster=\"debug@host\",\n+ done=False,\n+ )\n+ job_status = cluster.run_job(_get_default_job(\"slurm\"))\n+ mock_slurm_client.put_recursive.assert_has_calls(\n+ [\n+ call(\n+ \"./\",\n+ \"~/oumi_launcher/20241009_130424513094\",\n+ ),\n+ call(\n+ \"~/local/path.bar\",\n+ \"~/home/remote/path.bar\",\n+ ),\n+ call(\n+ \"~/local/path2.txt\",\n+ \"~/home/remote/path2.txt\",\n+ ),\n+ ],\n+ )\n+ mock_slurm_client.run_commands.assert_has_calls(\n+ [\n+ call([\"chmod +x ~/oumi_launcher/20241009_130424513094/oumi_job.sh\"]),\n+ ]\n+ )\n+ job_script = (\n+ \"#!/bin/bash\\n#SBATCH --gpus-per-task=8 \\n#SBATCH --cpus-per-task=4\\n\\n\"\n+ \"export var1=val1\\n\\n\"\n+ \"pip install -r requirements.txt\\n./hello_world.sh\\n\"\n+ )\n+ mock_slurm_client.put.assert_called_once_with(\n+ job_script, \"~/oumi_launcher/20241009_130424513094/oumi_job.sh\"\n+ )\n+ mock_slurm_client.submit_job.assert_called_once_with(\n+ \"~/oumi_launcher/20241009_130424513094/oumi_job.sh\",\n+ \"~/oumi_launcher/20241009_130424513094\",\n+ 2,\n+ \"myjob\",\n+ )\n+ mock_slurm_client.list_jobs.assert_called_once_with()\n+ assert job_status == expected_status\n+\n+\n+def test_slurm_cluster_run_job_with_polling_succeeds(\n+ mock_time, mock_datetime, mock_slurm_client\n+):\n+ mock_time.sleep.side_effect = [None, None, None, None, None]\n+ mock_successful_cmd = Mock()\n+ 
mock_successful_cmd.exit_code = 0\n+ mock_failed_cmd = Mock()\n+ mock_failed_cmd.exit_code = 1\n+ mock_slurm_client.run_commands.side_effect = [\n+ mock_failed_cmd,\n+ mock_successful_cmd,\n+ mock_successful_cmd,\n+ mock_successful_cmd,\n+ ]\n+ cluster = SlurmCluster(\"debug@host\", mock_slurm_client)\n+ mock_slurm_client.submit_job.return_value = \"1234\"\n+ mock_slurm_client.list_jobs.side_effect = [\n+ [],\n+ [\n+ JobStatus(\n+ id=\"1\",\n+ name=\"some name\",\n+ status=\"RUNNING\",\n+ metadata=\"\",\n+ cluster=\"mycluster\",\n+ done=False,\n+ )\n+ ],\n+ [\n+ JobStatus(\n+ id=\"1234\",\n+ name=\"some name\",\n+ status=\"RUNNING\",\n+ metadata=\"\",\n+ cluster=\"mycluster\",\n+ done=False,\n+ )\n+ ],\n+ ]\n+ expected_status = JobStatus(\n+ id=\"1234\",\n+ name=\"some name\",\n+ status=\"RUNNING\",\n+ metadata=\"\",\n+ cluster=\"debug@host\",\n+ done=False,\n+ )\n+ job_status = cluster.run_job(_get_default_job(\"slurm\"))\n+ mock_slurm_client.put_recursive.assert_has_calls(\n+ [\n+ call(\n+ \"./\",\n+ \"~/oumi_launcher/20241009_130424513094\",\n+ ),\n+ call(\n+ \"~/local/path.bar\",\n+ \"~/home/remote/path.bar\",\n+ ),\n+ call(\n+ \"~/local/path2.txt\",\n+ \"~/home/remote/path2.txt\",\n+ ),\n+ ],\n+ )\n+ mock_slurm_client.run_commands.assert_has_calls(\n+ [\n+ call([\"chmod +x ~/oumi_launcher/20241009_130424513094/oumi_job.sh\"]),\n+ ]\n+ )\n+ job_script = (\n+ \"#!/bin/bash\\n#SBATCH --gpus-per-task=8 \\n#SBATCH --cpus-per-task=4\\n\\n\"\n+ \"export var1=val1\\n\\n\"\n+ \"pip install -r requirements.txt\\n./hello_world.sh\\n\"\n+ )\n+ mock_slurm_client.put.assert_called_once_with(\n+ job_script, \"~/oumi_launcher/20241009_130424513094/oumi_job.sh\"\n+ )\n+ mock_slurm_client.submit_job.assert_called_once_with(\n+ \"~/oumi_launcher/20241009_130424513094/oumi_job.sh\",\n+ \"~/oumi_launcher/20241009_130424513094\",\n+ 2,\n+ \"myjob\",\n+ )\n+ mock_slurm_client.list_jobs.assert_has_calls([call(), call(), call()])\n+ mock_time.sleep.assert_has_calls([call(5), call(5)])\n+ assert job_status == expected_status\n+\n+\n+def test_slurm_cluster_run_job_no_name(mock_datetime, mock_slurm_client):\n+ mock_successful_cmd = Mock()\n+ mock_successful_cmd.exit_code = 0\n+ mock_slurm_client.run_commands.return_value = mock_successful_cmd\n+ cluster = SlurmCluster(\"debug@host\", mock_slurm_client)\n+ mock_slurm_client.submit_job.return_value = \"1234\"\n+ mock_slurm_client.list_jobs.return_value = [\n+ JobStatus(\n+ id=\"1234\",\n+ name=\"some name\",\n+ status=\"RUNNING\",\n+ metadata=\"\",\n+ cluster=\"mycluster\",\n+ done=False,\n+ )\n+ ]\n+ expected_status = JobStatus(\n+ id=\"1234\",\n+ name=\"some name\",\n+ status=\"RUNNING\",\n+ metadata=\"\",\n+ cluster=\"debug@host\",\n+ done=False,\n+ )\n+ job = _get_default_job(\"slurm\")\n+ job.name = None\n+ with patch(\"oumi.launcher.clusters.slurm_cluster.uuid\") as mock_uuid:\n+ mock_hex = Mock()\n+ mock_hex.hex = \"1-2-3\"\n+ mock_uuid.uuid1.return_value = mock_hex\n+ job_status = cluster.run_job(job)\n+ mock_slurm_client.put_recursive.assert_has_calls(\n+ [\n+ call(\n+ \"./\",\n+ \"~/oumi_launcher/20241009_130424513094\",\n+ ),\n+ call(\n+ \"~/local/path.bar\",\n+ \"~/home/remote/path.bar\",\n+ ),\n+ call(\n+ \"~/local/path2.txt\",\n+ \"~/home/remote/path2.txt\",\n+ ),\n+ ],\n+ )\n+ mock_slurm_client.run_commands.assert_has_calls(\n+ [\n+ call([\"chmod +x ~/oumi_launcher/20241009_130424513094/oumi_job.sh\"]),\n+ ]\n+ )\n+ job_script = (\n+ \"#!/bin/bash\\n#SBATCH --gpus-per-task=8 \\n#SBATCH --cpus-per-task=4\\n\\n\"\n+ \"export var1=val1\\n\\n\"\n+ \"pip 
install -r requirements.txt\\n./hello_world.sh\\n\"\n+ )\n+ mock_slurm_client.put.assert_called_once_with(\n+ job_script, \"~/oumi_launcher/20241009_130424513094/oumi_job.sh\"\n+ )\n+ mock_slurm_client.submit_job.assert_called_once_with(\n+ \"~/oumi_launcher/20241009_130424513094/oumi_job.sh\",\n+ \"~/oumi_launcher/20241009_130424513094\",\n+ 2,\n+ \"1-2-3\",\n+ )\n+ mock_slurm_client.list_jobs.assert_called_once_with()\n+ assert job_status == expected_status\n+\n+\n+def test_slurm_cluster_run_job_no_mounts(mock_datetime, mock_slurm_client):\n+ mock_successful_cmd = Mock()\n+ mock_successful_cmd.exit_code = 0\n+ mock_slurm_client.run_commands.return_value = mock_successful_cmd\n+ cluster = SlurmCluster(\"debug@host\", mock_slurm_client)\n+ mock_slurm_client.submit_job.return_value = \"1234\"\n+ mock_slurm_client.list_jobs.return_value = [\n+ JobStatus(\n+ id=\"1234\",\n+ name=\"some name\",\n+ status=\"RUNNING\",\n+ metadata=\"\",\n+ cluster=\"mycluster\",\n+ done=False,\n+ )\n+ ]\n+ expected_status = JobStatus(\n+ id=\"1234\",\n+ name=\"some name\",\n+ status=\"RUNNING\",\n+ metadata=\"\",\n+ cluster=\"debug@host\",\n+ done=False,\n+ )\n+ job = _get_default_job(\"slurm\")\n+ job.file_mounts = {}\n+ job_status = cluster.run_job(job)\n+ mock_slurm_client.put_recursive.assert_has_calls(\n+ [\n+ call(\n+ \"./\",\n+ \"~/oumi_launcher/20241009_130424513094\",\n+ ),\n+ ],\n+ )\n+ mock_slurm_client.run_commands.assert_has_calls(\n+ [\n+ call([\"chmod +x ~/oumi_launcher/20241009_130424513094/oumi_job.sh\"]),\n+ ]\n+ )\n+ job_script = (\n+ \"#!/bin/bash\\n#SBATCH --gpus-per-task=8 \\n#SBATCH --cpus-per-task=4\\n\\n\"\n+ \"export var1=val1\\n\\n\"\n+ \"pip install -r requirements.txt\\n./hello_world.sh\\n\"\n+ )\n+ mock_slurm_client.put.assert_called_once_with(\n+ job_script, \"~/oumi_launcher/20241009_130424513094/oumi_job.sh\"\n+ )\n+ mock_slurm_client.submit_job.assert_called_once_with(\n+ \"~/oumi_launcher/20241009_130424513094/oumi_job.sh\",\n+ \"~/oumi_launcher/20241009_130424513094\",\n+ 2,\n+ \"myjob\",\n+ )\n+ mock_slurm_client.list_jobs.assert_called_once_with()\n+ assert job_status == expected_status\n+\n+\n+def test_slurm_cluster_run_job_no_pbs(mock_datetime, mock_slurm_client):\n+ mock_successful_cmd = Mock()\n+ mock_successful_cmd.exit_code = 0\n+ mock_slurm_client.run_commands.return_value = mock_successful_cmd\n+ cluster = SlurmCluster(\"debug@host\", mock_slurm_client)\n+ mock_slurm_client.submit_job.return_value = \"1234\"\n+ mock_slurm_client.list_jobs.return_value = [\n+ JobStatus(\n+ id=\"1234\",\n+ name=\"some name\",\n+ status=\"RUNNING\",\n+ metadata=\"\",\n+ cluster=\"mycluster\",\n+ done=False,\n+ )\n+ ]\n+ expected_status = JobStatus(\n+ id=\"1234\",\n+ name=\"some name\",\n+ status=\"RUNNING\",\n+ metadata=\"\",\n+ cluster=\"debug@host\",\n+ done=False,\n+ )\n+ job = _get_default_job(\"slurm\")\n+ job.file_mounts = {}\n+ job.setup = \"small setup\"\n+ job.run = \"./hello_world.sh\"\n+ job_status = cluster.run_job(job)\n+ mock_slurm_client.put_recursive.assert_has_calls(\n+ [\n+ call(\n+ \"./\",\n+ \"~/oumi_launcher/20241009_130424513094\",\n+ ),\n+ ],\n+ )\n+ mock_slurm_client.run_commands.assert_has_calls(\n+ [\n+ call([\"chmod +x ~/oumi_launcher/20241009_130424513094/oumi_job.sh\"]),\n+ ]\n+ )\n+ job_script = (\n+ \"#!/bin/bash\\n\\n\" \"export var1=val1\\n\\n\" \"small setup\\n./hello_world.sh\\n\"\n+ )\n+ mock_slurm_client.put.assert_called_once_with(\n+ job_script, \"~/oumi_launcher/20241009_130424513094/oumi_job.sh\"\n+ )\n+ 
mock_slurm_client.submit_job.assert_called_once_with(\n+ \"~/oumi_launcher/20241009_130424513094/oumi_job.sh\",\n+ \"~/oumi_launcher/20241009_130424513094\",\n+ 2,\n+ \"myjob\",\n+ )\n+ mock_slurm_client.list_jobs.assert_called_once_with()\n+ assert job_status == expected_status\n+\n+\n+def test_slurm_cluster_run_job_no_setup(mock_datetime, mock_slurm_client):\n+ mock_successful_cmd = Mock()\n+ mock_successful_cmd.exit_code = 0\n+ mock_slurm_client.run_commands.return_value = mock_successful_cmd\n+ cluster = SlurmCluster(\"debug@host\", mock_slurm_client)\n+ mock_slurm_client.submit_job.return_value = \"1234\"\n+ mock_slurm_client.list_jobs.return_value = [\n+ JobStatus(\n+ id=\"1234\",\n+ name=\"some name\",\n+ status=\"RUNNING\",\n+ metadata=\"\",\n+ cluster=\"mycluster\",\n+ done=False,\n+ )\n+ ]\n+ expected_status = JobStatus(\n+ id=\"1234\",\n+ name=\"some name\",\n+ status=\"RUNNING\",\n+ metadata=\"\",\n+ cluster=\"debug@host\",\n+ done=False,\n+ )\n+ job = _get_default_job(\"slurm\")\n+ job.file_mounts = {}\n+ job.setup = None\n+ job.run = \"./hello_world.sh\"\n+ job_status = cluster.run_job(job)\n+ mock_slurm_client.put_recursive.assert_has_calls(\n+ [\n+ call(\n+ \"./\",\n+ \"~/oumi_launcher/20241009_130424513094\",\n+ ),\n+ ],\n+ )\n+ mock_slurm_client.run_commands.assert_has_calls(\n+ [\n+ call([\"chmod +x ~/oumi_launcher/20241009_130424513094/oumi_job.sh\"]),\n+ ]\n+ )\n+ job_script = \"#!/bin/bash\\n\\n\" \"export var1=val1\\n\\n\" \"./hello_world.sh\\n\"\n+ mock_slurm_client.put.assert_called_once_with(\n+ job_script, \"~/oumi_launcher/20241009_130424513094/oumi_job.sh\"\n+ )\n+ mock_slurm_client.submit_job.assert_called_once_with(\n+ \"~/oumi_launcher/20241009_130424513094/oumi_job.sh\",\n+ \"~/oumi_launcher/20241009_130424513094\",\n+ 2,\n+ \"myjob\",\n+ )\n+ mock_slurm_client.list_jobs.assert_called_once_with()\n+ assert job_status == expected_status\n+\n+\n+def test_slurm_cluster_run_job_fails(mock_time, mock_datetime, mock_slurm_client):\n+ cluster = SlurmCluster(\"debug@host\", mock_slurm_client)\n+ mock_slurm_client.submit_job.return_value = \"234\"\n+ mock_slurm_client.list_jobs.return_value = [\n+ JobStatus(\n+ id=\"1234\",\n+ name=\"some name\",\n+ status=\"RUNNING\",\n+ metadata=\"\",\n+ cluster=\"mycluster\",\n+ done=False,\n+ )\n+ ]\n+ with pytest.raises(RuntimeError):\n+ _ = cluster.run_job(_get_default_job(\"slurm\"))\n+ mock_time.sleep.assert_has_calls([call(5), call(5), call(5)])\n+\n+\n+def test_slurm_cluster_down(mock_datetime, mock_slurm_client):\n+ cluster = SlurmCluster(\"debug-scaling@host\", mock_slurm_client)\n+ cluster.down()\n+ # Nothing to assert, this method is a no-op.\n+\n+\n+def test_slurm_cluster_stop(mock_datetime, mock_slurm_client):\n+ cluster = SlurmCluster(\"debug-scaling@host\", mock_slurm_client)\n+ cluster.stop()\n+ # Nothing to assert, this method is a no-op.\n"
}
|
[
{
"diff_hunk": "@@ -0,0 +1,158 @@\n+# Copyright 2025 - Oumi\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+\n+import os\n+import re\n+from dataclasses import dataclass\n+from typing import Optional\n+\n+from oumi.core.configs import JobConfig\n+from oumi.core.launcher import BaseCloud, BaseCluster, JobStatus\n+from oumi.core.registry import register_cloud_builder\n+from oumi.launcher.clients.slurm_client import SlurmClient\n+from oumi.launcher.clusters.slurm_cluster import SlurmCluster\n+from oumi.utils.logging import logger\n+\n+_OUMI_SLURM_CONNECTIONS = \"OUMI_SLURM_CONNECTIONS\"\n+\n+\n+@dataclass\n+class _ConnectionInfo:\n+ \"\"\"Dataclass to hold information about a connection.\"\"\"\n+\n+ hostname: str\n+ user: str\n+\n+ def name(self):\n+ return f\"{self.user}@{self.hostname}\"\n+\n+\n+def _parse_cluster_name(name: str) -> _ConnectionInfo:\n+ \"\"\"Parses the cluster name into queue and user components.\n+\n+ Args:\n+ name: The name of the cluster.\n+\n+ Returns:\n+ _ConnectionInfo: The parsed cluster information.\n+ \"\"\"\n+ # Expected format: <user>@<hostname>\n+ connection_regex = r\"^([a-zA-Z0-9\\.\\-\\_]+)\\@([a-zA-Z0-9\\.\\-\\_]+)\"\n+ match = re.match(connection_regex, name)\n+ if not match:\n+ raise ValueError(\n+ f\"Invalid cluster name: {name}. Must be in the format 'user@hostname'.\"\n+ )\n+ return _ConnectionInfo(hostname=match.group(2), user=match.group(1))\n+\n+\n+def _get_slurm_connections() -> list[_ConnectionInfo]:",
"line": null,
"original_line": 60,
"original_start_line": null,
"path": "src/oumi/launcher/clouds/slurm_cloud.py",
"start_line": null,
"text": "@user2:\nwould it make sense to move these utils into SlurmCloud class as `@user1`-s to group it all in one place?\n\n@author:\nDone!"
}
] |
46a054afa2d99a2dc0d67e852c86ab2f81150565
|
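The review thread above asks for the connection helpers to be grouped in one place; in the merged patch below they become `@staticmethod`s on `SlurmCluster` (`parse_cluster_name`, `get_slurm_connections`), with connections read from the `OUMI_SLURM_CONNECTIONS` environment variable. A minimal usage sketch of that surface, assuming the patch is applied; the user/host values are made-up examples:

```python
import os

from oumi.launcher.clusters.slurm_cluster import SlurmCluster

# Connections are discovered from a comma-separated env var (per the patch below).
os.environ["OUMI_SLURM_CONNECTIONS"] = "alice@login.hpc.example.org,bob@10.0.0.5"

# parse_cluster_name() splits a "user@hostname" cluster name; malformed names
# raise ValueError, and get_slurm_connections() skips them with a warning.
info = SlurmCluster.parse_cluster_name("alice@login.hpc.example.org")
assert info.user == "alice" and info.hostname == "login.hpc.example.org"

for conn in SlurmCluster.get_slurm_connections():
    print(conn.name)  # e.g. "alice@login.hpc.example.org"
```

`SlurmCloud` then builds one `SlurmClient`/`SlurmCluster` pair per connection when it is constructed.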
diff --git a/src/oumi/cli/env.py b/src/oumi/cli/env.py
index 32dbd25249..c4e93e766a 100644
--- a/src/oumi/cli/env.py
+++ b/src/oumi/cli/env.py
@@ -80,6 +80,7 @@ def env():
"LOCAL_RANK",
"LOCAL_WORLD_SIZE",
"OUMI_EXTRA_DEPS_FILE",
+ "OUMI_SLURM_CONNECTIONS",
"OUMI_USE_SPOT_VM",
"RANK",
"WORLD_SIZE",
diff --git a/src/oumi/launcher/clients/slurm_client.py b/src/oumi/launcher/clients/slurm_client.py
index 4ee2fe3f61..2f45895041 100644
--- a/src/oumi/launcher/clients/slurm_client.py
+++ b/src/oumi/launcher/clients/slurm_client.py
@@ -108,7 +108,10 @@ def _split_status_line(
A JobStatus object.
"""
if len(column_lengths) != 5:
- raise ValueError(f"Expected 5 fields, but found {len(column_lengths)}.")
+ raise ValueError(
+ f"Expected 5 fields, but found {len(column_lengths)}."
+ f" Invalid line: {line}."
+ )
fields = []
# Note: We can't use a simple split() here because empty fields are allowed.
for i in range(len(column_lengths)):
@@ -320,7 +323,12 @@ def list_jobs(self) -> list[JobStatus]:
A list of JobStatus.
"""
response_format = "JobId%-30,JobName%30,User%30,State%30,Reason%30"
- command = f"sacct --user={self._user} --format='{response_format}'"
+ # Forcibly list all jobs since Jan 1, 2025.
+ # Otherwise completed jobs older than ~24 hours may not be listed.
+ command = (
+ f"sacct --user={self._user} --format='{response_format}' -X "
+ "--starttime 2025-01-01"
+ )
result = self.run_commands([command])
if result.exit_code != 0:
raise RuntimeError(f"Failed to list jobs. stderr: {result.stderr}")
@@ -329,6 +337,17 @@ def list_jobs(self) -> list[JobStatus]:
jobs = []
if len(lines) < 2:
return jobs
+ # Look for a line starting in JobID followed by a line starting with "--".
+ start_idx = -1
+ for idx in range(len(lines) - 1):
+ if lines[idx].startswith("JobID") and lines[idx + 1].startswith("--"):
+ start_idx = idx
+ break
+ if start_idx == -1:
+ raise RuntimeError(
+ f"Failed to parse job list. Unexpected format: {result.stdout}"
+ )
+ lines = lines[start_idx:]
# The first two lines are metadata headers.
# The top line is composed of column titles.
# The second line is composed of ---- characters, each the length of a column.
diff --git a/src/oumi/launcher/clouds/__init__.py b/src/oumi/launcher/clouds/__init__.py
index ac4f3096df..cb4efc0aee 100644
--- a/src/oumi/launcher/clouds/__init__.py
+++ b/src/oumi/launcher/clouds/__init__.py
@@ -35,6 +35,7 @@
from oumi.launcher.clouds.local_cloud import LocalCloud
from oumi.launcher.clouds.polaris_cloud import PolarisCloud
from oumi.launcher.clouds.sky_cloud import SkyCloud
+from oumi.launcher.clouds.slurm_cloud import SlurmCloud
from oumi.utils import logging
logging.configure_dependency_warnings()
@@ -44,4 +45,5 @@
"LocalCloud",
"PolarisCloud",
"SkyCloud",
+ "SlurmCloud",
]
diff --git a/src/oumi/launcher/clouds/slurm_cloud.py b/src/oumi/launcher/clouds/slurm_cloud.py
new file mode 100644
index 0000000000..e532c1f219
--- /dev/null
+++ b/src/oumi/launcher/clouds/slurm_cloud.py
@@ -0,0 +1,107 @@
+# Copyright 2025 - Oumi
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from typing import Optional
+
+from oumi.core.configs import JobConfig
+from oumi.core.launcher import BaseCloud, BaseCluster, JobStatus
+from oumi.core.registry import register_cloud_builder
+from oumi.launcher.clients.slurm_client import SlurmClient
+from oumi.launcher.clusters.slurm_cluster import SlurmCluster
+
+
+class SlurmCloud(BaseCloud):
+ """A resource pool for managing jobs in Slurm clusters."""
+
+ def __init__(self):
+ """Initializes a new instance of the SlurmCloud class."""
+ # A mapping from cluster names to Slurm Cluster instances.
+ self._clusters = {}
+
+ # Initialize default connections.
+ self.initialize_clusters()
+
+ def _get_or_create_cluster(self, name: str) -> SlurmCluster:
+ """Gets the cluster with the specified name, or creates one if it doesn't exist.
+
+ Args:
+ name: The name of the cluster.
+
+ Returns:
+ SlurmCluster: The cluster instance.
+ """
+ if name not in self._clusters:
+ cluster_info = SlurmCluster.parse_cluster_name(name)
+ self._clusters[name] = SlurmCluster(
+ name,
+ SlurmClient(
+ user=cluster_info.user,
+ slurm_host=cluster_info.hostname,
+ cluster_name=cluster_info.name,
+ ),
+ )
+ return self._clusters[name]
+
+ def initialize_clusters(self) -> list[BaseCluster]:
+ """Initializes clusters for the specified user for all Slurm queues.
+
+ Returns:
+ List[SlurmCluster]: The list of initialized clusters.
+ """
+ connections = SlurmCluster.get_slurm_connections()
+ clusters = []
+ for c in connections:
+ cluster = self._get_or_create_cluster(c.name)
+ clusters.append(cluster)
+ return clusters
+
+ def up_cluster(self, job: JobConfig, name: Optional[str], **kwargs) -> JobStatus:
+ """Creates a cluster and starts the provided Job."""
+ if not job.user:
+ raise ValueError("User must be provided in the job config.")
+ if name:
+ cluster_info = SlurmCluster.parse_cluster_name(name)
+ if cluster_info.user != job.user:
+ raise ValueError(
+ f"Invalid cluster name: `{name}`. "
+ f"User must match the provided job user: `{job.user}`."
+ )
+ else:
+ raise ValueError(
+ "A cluster name must be provided for Slurm. "
+ "Cluster names are of the form 'user@hostname'."
+ )
+ cluster = self._get_or_create_cluster(cluster_info.name)
+ job_status = cluster.run_job(job)
+ if not job_status:
+ raise RuntimeError("Failed to start job.")
+ return job_status
+
+ def get_cluster(self, name) -> Optional[BaseCluster]:
+ """Gets the cluster with the specified name, or None if not found."""
+ clusters = self.list_clusters()
+ for cluster in clusters:
+ if cluster.name() == name:
+ return cluster
+ return None
+
+ def list_clusters(self) -> list[BaseCluster]:
+ """Lists the active clusters on this cloud."""
+ return list(self._clusters.values())
+
+
+@register_cloud_builder("slurm")
+def slurm_cloud_builder() -> SlurmCloud:
+ """Builds a SlurmCloud instance."""
+ return SlurmCloud()
diff --git a/src/oumi/launcher/clusters/slurm_cluster.py b/src/oumi/launcher/clusters/slurm_cluster.py
new file mode 100644
index 0000000000..c80a1a1d1e
--- /dev/null
+++ b/src/oumi/launcher/clusters/slurm_cluster.py
@@ -0,0 +1,289 @@
+# Copyright 2025 - Oumi
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import re
+import time
+import uuid
+from dataclasses import dataclass
+from datetime import datetime
+from functools import reduce
+from pathlib import Path
+from typing import Any, Optional
+
+from oumi.core.configs import JobConfig
+from oumi.core.launcher import BaseCluster, JobStatus
+from oumi.launcher.clients.slurm_client import SlurmClient
+from oumi.utils.logging import logger
+
+_OUMI_SLURM_CONNECTIONS = "OUMI_SLURM_CONNECTIONS"
+
+
+def _format_date(date: datetime) -> str:
+ """Formats the provided date as a string.
+
+ Args:
+ date: The date to format.
+
+ Returns:
+ The formatted date.
+ """
+ return date.strftime("%Y%m%d_%H%M%S%f")
+
+
+def _last_sbatch_line(script: list[str]) -> int:
+ """Finds the last SBATCH instruction line in the script.
+
+ Args:
+ script: The lines of the script.
+
+ Returns:
+ The index of the last SBATCH instruction line. -1 if not found.
+ """
+ return reduce(
+ lambda acc, val: val[0] if val[1].startswith("#SBATCH") else acc,
+ enumerate(script),
+ -1,
+ )
+
+
+def _create_job_script(job: JobConfig) -> str:
+ """Creates a job script for the specified job.
+
+ Args:
+ job: The job to create a script for.
+
+ Returns:
+ The script as a string.
+ """
+ setup_lines = [] if not job.setup else job.setup.strip().split("\n")
+ run_lines = job.run.strip().split("\n")
+ # Find the last SBATCH instruction line.
+ last_run_sbatch = _last_sbatch_line(run_lines) + 1
+ last_setup_sbatch = _last_sbatch_line(setup_lines) + 1
+ # Inject environment variables into the script after SBATCH instructions.
+ env_lines = [f"export {key}={value}" for key, value in job.envs.items()]
+ # Pad the environment variables with newlines.
+ env_lines = [""] + env_lines + [""] if env_lines else []
+ # Generate the job script.
+ # The script should have the following structure:
+ # 1. SBATCH instructions from Setup and Run commands (in that order).
+ # 2. Environment variables.
+ # 3. Setup commands.
+ # 4. Run commands.
+ output_lines = (
+ setup_lines[:last_setup_sbatch]
+ + run_lines[:last_run_sbatch]
+ + env_lines
+ + setup_lines[last_setup_sbatch:]
+ + run_lines[last_run_sbatch:]
+ )
+ # Always start the script with #!/bin/bash.
+ script_prefix = "#!/bin/bash"
+ if len(output_lines) > 0:
+ if not output_lines[0].startswith(script_prefix):
+ output_lines.insert(0, script_prefix)
+ # Join each line. Always end the script with a new line.
+ return "\n".join(output_lines) + "\n"
+
+
+def _validate_job_config(job: JobConfig) -> None:
+ """Validates the provided job configuration.
+
+ Args:
+ job: The job to validate.
+ """
+ if not job.user:
+ raise ValueError("User must be provided for Slurm jobs.")
+ if not job.working_dir:
+ raise ValueError("Working directory must be provided for Slurm jobs.")
+ if not job.run:
+ raise ValueError("Run script must be provided for Slurm jobs.")
+ if job.num_nodes < 1:
+ raise ValueError("Number of nodes must be at least 1.")
+ if job.resources.cloud != "slurm":
+ raise ValueError(
+ f"`Resources.cloud` must be `slurm`. "
+ f"Unsupported cloud: {job.resources.cloud}"
+ )
+ # Warn that other resource parameters are unused for Slurm.
+ if job.resources.region:
+ logger.warning("Region is unused for Slurm jobs.")
+ if job.resources.zone:
+ logger.warning("Zone is unused for Slurm jobs.")
+ if job.resources.accelerators:
+ logger.warning("Accelerators are unused for Slurm jobs.")
+ if job.resources.cpus:
+ logger.warning("CPUs are unused for Slurm jobs.")
+ if job.resources.memory:
+ logger.warning("Memory is unused for Slurm jobs.")
+ if job.resources.instance_type:
+ logger.warning("Instance type is unused for Slurm jobs.")
+ if job.resources.disk_size:
+ logger.warning("Disk size is unused for Slurm jobs.")
+ if job.resources.disk_tier:
+ logger.warning("Disk tier is unused for Slurm jobs.")
+ # Warn that storage mounts are currently unsupported.
+ if len(job.storage_mounts.items()) > 0:
+ logger.warning("Storage mounts are currently unsupported for Slurm jobs.")
+
+
+class SlurmCluster(BaseCluster):
+ """A cluster implementation backed by a Slurm scheduler."""
+
+ @dataclass
+ class ConnectionInfo:
+ """Dataclass to hold information about a connection."""
+
+ hostname: str
+ user: str
+
+ @property
+ def name(self):
+ """Gets the name of the connection in the form user@hostname."""
+ return f"{self.user}@{self.hostname}"
+
+ def __init__(self, name: str, client: SlurmClient) -> None:
+ """Initializes a new instance of the SlurmCluster class."""
+ self._client = client
+ self._connection = self.parse_cluster_name(name)
+
+ def __eq__(self, other: Any) -> bool:
+ """Checks if two SlurmClusters are equal."""
+ if not isinstance(other, SlurmCluster):
+ return False
+ return self.name() == other.name()
+
+ @staticmethod
+ def parse_cluster_name(name: str) -> ConnectionInfo:
+ """Parses the cluster name into queue and user components.
+
+ Args:
+ name: The name of the cluster.
+
+ Returns:
+ ConnectionInfo: The parsed cluster information.
+ """
+ # Expected format: <user>@<hostname>
+ connection_regex = r"^([a-zA-Z0-9\.\-\_]+)\@([a-zA-Z0-9\.\-\_]+$)"
+ match = re.match(connection_regex, name)
+ if not match:
+ raise ValueError(
+ f"Invalid cluster name: {name}. Must be in the format 'user@hostname'."
+ )
+ return SlurmCluster.ConnectionInfo(hostname=match.group(2), user=match.group(1))
+
+ @staticmethod
+ def get_slurm_connections() -> list[ConnectionInfo]:
+ """Gets Slurm connections from the OUMI_SLURM_CONNECTIONS env variable."""
+ connections_str = os.getenv(_OUMI_SLURM_CONNECTIONS, "")
+ if not connections_str:
+ return []
+ valid_connections = []
+
+ for connection in [h.strip() for h in connections_str.split(",")]:
+ try:
+ valid_connections.append(SlurmCluster.parse_cluster_name(connection))
+ except ValueError:
+ logger.warning(
+ f"Invalid Slurm connection string: {connection}. Skipping."
+ )
+ return valid_connections
+
+ def name(self) -> str:
+ """Gets the name of the cluster."""
+ return self._connection.name
+
+ def get_job(self, job_id: str) -> Optional[JobStatus]:
+ """Gets the jobs on this cluster if it exists, else returns None."""
+ for job in self.get_jobs():
+ if job.id == job_id:
+ return job
+ return None
+
+ def get_jobs(self) -> list[JobStatus]:
+ """Lists the jobs on this cluster."""
+ jobs = self._client.list_jobs()
+ for job in jobs:
+ job.cluster = self._connection.name
+ return jobs
+
+ def cancel_job(self, job_id: str) -> JobStatus:
+ """Cancels the specified job on this cluster."""
+ self._client.cancel(job_id)
+ job = self.get_job(job_id)
+ if job is None:
+ raise RuntimeError(f"Job {job_id} not found.")
+ return job
+
+ def run_job(self, job: JobConfig) -> JobStatus:
+ """Runs the specified job on this cluster.
+
+ For Slurm this method consists of 5 parts:
+
+ 1. Copy the working directory to ~/oumi_launcher/$SUBMISSION_TIME.
+ 2. Copy all file mounts.
+ 3. Create a job script with all env vars, setup, and run commands.
+ 4. Set execute permissions on the job script.
+ 5. Submit the job from the remote working directory.
+
+ Args:
+ job: The job to run.
+
+ Returns:
+ JobStatus: The job status.
+ """
+ _validate_job_config(job)
+ job_name = job.name or uuid.uuid1().hex
+ submission_time = _format_date(datetime.now())
+ remote_working_dir = Path(f"~/oumi_launcher/{submission_time}")
+ # Copy the working directory to ~/oumi_launcher/...
+ self._client.put_recursive(job.working_dir, str(remote_working_dir))
+ # Copy all file mounts.
+ for remote_path, local_path in job.file_mounts.items():
+ self._client.put_recursive(local_path, remote_path)
+ # Create the job script by merging envs, setup, and run commands.
+ job_script = _create_job_script(job)
+ script_path = remote_working_dir / "oumi_job.sh"
+ self._client.put(job_script, str(script_path))
+ # Set the proper CHMOD permissions.
+ self._client.run_commands([f"chmod +x {script_path}"])
+ # Submit the job.
+ job_id = self._client.submit_job(
+ str(script_path),
+ str(remote_working_dir),
+ job.num_nodes,
+ job_name,
+ )
+ max_retries = 3
+ wait_time = 5
+ for _ in range(max_retries):
+ job_status = self.get_job(job_id)
+ if job_status is not None:
+ return job_status
+ logger.info(f"Job {job_id} not found. Retrying in {wait_time} seconds.")
+ time.sleep(wait_time)
+ job_status = self.get_job(job_id)
+ if job_status is None:
+ raise RuntimeError(f"Job {job_id} not found after submission.")
+ return job_status
+
+ def stop(self) -> None:
+ """This is a no-op for Slurm clusters."""
+ pass
+
+ def down(self) -> None:
+ """This is a no-op for Slurm clusters."""
+ pass
diff --git a/tests/unit/launcher/clients/data/sacct_full.txt b/tests/unit/launcher/clients/data/sacct_full.txt
new file mode 100644
index 0000000000..6886849a77
--- /dev/null
+++ b/tests/unit/launcher/clients/data/sacct_full.txt
@@ -0,0 +1,38 @@
+Welcome to Ubuntu 22.04.5 LTS (GNU/Linux 5.15.0-130-generic x86_64)
+
+ * Documentation: https://help.ubuntu.com
+ * Management: https://landscape.canonical.com
+ * Support: https://ubuntu.com/pro
+
+ System information as of Fri Feb 7 10:40:36 PST 2025
+
+ System load: 0.01 Processes: 475
+ Usage of /home: 48.8% of 1.60TB Users logged in: 0
+ Memory usage: 4% IPv4 address for enp6s18:
+ Swap usage: 0%
+
+ * Strictly confined Kubernetes makes edge and IoT secure. Learn how MicroK8s
+ just raised the bar for easy, resilient and secure K8s cluster deployment.
+
+ https://ubuntu.com/engage/secure-kubernetes-at-the-edge
+
+Expanded Security Maintenance for Applications is not enabled.
+
+38 updates can be applied immediately.
+29 of these updates are standard security updates.
+To see these additional updates run: apt list --upgradable
+
+7 additional security updates can be applied with ESM Apps.
+Learn more about enabling ESM Apps service at https://ubuntu.com/esm
+
+New release '24.04.1 LTS' available.
+Run 'do-release-upgrade' to upgrade to it.
+
+
+JobID JobName User State Reason
+------------------------------ ------------------------------ ------------------------------ ------------------------------ ------------------------------
+5 test taenin COMPLETED None
+6 test taenin COMPLETED None
+7 cluster_test taenin CANCELLED by 243050679 None
+8 job-tutorial taenin COMPLETED None
+9 job-tutorial taenin COMPLETED None
diff --git a/tests/unit/launcher/clients/test_slurm_client.py b/tests/unit/launcher/clients/test_slurm_client.py
index 14d3071f3b..e726ab702f 100644
--- a/tests/unit/launcher/clients/test_slurm_client.py
+++ b/tests/unit/launcher/clients/test_slurm_client.py
@@ -10,7 +10,8 @@
_CTRL_PATH: str = "-S ~/.ssh/control-%h-%p-%r"
_SACCT_CMD = (
- "sacct --user=user --format='JobId%-30,JobName%30,User%30,State%30,Reason%30'"
+ "sacct --user=user --format='JobId%-30,JobName%30,User%30,State%30,Reason%30' "
+ "-X --starttime 2025-01-01"
)
@@ -245,6 +246,55 @@ def test_slurm_client_list_jobs_success(mock_subprocess):
assert job_ids == expected_ids
+def test_slurm_client_list_jobs_first_login_success(mock_subprocess):
+ mock_run = Mock()
+ mock_subprocess.run.return_value = mock_run
+ mock_run.stdout = _get_test_data("sacct_full.txt").encode("utf-8")
+ mock_run.stderr = b"foo"
+ mock_run.returncode = 0
+
+ client = SlurmClient("user", "host", "cluster_name")
+ job_list = client.list_jobs()
+ mock_subprocess.run.assert_called_with(
+ _run_commands_template([_SACCT_CMD]),
+ shell=True,
+ capture_output=True,
+ timeout=180,
+ )
+ job_ids = [job.id for job in job_list]
+ expected_ids = [
+ "5",
+ "6",
+ "7",
+ "8",
+ "9",
+ ]
+ assert job_ids == expected_ids
+
+
+def test_slurm_client_list_jobs_fails_missing_header(mock_subprocess):
+ mock_run = Mock()
+ mock_subprocess.run.return_value = mock_run
+ data = _get_test_data("sacct_full.txt").encode("utf-8")
+ data = b"\n".join(data.split(b"\n")[:-7])
+ mock_run.stdout = data
+ mock_run.stderr = b"foo"
+ mock_run.returncode = 0
+ client = SlurmClient("user", "host", "cluster_name")
+
+ with pytest.raises(
+ RuntimeError, match="Failed to parse job list. Unexpected format:"
+ ):
+ client = SlurmClient("user", "host", "cluster_name")
+ _ = client.list_jobs()
+ mock_subprocess.run.assert_called_with(
+ _run_commands_template([_SACCT_CMD]),
+ shell=True,
+ capture_output=True,
+ timeout=180,
+ )
+
+
def test_slurm_client_list_jobs_handles_empty_string(mock_subprocess):
mock_run = Mock()
mock_subprocess.run.return_value = mock_run
diff --git a/tests/unit/launcher/clouds/test_slurm_cloud.py b/tests/unit/launcher/clouds/test_slurm_cloud.py
new file mode 100644
index 0000000000..777d3fa870
--- /dev/null
+++ b/tests/unit/launcher/clouds/test_slurm_cloud.py
@@ -0,0 +1,275 @@
+from unittest.mock import Mock, call, patch
+
+import pytest
+
+from oumi.core.configs import JobConfig, JobResources, StorageMount
+from oumi.core.launcher import JobStatus
+from oumi.core.registry import REGISTRY, RegistryType
+from oumi.launcher.clients.slurm_client import SlurmClient
+from oumi.launcher.clouds.slurm_cloud import SlurmCloud
+from oumi.launcher.clusters.slurm_cluster import SlurmCluster
+
+
+#
+# Fixtures
+#
[email protected]
+def mock_slurm_client():
+ with patch("oumi.launcher.clouds.slurm_cloud.SlurmClient") as client:
+ yield client
+
+
[email protected]
+def mock_slurm_cluster():
+ with patch("oumi.launcher.clouds.slurm_cloud.SlurmCluster") as cluster:
+ cluster.get_slurm_connections.return_value = []
+ cluster.parse_cluster_name = SlurmCluster.parse_cluster_name
+ yield cluster
+
+
[email protected]
+def mock_get_slurm_connections():
+ with patch(
+ "oumi.launcher.clouds.slurm_cloud.SlurmCluster.get_slurm_connections"
+ ) as get_conns:
+ get_conns.return_value = []
+ yield get_conns
+
+
[email protected]
+def mock_parse_cluster_name():
+ with patch(
+ "oumi.launcher.clouds.slurm_cloud.SlurmCluster.parse_cluster_name"
+ ) as parse_name:
+ yield parse_name
+
+
+def _get_default_job(cloud: str) -> JobConfig:
+ resources = JobResources(
+ cloud=cloud,
+ region="us-central1",
+ zone=None,
+ accelerators="A100-80GB",
+ cpus="4",
+ memory="64",
+ instance_type=None,
+ use_spot=True,
+ disk_size=512,
+ disk_tier="low",
+ )
+ return JobConfig(
+ name="myjob",
+ user="user",
+ working_dir="./",
+ num_nodes=2,
+ resources=resources,
+ envs={"var1": "val1"},
+ file_mounts={},
+ storage_mounts={
+ "~/home/remote/path/gcs/": StorageMount(
+ source="gs://mybucket/", store="gcs"
+ )
+ },
+ setup="pip install -r requirements.txt",
+ run="./hello_world.sh",
+ )
+
+
+#
+# Tests
+#
+def test_slurm_cloud_up_cluster(mock_slurm_client, mock_slurm_cluster):
+ cloud = SlurmCloud()
+ mock_client = Mock(spec=SlurmClient)
+ mock_slurm_client.side_effect = [mock_client]
+ mock_cluster = Mock(spec=SlurmCluster)
+ mock_slurm_cluster.side_effect = [mock_cluster]
+ expected_job_status = JobStatus(
+ id="job_id",
+ cluster="user@somehost",
+ name="foo",
+ status="running",
+ metadata="bar",
+ done=False,
+ )
+ mock_cluster.run_job.return_value = expected_job_status
+ job = _get_default_job("slurm")
+ job_status = cloud.up_cluster(job, "user@somehost")
+ mock_slurm_client.assert_called_once_with(
+ user="user", slurm_host="somehost", cluster_name="user@somehost"
+ )
+ mock_cluster.run_job.assert_called_once_with(job)
+ assert job_status == expected_job_status
+
+
+def test_slurm_cloud_up_cluster_fails_mismatched_user(
+ mock_slurm_client, mock_slurm_cluster
+):
+ cloud = SlurmCloud()
+ with pytest.raises(
+ ValueError,
+ match=(
+ "Invalid cluster name: `user1@somehost`. "
+ "User must match the provided job user: `user`."
+ ),
+ ):
+ _ = cloud.up_cluster(_get_default_job("slurm"), "user1@somehost")
+
+
+def test_slurm_cloud_init_with_connections(
+ mock_slurm_client, mock_get_slurm_connections
+):
+ mock_client = Mock(spec=SlurmClient)
+ mock_get_slurm_connections.return_value = [
+ SlurmCluster.ConnectionInfo(user="user1", hostname="host1"),
+ SlurmCluster.ConnectionInfo(user="user2", hostname="host2"),
+ ]
+ mock_slurm_client.side_effect = [mock_client, mock_client]
+ cloud = SlurmCloud()
+ cluster_names = [cluster.name() for cluster in cloud.list_clusters()]
+ cluster_names.sort()
+ assert cluster_names == [
+ "user1@host1",
+ "user2@host2",
+ ]
+ mock_slurm_client.assert_has_calls(
+ [
+ call(user="user1", slurm_host="host1", cluster_name="user1@host1"),
+ call(user="user2", slurm_host="host2", cluster_name="user2@host2"),
+ ]
+ )
+
+
+def test_slurm_cloud_init_skips_malformed_connections(
+ mock_slurm_client, mock_get_slurm_connections
+):
+ mock_client = Mock(spec=SlurmClient)
+ mock_get_slurm_connections.return_value = [
+ SlurmCluster.ConnectionInfo(user="user1", hostname="host1"),
+ SlurmCluster.ConnectionInfo(user="user2", hostname="host2"),
+ SlurmCluster.ConnectionInfo(user="user3", hostname="host3"),
+ ]
+ mock_slurm_client.side_effect = [mock_client, mock_client, mock_client]
+ cloud = SlurmCloud()
+ cluster_names = [cluster.name() for cluster in cloud.list_clusters()]
+ cluster_names.sort()
+ assert cluster_names == [
+ "user1@host1",
+ "user2@host2",
+ "user3@host3",
+ ]
+ mock_slurm_client.assert_has_calls(
+ [
+ call(user="user1", slurm_host="host1", cluster_name="user1@host1"),
+ call(user="user2", slurm_host="host2", cluster_name="user2@host2"),
+ call(user="user3", slurm_host="host3", cluster_name="user3@host3"),
+ ]
+ )
+
+
+def test_slurm_cloud_initialize_cluster(mock_slurm_client, mock_get_slurm_connections):
+ cloud = SlurmCloud()
+ mock_get_slurm_connections.return_value = [
+ SlurmCluster.ConnectionInfo(user="user1", hostname="host1"),
+ SlurmCluster.ConnectionInfo(user="user2", hostname="host2"),
+ SlurmCluster.ConnectionInfo(user="user3", hostname="host3"),
+ ]
+ mock_client = Mock(spec=SlurmClient)
+ mock_slurm_client.side_effect = [mock_client, mock_client, mock_client]
+ clusters = cloud.initialize_clusters()
+ clusters2 = cloud.initialize_clusters()
+ mock_slurm_client.assert_has_calls(
+ [
+ call(user="user1", slurm_host="host1", cluster_name="user1@host1"),
+ call(user="user2", slurm_host="host2", cluster_name="user2@host2"),
+ call(user="user3", slurm_host="host3", cluster_name="user3@host3"),
+ ]
+ )
+ cluster_names = [cluster.name() for cluster in clusters]
+ cluster_names.sort()
+ assert cluster_names == [
+ "user1@host1",
+ "user2@host2",
+ "user3@host3",
+ ]
+ # Verify that the second initialization returns the same clusters.
+ assert clusters == clusters2
+
+
+def test_slurm_cloud_list_clusters(mock_slurm_client, mock_get_slurm_connections):
+ cloud = SlurmCloud()
+ mock_get_slurm_connections.return_value = [
+ SlurmCluster.ConnectionInfo(user="user1", hostname="host1"),
+ SlurmCluster.ConnectionInfo(user="user2", hostname="host2"),
+ SlurmCluster.ConnectionInfo(user="user3", hostname="host3"),
+ ]
+ mock_client = Mock(spec=SlurmClient)
+ mock_slurm_client.side_effect = [mock_client, mock_client, mock_client]
+ assert [] == cloud.list_clusters()
+ clusters = cloud.initialize_clusters()
+ mock_slurm_client.assert_has_calls(
+ [
+ call(user="user1", slurm_host="host1", cluster_name="user1@host1"),
+ call(user="user2", slurm_host="host2", cluster_name="user2@host2"),
+ call(user="user3", slurm_host="host3", cluster_name="user3@host3"),
+ ]
+ )
+ clusters = cloud.list_clusters()
+ expected_clusters = [
+ "user1@host1",
+ "user2@host2",
+ "user3@host3",
+ ]
+ cluster_names = [cluster.name() for cluster in clusters]
+ cluster_names.sort()
+ assert cluster_names == expected_clusters
+
+
+def test_slurm_cloud_get_cluster_empty(mock_slurm_client):
+ cloud = SlurmCloud()
+ # Check that there are no initial clusters.
+ assert cloud.get_cluster("debug.user") is None
+
+
+def test_slurm_cloud_get_cluster_success(mock_slurm_client, mock_get_slurm_connections):
+ mock_client = Mock(spec=SlurmClient)
+ mock_get_slurm_connections.side_effect = [
+ [],
+ [
+ SlurmCluster.ConnectionInfo(user="user1", hostname="host1"),
+ SlurmCluster.ConnectionInfo(user="user2", hostname="host2"),
+ SlurmCluster.ConnectionInfo(user="user3", hostname="host3"),
+ ],
+ ]
+ mock_slurm_client.side_effect = [mock_client, mock_client, mock_client]
+ cloud = SlurmCloud()
+ assert [] == cloud.list_clusters()
+ _ = cloud.initialize_clusters()
+ mock_slurm_client.assert_has_calls(
+ [
+ call(user="user1", slurm_host="host1", cluster_name="user1@host1"),
+ call(user="user2", slurm_host="host2", cluster_name="user2@host2"),
+ call(user="user3", slurm_host="host3", cluster_name="user3@host3"),
+ ]
+ )
+ expected_clusters = [
+ "user1@host1",
+ "user2@host2",
+ "user3@host3",
+ ]
+ for name in expected_clusters:
+ cluster = cloud.get_cluster(name)
+ assert cluster is not None
+ assert cluster.name() == name
+
+
+def test_slurm_cloud_get_cluster_fails(mock_slurm_client):
+ cloud = SlurmCloud()
+ mock_client = Mock(spec=SlurmClient)
+ mock_slurm_client.side_effect = [mock_client, mock_client]
+ cloud.initialize_clusters()
+ assert cloud.get_cluster("nonexistent") is None
+
+
+def test_slurm_cloud_builder_registered():
+ assert REGISTRY.contains("slurm", RegistryType.CLOUD)
diff --git a/tests/unit/launcher/clusters/test_slurm_cluster.py b/tests/unit/launcher/clusters/test_slurm_cluster.py
new file mode 100644
index 0000000000..a4bf8afb2e
--- /dev/null
+++ b/tests/unit/launcher/clusters/test_slurm_cluster.py
@@ -0,0 +1,788 @@
+import re
+from datetime import datetime
+from unittest.mock import Mock, call, patch
+
+import pytest
+
+from oumi.core.configs import JobConfig, JobResources, StorageMount
+from oumi.core.launcher import JobStatus
+from oumi.launcher.clients.slurm_client import SlurmClient
+from oumi.launcher.clusters.slurm_cluster import SlurmCluster
+
+
+#
+# Fixtures
+#
[email protected]
+def mock_slurm_client():
+ yield Mock(spec=SlurmClient)
+
+
[email protected]
+def mock_time():
+ with patch("oumi.launcher.clusters.slurm_cluster.time") as mock_t:
+ yield mock_t
+
+
[email protected]
+def mock_datetime():
+ with patch("oumi.launcher.clusters.slurm_cluster.datetime") as mock_dt:
+ mock_dt.now.return_value = datetime(2024, 10, 9, 13, 4, 24, 513094)
+ yield mock_dt
+
+
[email protected]
+def mock_os():
+ with patch("oumi.launcher.clusters.slurm_cluster.os") as os_mock:
+ os_mock.getenv.return_value = ""
+ yield os_mock
+
+
+def _get_default_job(cloud: str) -> JobConfig:
+ resources = JobResources(
+ cloud=cloud,
+ region="us-central1",
+ zone=None,
+ accelerators="A100-80GB",
+ cpus="4",
+ memory="64",
+ instance_type=None,
+ use_spot=True,
+ disk_size=512,
+ disk_tier="low",
+ )
+ return JobConfig(
+ name="myjob",
+ user="user",
+ working_dir="./",
+ num_nodes=2,
+ resources=resources,
+ envs={"var1": "val1"},
+ file_mounts={
+ "~/home/remote/path.bar": "~/local/path.bar",
+ "~/home/remote/path2.txt": "~/local/path2.txt",
+ },
+ storage_mounts={
+ "~/home/remote/path/gcs/": StorageMount(
+ source="gs://mybucket/", store="gcs"
+ )
+ },
+ setup=(
+ "#SBATCH --gpus-per-task=8 \n#SBATCH --cpus-per-task=4\n"
+ "pip install -r requirements.txt"
+ ),
+ run="./hello_world.sh",
+ )
+
+
+#
+# Tests
+#
+
+
+def test_slurm_cluster_parse_cluster_name():
+ assert SlurmCluster.parse_cluster_name("user@host") == SlurmCluster.ConnectionInfo(
+ user="user", hostname="host"
+ )
+ assert SlurmCluster.parse_cluster_name(
+ "[email protected]"
+ ) == SlurmCluster.ConnectionInfo(user="user.-dotdash", hostname="192.168.0.1")
+
+
[email protected](
+ "invalid_name",
+ [
+ "multiple@at@signs",
+ "white space@hostname",
+ "extra$!characters@hostname",
+ "@nouser",
+ "nohost@",
+ "",
+ ],
+)
+def test_slurm_cluster_parse_cluster_name_invalid(invalid_name):
+ with pytest.raises(
+ ValueError,
+ match=re.escape(
+ f"Invalid cluster name: {invalid_name}. "
+ "Must be in the format 'user@hostname'."
+ ),
+ ):
+ SlurmCluster.parse_cluster_name(invalid_name)
+
+
+def test_slurm_cluster_get_slurm_connections(mock_os):
+ mock_os.getenv.return_value = "user@host1,user@host2"
+ connections = SlurmCluster.get_slurm_connections()
+ assert connections == [
+ SlurmCluster.ConnectionInfo(user="user", hostname="host1"),
+ SlurmCluster.ConnectionInfo(user="user", hostname="host2"),
+ ]
+
+
+def test_slurm_cluster_get_slurm_connections_whitespace(mock_os):
+ mock_os.getenv.return_value = "user@host1 , user2@host2"
+ connections = SlurmCluster.get_slurm_connections()
+ assert connections == [
+ SlurmCluster.ConnectionInfo(user="user", hostname="host1"),
+ SlurmCluster.ConnectionInfo(user="user2", hostname="host2"),
+ ]
+
+
+def test_slurm_cluster_get_slurm_connections_empty(mock_os):
+ mock_os.getenv.return_value = ""
+ connections = SlurmCluster.get_slurm_connections()
+ assert connections == []
+
+
+def test_slurm_cluster_get_slurm_connections_skips_malformed(mock_os):
+ mock_os.getenv.return_value = (
+ "user1@host1,foob@@@ar, user2@host2 , \", ', user3@host3"
+ )
+ connections = SlurmCluster.get_slurm_connections()
+ assert connections == [
+ SlurmCluster.ConnectionInfo(user="user1", hostname="host1"),
+ SlurmCluster.ConnectionInfo(user="user2", hostname="host2"),
+ SlurmCluster.ConnectionInfo(user="user3", hostname="host3"),
+ ]
+
+
+def test_slurm_cluster_name(mock_datetime, mock_slurm_client):
+ cluster = SlurmCluster("demand@einstein", mock_slurm_client)
+ assert cluster.name() == "demand@einstein"
+
+ cluster = SlurmCluster("[email protected]", mock_slurm_client)
+ assert cluster.name() == "[email protected]"
+
+ cluster = SlurmCluster("debug-scaling@a", mock_slurm_client)
+ assert cluster.name() == "debug-scaling@a"
+
+ cluster = SlurmCluster("[email protected]", mock_slurm_client)
+ assert cluster.name() == "[email protected]"
+
+
+def test_slurm_cluster_get_job_valid_id(mock_datetime, mock_slurm_client):
+ cluster = SlurmCluster("debug@host", mock_slurm_client)
+ mock_slurm_client.list_jobs.return_value = [
+ JobStatus(
+ id="myjob",
+ name="some name",
+ status="running",
+ metadata="",
+ cluster="mycluster",
+ done=False,
+ ),
+ JobStatus(
+ id="job2",
+ name="some",
+ status="running",
+ metadata="",
+ cluster="mycluster",
+ done=False,
+ ),
+ JobStatus(
+ id="final job",
+ name="name3",
+ status="running",
+ metadata="",
+ cluster="mycluster",
+ done=False,
+ ),
+ ]
+ job = cluster.get_job("myjob")
+ mock_slurm_client.list_jobs.assert_called_once_with()
+ assert job is not None
+ assert job.id == "myjob"
+ assert job.cluster == "debug@host"
+
+
+def test_slurm_cluster_get_job_invalid_id_empty(mock_datetime, mock_slurm_client):
+ cluster = SlurmCluster("debug@host", mock_slurm_client)
+ mock_slurm_client.list_jobs.return_value = []
+ job = cluster.get_job("myjob")
+ mock_slurm_client.list_jobs.assert_called_once_with()
+ assert job is None
+
+
+def test_slurm_cluster_get_job_invalid_id_nonempty(mock_datetime, mock_slurm_client):
+ cluster = SlurmCluster("debug@host", mock_slurm_client)
+ mock_slurm_client.list_jobs.return_value = [
+ JobStatus(
+ id="myjob",
+ name="some name",
+ status="running",
+ metadata="",
+ cluster="mycluster",
+ done=False,
+ ),
+ JobStatus(
+ id="job2",
+ name="some",
+ status="running",
+ metadata="",
+ cluster="mycluster",
+ done=False,
+ ),
+ JobStatus(
+ id="final job",
+ name="name3",
+ status="running",
+ metadata="",
+ cluster="mycluster",
+ done=False,
+ ),
+ ]
+ job = cluster.get_job("wrong job")
+ mock_slurm_client.list_jobs.assert_called_once_with()
+ assert job is None
+
+
+def test_slurm_cluster_get_jobs_nonempty(mock_datetime, mock_slurm_client):
+ cluster = SlurmCluster("debug@host", mock_slurm_client)
+ mock_slurm_client.list_jobs.return_value = [
+ JobStatus(
+ id="myjob",
+ name="some name",
+ status="running",
+ metadata="",
+ cluster="mycluster",
+ done=False,
+ ),
+ JobStatus(
+ id="job2",
+ name="some",
+ status="running",
+ metadata="",
+ cluster="mycluster",
+ done=False,
+ ),
+ JobStatus(
+ id="final job",
+ name="name3",
+ status="running",
+ metadata="",
+ cluster="mycluster",
+ done=False,
+ ),
+ ]
+ jobs = cluster.get_jobs()
+ mock_slurm_client.list_jobs.assert_called_once_with()
+ expected_jobs = [
+ JobStatus(
+ id="myjob",
+ name="some name",
+ status="running",
+ metadata="",
+ cluster="debug@host",
+ done=False,
+ ),
+ JobStatus(
+ id="job2",
+ name="some",
+ status="running",
+ metadata="",
+ cluster="debug@host",
+ done=False,
+ ),
+ JobStatus(
+ id="final job",
+ name="name3",
+ status="running",
+ metadata="",
+ cluster="debug@host",
+ done=False,
+ ),
+ ]
+ assert jobs == expected_jobs
+
+
+def test_slurm_cluster_get_jobs_empty(mock_datetime, mock_slurm_client):
+ cluster = SlurmCluster("debug@host", mock_slurm_client)
+ mock_slurm_client.list_jobs.return_value = []
+ jobs = cluster.get_jobs()
+ mock_slurm_client.list_jobs.assert_called_once_with()
+ expected_jobs = []
+ assert jobs == expected_jobs
+
+
+def test_slurm_cluster_cancel_job(mock_datetime, mock_slurm_client):
+ cluster = SlurmCluster("prod@host", mock_slurm_client)
+ mock_slurm_client.list_jobs.return_value = [
+ JobStatus(
+ id="myjob",
+ name="some name",
+ status="running",
+ metadata="",
+ cluster="debug@host",
+ done=False,
+ ),
+ JobStatus(
+ id="job2",
+ name="some",
+ status="running",
+ metadata="",
+ cluster="debug@host",
+ done=False,
+ ),
+ JobStatus(
+ id="final job",
+ name="name3",
+ status="running",
+ metadata="",
+ cluster="debug@host",
+ done=False,
+ ),
+ ]
+ job_status = cluster.cancel_job("job2")
+ expected_status = JobStatus(
+ id="job2",
+ name="some",
+ status="running",
+ metadata="",
+ cluster="prod@host",
+ done=False,
+ )
+ mock_slurm_client.cancel.assert_called_once_with(
+ "job2",
+ )
+ assert job_status == expected_status
+
+
+def test_slurm_cluster_cancel_job_fails(mock_datetime, mock_slurm_client):
+ cluster = SlurmCluster("prod@host", mock_slurm_client)
+ mock_slurm_client.list_jobs.return_value = [
+ JobStatus(
+ id="job2",
+ name="some",
+ status="running",
+ metadata="",
+ cluster="debug@host",
+ done=False,
+ ),
+ ]
+ with pytest.raises(RuntimeError):
+ _ = cluster.cancel_job("myjobid")
+
+
+def test_slurm_cluster_run_job(mock_datetime, mock_slurm_client):
+ cluster = SlurmCluster("debug@host", mock_slurm_client)
+ mock_successful_cmd = Mock()
+ mock_successful_cmd.exit_code = 0
+ mock_slurm_client.run_commands.return_value = mock_successful_cmd
+ mock_slurm_client.submit_job.return_value = "1234"
+ mock_slurm_client.list_jobs.return_value = [
+ JobStatus(
+ id="1234",
+ name="some name",
+ status="RUNNING",
+ metadata="",
+ cluster="mycluster",
+ done=False,
+ )
+ ]
+ expected_status = JobStatus(
+ id="1234",
+ name="some name",
+ status="RUNNING",
+ metadata="",
+ cluster="debug@host",
+ done=False,
+ )
+ job_status = cluster.run_job(_get_default_job("slurm"))
+ mock_slurm_client.put_recursive.assert_has_calls(
+ [
+ call(
+ "./",
+ "~/oumi_launcher/20241009_130424513094",
+ ),
+ call(
+ "~/local/path.bar",
+ "~/home/remote/path.bar",
+ ),
+ call(
+ "~/local/path2.txt",
+ "~/home/remote/path2.txt",
+ ),
+ ],
+ )
+ mock_slurm_client.run_commands.assert_has_calls(
+ [
+ call(["chmod +x ~/oumi_launcher/20241009_130424513094/oumi_job.sh"]),
+ ]
+ )
+ job_script = (
+ "#!/bin/bash\n#SBATCH --gpus-per-task=8 \n#SBATCH --cpus-per-task=4\n\n"
+ "export var1=val1\n\n"
+ "pip install -r requirements.txt\n./hello_world.sh\n"
+ )
+ mock_slurm_client.put.assert_called_once_with(
+ job_script, "~/oumi_launcher/20241009_130424513094/oumi_job.sh"
+ )
+ mock_slurm_client.submit_job.assert_called_once_with(
+ "~/oumi_launcher/20241009_130424513094/oumi_job.sh",
+ "~/oumi_launcher/20241009_130424513094",
+ 2,
+ "myjob",
+ )
+ mock_slurm_client.list_jobs.assert_called_once_with()
+ assert job_status == expected_status
+
+
+def test_slurm_cluster_run_job_with_polling_succeeds(
+ mock_time, mock_datetime, mock_slurm_client
+):
+ mock_time.sleep.side_effect = [None, None, None, None, None]
+ mock_successful_cmd = Mock()
+ mock_successful_cmd.exit_code = 0
+ mock_failed_cmd = Mock()
+ mock_failed_cmd.exit_code = 1
+ mock_slurm_client.run_commands.side_effect = [
+ mock_failed_cmd,
+ mock_successful_cmd,
+ mock_successful_cmd,
+ mock_successful_cmd,
+ ]
+ cluster = SlurmCluster("debug@host", mock_slurm_client)
+ mock_slurm_client.submit_job.return_value = "1234"
+ mock_slurm_client.list_jobs.side_effect = [
+ [],
+ [
+ JobStatus(
+ id="1",
+ name="some name",
+ status="RUNNING",
+ metadata="",
+ cluster="mycluster",
+ done=False,
+ )
+ ],
+ [
+ JobStatus(
+ id="1234",
+ name="some name",
+ status="RUNNING",
+ metadata="",
+ cluster="mycluster",
+ done=False,
+ )
+ ],
+ ]
+ expected_status = JobStatus(
+ id="1234",
+ name="some name",
+ status="RUNNING",
+ metadata="",
+ cluster="debug@host",
+ done=False,
+ )
+ job_status = cluster.run_job(_get_default_job("slurm"))
+ mock_slurm_client.put_recursive.assert_has_calls(
+ [
+ call(
+ "./",
+ "~/oumi_launcher/20241009_130424513094",
+ ),
+ call(
+ "~/local/path.bar",
+ "~/home/remote/path.bar",
+ ),
+ call(
+ "~/local/path2.txt",
+ "~/home/remote/path2.txt",
+ ),
+ ],
+ )
+ mock_slurm_client.run_commands.assert_has_calls(
+ [
+ call(["chmod +x ~/oumi_launcher/20241009_130424513094/oumi_job.sh"]),
+ ]
+ )
+ job_script = (
+ "#!/bin/bash\n#SBATCH --gpus-per-task=8 \n#SBATCH --cpus-per-task=4\n\n"
+ "export var1=val1\n\n"
+ "pip install -r requirements.txt\n./hello_world.sh\n"
+ )
+ mock_slurm_client.put.assert_called_once_with(
+ job_script, "~/oumi_launcher/20241009_130424513094/oumi_job.sh"
+ )
+ mock_slurm_client.submit_job.assert_called_once_with(
+ "~/oumi_launcher/20241009_130424513094/oumi_job.sh",
+ "~/oumi_launcher/20241009_130424513094",
+ 2,
+ "myjob",
+ )
+ mock_slurm_client.list_jobs.assert_has_calls([call(), call(), call()])
+ mock_time.sleep.assert_has_calls([call(5), call(5)])
+ assert job_status == expected_status
+
+
+def test_slurm_cluster_run_job_no_name(mock_datetime, mock_slurm_client):
+ mock_successful_cmd = Mock()
+ mock_successful_cmd.exit_code = 0
+ mock_slurm_client.run_commands.return_value = mock_successful_cmd
+ cluster = SlurmCluster("debug@host", mock_slurm_client)
+ mock_slurm_client.submit_job.return_value = "1234"
+ mock_slurm_client.list_jobs.return_value = [
+ JobStatus(
+ id="1234",
+ name="some name",
+ status="RUNNING",
+ metadata="",
+ cluster="mycluster",
+ done=False,
+ )
+ ]
+ expected_status = JobStatus(
+ id="1234",
+ name="some name",
+ status="RUNNING",
+ metadata="",
+ cluster="debug@host",
+ done=False,
+ )
+ job = _get_default_job("slurm")
+ job.name = None
+ with patch("oumi.launcher.clusters.slurm_cluster.uuid") as mock_uuid:
+ mock_hex = Mock()
+ mock_hex.hex = "1-2-3"
+ mock_uuid.uuid1.return_value = mock_hex
+ job_status = cluster.run_job(job)
+ mock_slurm_client.put_recursive.assert_has_calls(
+ [
+ call(
+ "./",
+ "~/oumi_launcher/20241009_130424513094",
+ ),
+ call(
+ "~/local/path.bar",
+ "~/home/remote/path.bar",
+ ),
+ call(
+ "~/local/path2.txt",
+ "~/home/remote/path2.txt",
+ ),
+ ],
+ )
+ mock_slurm_client.run_commands.assert_has_calls(
+ [
+ call(["chmod +x ~/oumi_launcher/20241009_130424513094/oumi_job.sh"]),
+ ]
+ )
+ job_script = (
+ "#!/bin/bash\n#SBATCH --gpus-per-task=8 \n#SBATCH --cpus-per-task=4\n\n"
+ "export var1=val1\n\n"
+ "pip install -r requirements.txt\n./hello_world.sh\n"
+ )
+ mock_slurm_client.put.assert_called_once_with(
+ job_script, "~/oumi_launcher/20241009_130424513094/oumi_job.sh"
+ )
+ mock_slurm_client.submit_job.assert_called_once_with(
+ "~/oumi_launcher/20241009_130424513094/oumi_job.sh",
+ "~/oumi_launcher/20241009_130424513094",
+ 2,
+ "1-2-3",
+ )
+ mock_slurm_client.list_jobs.assert_called_once_with()
+ assert job_status == expected_status
+
+
+def test_slurm_cluster_run_job_no_mounts(mock_datetime, mock_slurm_client):
+ mock_successful_cmd = Mock()
+ mock_successful_cmd.exit_code = 0
+ mock_slurm_client.run_commands.return_value = mock_successful_cmd
+ cluster = SlurmCluster("debug@host", mock_slurm_client)
+ mock_slurm_client.submit_job.return_value = "1234"
+ mock_slurm_client.list_jobs.return_value = [
+ JobStatus(
+ id="1234",
+ name="some name",
+ status="RUNNING",
+ metadata="",
+ cluster="mycluster",
+ done=False,
+ )
+ ]
+ expected_status = JobStatus(
+ id="1234",
+ name="some name",
+ status="RUNNING",
+ metadata="",
+ cluster="debug@host",
+ done=False,
+ )
+ job = _get_default_job("slurm")
+ job.file_mounts = {}
+ job_status = cluster.run_job(job)
+ mock_slurm_client.put_recursive.assert_has_calls(
+ [
+ call(
+ "./",
+ "~/oumi_launcher/20241009_130424513094",
+ ),
+ ],
+ )
+ mock_slurm_client.run_commands.assert_has_calls(
+ [
+ call(["chmod +x ~/oumi_launcher/20241009_130424513094/oumi_job.sh"]),
+ ]
+ )
+ job_script = (
+ "#!/bin/bash\n#SBATCH --gpus-per-task=8 \n#SBATCH --cpus-per-task=4\n\n"
+ "export var1=val1\n\n"
+ "pip install -r requirements.txt\n./hello_world.sh\n"
+ )
+ mock_slurm_client.put.assert_called_once_with(
+ job_script, "~/oumi_launcher/20241009_130424513094/oumi_job.sh"
+ )
+ mock_slurm_client.submit_job.assert_called_once_with(
+ "~/oumi_launcher/20241009_130424513094/oumi_job.sh",
+ "~/oumi_launcher/20241009_130424513094",
+ 2,
+ "myjob",
+ )
+ mock_slurm_client.list_jobs.assert_called_once_with()
+ assert job_status == expected_status
+
+
+def test_slurm_cluster_run_job_no_pbs(mock_datetime, mock_slurm_client):
+ mock_successful_cmd = Mock()
+ mock_successful_cmd.exit_code = 0
+ mock_slurm_client.run_commands.return_value = mock_successful_cmd
+ cluster = SlurmCluster("debug@host", mock_slurm_client)
+ mock_slurm_client.submit_job.return_value = "1234"
+ mock_slurm_client.list_jobs.return_value = [
+ JobStatus(
+ id="1234",
+ name="some name",
+ status="RUNNING",
+ metadata="",
+ cluster="mycluster",
+ done=False,
+ )
+ ]
+ expected_status = JobStatus(
+ id="1234",
+ name="some name",
+ status="RUNNING",
+ metadata="",
+ cluster="debug@host",
+ done=False,
+ )
+ job = _get_default_job("slurm")
+ job.file_mounts = {}
+ job.setup = "small setup"
+ job.run = "./hello_world.sh"
+ job_status = cluster.run_job(job)
+ mock_slurm_client.put_recursive.assert_has_calls(
+ [
+ call(
+ "./",
+ "~/oumi_launcher/20241009_130424513094",
+ ),
+ ],
+ )
+ mock_slurm_client.run_commands.assert_has_calls(
+ [
+ call(["chmod +x ~/oumi_launcher/20241009_130424513094/oumi_job.sh"]),
+ ]
+ )
+ job_script = (
+ "#!/bin/bash\n\n" "export var1=val1\n\n" "small setup\n./hello_world.sh\n"
+ )
+ mock_slurm_client.put.assert_called_once_with(
+ job_script, "~/oumi_launcher/20241009_130424513094/oumi_job.sh"
+ )
+ mock_slurm_client.submit_job.assert_called_once_with(
+ "~/oumi_launcher/20241009_130424513094/oumi_job.sh",
+ "~/oumi_launcher/20241009_130424513094",
+ 2,
+ "myjob",
+ )
+ mock_slurm_client.list_jobs.assert_called_once_with()
+ assert job_status == expected_status
+
+
+def test_slurm_cluster_run_job_no_setup(mock_datetime, mock_slurm_client):
+ mock_successful_cmd = Mock()
+ mock_successful_cmd.exit_code = 0
+ mock_slurm_client.run_commands.return_value = mock_successful_cmd
+ cluster = SlurmCluster("debug@host", mock_slurm_client)
+ mock_slurm_client.submit_job.return_value = "1234"
+ mock_slurm_client.list_jobs.return_value = [
+ JobStatus(
+ id="1234",
+ name="some name",
+ status="RUNNING",
+ metadata="",
+ cluster="mycluster",
+ done=False,
+ )
+ ]
+ expected_status = JobStatus(
+ id="1234",
+ name="some name",
+ status="RUNNING",
+ metadata="",
+ cluster="debug@host",
+ done=False,
+ )
+ job = _get_default_job("slurm")
+ job.file_mounts = {}
+ job.setup = None
+ job.run = "./hello_world.sh"
+ job_status = cluster.run_job(job)
+ mock_slurm_client.put_recursive.assert_has_calls(
+ [
+ call(
+ "./",
+ "~/oumi_launcher/20241009_130424513094",
+ ),
+ ],
+ )
+ mock_slurm_client.run_commands.assert_has_calls(
+ [
+ call(["chmod +x ~/oumi_launcher/20241009_130424513094/oumi_job.sh"]),
+ ]
+ )
+ job_script = "#!/bin/bash\n\n" "export var1=val1\n\n" "./hello_world.sh\n"
+ mock_slurm_client.put.assert_called_once_with(
+ job_script, "~/oumi_launcher/20241009_130424513094/oumi_job.sh"
+ )
+ mock_slurm_client.submit_job.assert_called_once_with(
+ "~/oumi_launcher/20241009_130424513094/oumi_job.sh",
+ "~/oumi_launcher/20241009_130424513094",
+ 2,
+ "myjob",
+ )
+ mock_slurm_client.list_jobs.assert_called_once_with()
+ assert job_status == expected_status
+
+
+def test_slurm_cluster_run_job_fails(mock_time, mock_datetime, mock_slurm_client):
+ cluster = SlurmCluster("debug@host", mock_slurm_client)
+ mock_slurm_client.submit_job.return_value = "234"
+ mock_slurm_client.list_jobs.return_value = [
+ JobStatus(
+ id="1234",
+ name="some name",
+ status="RUNNING",
+ metadata="",
+ cluster="mycluster",
+ done=False,
+ )
+ ]
+ with pytest.raises(RuntimeError):
+ _ = cluster.run_job(_get_default_job("slurm"))
+ mock_time.sleep.assert_has_calls([call(5), call(5), call(5)])
+
+
+def test_slurm_cluster_down(mock_datetime, mock_slurm_client):
+ cluster = SlurmCluster("debug-scaling@host", mock_slurm_client)
+ cluster.down()
+ # Nothing to assert, this method is a no-op.
+
+
+def test_slurm_cluster_stop(mock_datetime, mock_slurm_client):
+ cluster = SlurmCluster("debug-scaling@host", mock_slurm_client)
+ cluster.stop()
+ # Nothing to assert, this method is a no-op.
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
}
|
|
oumi-ai__oumi-1422@3340d02
|
oumi-ai/oumi
|
Python
| 1,422
|
Updated oumi infer to support CLI argument for system prompt
|
# Description
This change adds an optional ```system_prompt``` argument to the CLI for interactive mode. Two tests have been added that exercise this change with and without an image as input.
The change specifically adds a new argument to the ```infer``` and ```infer_interactive``` functions. It also adds a new ```Message``` object (the system prompt) at the start of the ```Conversation``` object, for inputs both with and without an image.
Further, since this adds new code, tests have been added to verify the behavior. Two tests cover the cases with and without an image input, specifically to decouple testing of the system prompt feature from the image input for easier future triage.
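As a rough illustration of the message handling (a minimal sketch; the `Conversation`/`Message`/`Role` types are the ones used in the diff below, and the import path shown here is an assumption, not taken from this PR):

```python
# Sketch only: prepend an optional system prompt to each conversation.
# The import path below is assumed; the real change lives in src/oumi/infer.py.
from typing import Optional

from oumi.core.types.conversation import Conversation, Message, Role


def build_conversation(user_text: str, system_prompt: Optional[str] = None) -> Conversation:
    messages = []
    if system_prompt:
        # The system message goes first so the model treats it as context.
        messages.append(Message(role=Role.SYSTEM, content=system_prompt))
    messages.append(Message(role=Role.USER, content=user_text))
    return Conversation(messages=messages)
```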
## Related issues
Fixes #1386
## Before submitting
- [ ] This PR only changes documentation. (You can ignore the following checks in that case)
- [x] Did you read the [contributor guideline](https://github.com/oumi-ai/oumi/blob/main/CONTRIBUTING.md) Pull Request guidelines?
- [x] Did you link the issue(s) related to this PR in the section above?
- [x] Did you add / update tests where needed?
## Reviewers
At least one review from a member of `oumi-ai/oumi-staff` is required.
|
2025-02-11T17:46:19Z
|
[Feature] Update `oumi infer` to support CLI argument for system prompt
### Feature request
Add a new command-line argument to the `oumi infer` command
### Motivation / references
Users may want to adjust the system prompt for task-specific instructions
### Your contribution
I can help with code review, and can answer questions
Towards OPE-1019
|
@xrdaukar hello! I would like to work on this. I do understand where and how I will add the code for accepting a system-prompt, but then where am I supposed to use this input? I suppose it shall be a part of the ```InferenceConfig```?
Thanks.
Thank you @Spaarsh ! I think, at minimum, we can add a new `system_prompt` parameter to `infer()` CLI function https://github.com/oumi-ai/oumi/blob/4ad8780ab2f795171e7f5e99b9c0048a384f62d0/src/oumi/cli/infer.py#L37. (similarly to `image` param, which provides visual context for multimodal models), then modify this function https://github.com/oumi-ai/oumi/blob/4ad8780ab2f795171e7f5e99b9c0048a384f62d0/src/oumi/infer.py#L101 to insert `Role.SYSTEM` message to the conversation (as the first element). It's OK to only support it for `interactive` mode initially.
We could also consider adding system prompt param to `InferenceConfig` or `GenerationParams` but it'd be more involved and require more changes, so my suggestion is to do the minimal change as described above as the first step. @taenin fyi
+1 to @xrdaukar 's first suggestion. I think adding support just for interactive mode is a good first step. If we find folks need this for non-interactive inference we can circle back and address it.
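A minimal sketch of the CLI-side wiring suggested above (typer usage mirrors the existing `--image` option; the command body here is illustrative, not the actual implementation):

```python
# Sketch only: expose --system-prompt on a typer command and forward it.
from typing import Annotated, Optional

import typer

app = typer.Typer()


@app.command()
def infer(
    system_prompt: Annotated[
        Optional[str],
        typer.Option(
            "--system-prompt",
            help="System prompt for task-specific instructions (interactive mode only).",
        ),
    ] = None,
):
    # In the real command this value would be passed to infer_interactive(...).
    typer.echo(f"system_prompt={system_prompt!r}")


if __name__ == "__main__":
    app()
```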
|
[
{
"body": "### Feature request\n\nAdd a new command line argument for `oumi infer` command\n\n\n\n### Motivation / references\n\nUsers may want to adjust system prompt for task-specific instructions\n\n### Your contribution\n\nI can help with code review, and can answer questions\n\n\nTowards OPE-1019",
"number": 1386,
"title": "[Feature] Update `oumi infer` to support CLI argument for system prompt"
}
] |
20faca9b0f8a07bbf06c9f1c5876380086583f2a
|
{
"head_commit": "3340d02597ca7be20001d125cf0eccdfe7edfd05",
"head_commit_message": "Merge branch 'main' into 1386/enhancement/add-system-prompt-argument-in-oumi-infer",
"patch_to_review": "diff --git a/src/oumi/__init__.py b/src/oumi/__init__.py\nindex cf70b6580..e9eef515d 100644\n--- a/src/oumi/__init__.py\n+++ b/src/oumi/__init__.py\n@@ -124,12 +124,17 @@ def evaluate(config: EvaluationConfig) -> list[dict[str, Any]]:\n \n \n def infer_interactive(\n- config: InferenceConfig, *, input_image_bytes: bytes | None = None\n+ config: InferenceConfig,\n+ *,\n+ input_image_bytes: bytes | None = None,\n+ system_prompt: str | None = None,\n ) -> None:\n \"\"\"Interactively provide the model response for a user-provided input.\"\"\"\n import oumi.infer\n \n- return oumi.infer.infer_interactive(config, input_image_bytes=input_image_bytes)\n+ return oumi.infer.infer_interactive(\n+ config, input_image_bytes=input_image_bytes, system_prompt=system_prompt\n+ )\n \n \n def infer(\ndiff --git a/src/oumi/cli/infer.py b/src/oumi/cli/infer.py\nindex ac5327f21..b640ecead 100644\n--- a/src/oumi/cli/infer.py\n+++ b/src/oumi/cli/infer.py\n@@ -44,6 +44,16 @@ def infer(\n ),\n ),\n ] = None,\n+ system_prompt: Annotated[\n+ Optional[str],\n+ typer.Option(\n+ \"--system-prompt\",\n+ help=(\n+ \"System prompt for task-specific instructions. \"\n+ \"Only used in interactive mode.\"\n+ ),\n+ ),\n+ ] = None,\n level: cli_utils.LOG_LEVEL_TYPE = None,\n ):\n \"\"\"Run inference on a model.\n@@ -57,6 +67,7 @@ def infer(\n config: Path to the configuration file for inference.\n interactive: Whether to run in an interactive session.\n image: Path to the input image for `image+text` VLLMs.\n+ system_prompt: System prompt for task-specific instructions.\n level: The logging level for the specified command.\n \"\"\"\n extra_args = cli_utils.parse_extra_cli_args(ctx)\n@@ -93,9 +104,10 @@ def infer(\n \"`input_path`.\"\n )\n return oumi_infer_interactive(\n- parsed_config, input_image_bytes=input_image_png_bytes\n+ parsed_config,\n+ input_image_bytes=input_image_png_bytes,\n+ system_prompt=system_prompt,\n )\n-\n if parsed_config.input_path is None:\n raise ValueError(\"One of `--interactive` or `input_path` must be provided.\")\n generations = oumi_infer(parsed_config)\ndiff --git a/src/oumi/infer.py b/src/oumi/infer.py\nindex 9c1c8c3f4..ef25ec0ed 100644\n--- a/src/oumi/infer.py\n+++ b/src/oumi/infer.py\n@@ -41,7 +41,10 @@ def _get_engine(config: InferenceConfig) -> BaseInferenceEngine:\n \n \n def infer_interactive(\n- config: InferenceConfig, *, input_image_bytes: Optional[bytes] = None\n+ config: InferenceConfig,\n+ *,\n+ input_image_bytes: Optional[bytes] = None,\n+ system_prompt: Optional[str] = None,\n ) -> None:\n \"\"\"Interactively provide the model response for a user-provided input.\"\"\"\n # Create engine up front to avoid reinitializing it for each input.\n@@ -57,6 +60,7 @@ def infer_interactive(\n inputs=[\n input_text,\n ],\n+ system_prompt=system_prompt,\n input_image_bytes=input_image_bytes,\n inference_engine=inference_engine,\n )\n@@ -71,6 +75,7 @@ def infer(\n config: InferenceConfig,\n inputs: Optional[list[str]] = None,\n inference_engine: Optional[BaseInferenceEngine] = None,\n+ system_prompt: Optional[str] = None,\n *,\n input_image_bytes: Optional[bytes] = None,\n ) -> list[Conversation]:\n@@ -81,6 +86,7 @@ def infer(\n inputs: A list of inputs for inference.\n inference_engine: The engine to use for inference. 
If unspecified, the engine\n will be inferred from `config`.\n+ system_prompt: System prompt for task-specific instructions.\n input_image_bytes: An input PNG image bytes to be used with `image+text` VLLMs.\n Only used in interactive mode.\n \n@@ -93,15 +99,21 @@ def infer(\n # Pass None if no conversations are provided.\n conversations = None\n if inputs is not None and len(inputs) > 0:\n+ base_message = []\n+ if system_prompt:\n+ base_message.append(Message(role=Role.SYSTEM, content=system_prompt))\n if input_image_bytes is None:\n conversations = [\n- Conversation(messages=[Message(role=Role.USER, content=content)])\n+ Conversation(\n+ messages=base_message + [Message(role=Role.USER, content=content)]\n+ )\n for content in inputs\n ]\n else:\n conversations = [\n Conversation(\n- messages=[\n+ messages=base_message\n+ + [\n Message(\n role=Role.USER,\n content=[\ndiff --git a/tests/unit/cli/test_cli_infer.py b/tests/unit/cli/test_cli_infer.py\nindex f5be3b6a4..453cc7504 100644\n--- a/tests/unit/cli/test_cli_infer.py\n+++ b/tests/unit/cli/test_cli_infer.py\n@@ -161,3 +161,64 @@ def test_infer_logging_levels(app, mock_infer, mock_infer_interactive):\n assert logger.level == logging.WARNING\n _ = runner.invoke(app, [\"-i\", \"--config\", yaml_path, \"-log\", \"CRITICAL\"])\n assert logger.level == logging.CRITICAL\n+\n+\n+def test_infer_with_system_prompt(app, mock_infer_interactive):\n+ with tempfile.TemporaryDirectory() as output_temp_dir:\n+ yaml_path = str(Path(output_temp_dir) / \"infer.yaml\")\n+\n+ config: InferenceConfig = _create_inference_config()\n+ config.to_yaml(yaml_path)\n+\n+ # Test with interactive mode and system prompt\n+ result = runner.invoke(\n+ app,\n+ [\n+ \"-i\",\n+ \"--config\",\n+ yaml_path,\n+ \"--system-prompt\",\n+ \"You are a helpful assistant\",\n+ ],\n+ )\n+ assert result.exit_code == 0\n+ mock_infer_interactive.assert_called_once_with(\n+ config, system_prompt=\"You are a helpful assistant\", input_image_bytes=None\n+ )\n+ mock_infer_interactive.reset_mock()\n+\n+\n+def test_infer_with_system_prompt_and_image(app, mock_infer_interactive):\n+ with tempfile.TemporaryDirectory() as output_temp_dir:\n+ yaml_path = str(Path(output_temp_dir) / \"infer.yaml\")\n+\n+ config: InferenceConfig = _create_inference_config()\n+ config.to_yaml(yaml_path)\n+\n+ test_image = PIL.Image.new(mode=\"RGB\", size=(32, 16))\n+ temp_io_output = io.BytesIO()\n+ test_image.save(temp_io_output, format=\"PNG\")\n+ image_bytes = temp_io_output.getvalue()\n+\n+ image_path = Path(output_temp_dir) / \"test_image.png\"\n+ with image_path.open(mode=\"wb\") as f:\n+ f.write(image_bytes)\n+\n+ result = runner.invoke(\n+ app,\n+ [\n+ \"-i\",\n+ \"--config\",\n+ yaml_path,\n+ \"--system-prompt\",\n+ \"You are a helpful assistant\",\n+ \"--image\",\n+ str(image_path),\n+ ],\n+ )\n+ assert result.exit_code == 0\n+ mock_infer_interactive.assert_called_once_with(\n+ config,\n+ system_prompt=\"You are a helpful assistant\",\n+ input_image_bytes=image_bytes,\n+ )\n"
}
|
[
{
"diff_hunk": "@@ -93,15 +99,21 @@ def infer(\n # Pass None if no conversations are provided.\n conversations = None\n if inputs is not None and len(inputs) > 0:\n+ base_message = []",
"line": null,
"original_line": 102,
"original_start_line": null,
"path": "src/oumi/infer.py",
"start_line": null,
"text": "@user1:\nnit: rename base_message to system_messages"
}
] |
1c85353decc65a867871783c5a9069c08ec1679b
|
diff --git a/src/oumi/__init__.py b/src/oumi/__init__.py
index cf70b65808..e9eef515dc 100644
--- a/src/oumi/__init__.py
+++ b/src/oumi/__init__.py
@@ -124,12 +124,17 @@ def evaluate(config: EvaluationConfig) -> list[dict[str, Any]]:
def infer_interactive(
- config: InferenceConfig, *, input_image_bytes: bytes | None = None
+ config: InferenceConfig,
+ *,
+ input_image_bytes: bytes | None = None,
+ system_prompt: str | None = None,
) -> None:
"""Interactively provide the model response for a user-provided input."""
import oumi.infer
- return oumi.infer.infer_interactive(config, input_image_bytes=input_image_bytes)
+ return oumi.infer.infer_interactive(
+ config, input_image_bytes=input_image_bytes, system_prompt=system_prompt
+ )
def infer(
diff --git a/src/oumi/cli/infer.py b/src/oumi/cli/infer.py
index ac5327f212..b640eceadc 100644
--- a/src/oumi/cli/infer.py
+++ b/src/oumi/cli/infer.py
@@ -44,6 +44,16 @@ def infer(
),
),
] = None,
+ system_prompt: Annotated[
+ Optional[str],
+ typer.Option(
+ "--system-prompt",
+ help=(
+ "System prompt for task-specific instructions. "
+ "Only used in interactive mode."
+ ),
+ ),
+ ] = None,
level: cli_utils.LOG_LEVEL_TYPE = None,
):
"""Run inference on a model.
@@ -57,6 +67,7 @@ def infer(
config: Path to the configuration file for inference.
interactive: Whether to run in an interactive session.
image: Path to the input image for `image+text` VLLMs.
+ system_prompt: System prompt for task-specific instructions.
level: The logging level for the specified command.
"""
extra_args = cli_utils.parse_extra_cli_args(ctx)
@@ -93,9 +104,10 @@ def infer(
"`input_path`."
)
return oumi_infer_interactive(
- parsed_config, input_image_bytes=input_image_png_bytes
+ parsed_config,
+ input_image_bytes=input_image_png_bytes,
+ system_prompt=system_prompt,
)
-
if parsed_config.input_path is None:
raise ValueError("One of `--interactive` or `input_path` must be provided.")
generations = oumi_infer(parsed_config)
diff --git a/src/oumi/infer.py b/src/oumi/infer.py
index 9c1c8c3f40..4f4241f188 100644
--- a/src/oumi/infer.py
+++ b/src/oumi/infer.py
@@ -41,7 +41,10 @@ def _get_engine(config: InferenceConfig) -> BaseInferenceEngine:
def infer_interactive(
- config: InferenceConfig, *, input_image_bytes: Optional[bytes] = None
+ config: InferenceConfig,
+ *,
+ input_image_bytes: Optional[bytes] = None,
+ system_prompt: Optional[str] = None,
) -> None:
"""Interactively provide the model response for a user-provided input."""
# Create engine up front to avoid reinitializing it for each input.
@@ -57,6 +60,7 @@ def infer_interactive(
inputs=[
input_text,
],
+ system_prompt=system_prompt,
input_image_bytes=input_image_bytes,
inference_engine=inference_engine,
)
@@ -73,6 +77,7 @@ def infer(
inference_engine: Optional[BaseInferenceEngine] = None,
*,
input_image_bytes: Optional[bytes] = None,
+ system_prompt: Optional[str] = None,
) -> list[Conversation]:
"""Runs batch inference for a model using the provided configuration.
@@ -81,6 +86,7 @@ def infer(
inputs: A list of inputs for inference.
inference_engine: The engine to use for inference. If unspecified, the engine
will be inferred from `config`.
+ system_prompt: System prompt for task-specific instructions.
input_image_bytes: An input PNG image bytes to be used with `image+text` VLLMs.
Only used in interactive mode.
@@ -93,25 +99,35 @@ def infer(
# Pass None if no conversations are provided.
conversations = None
if inputs is not None and len(inputs) > 0:
+ system_messages = []
+ if system_prompt:
+ system_messages.append(Message(role=Role.SYSTEM, content=system_prompt))
if input_image_bytes is None:
conversations = [
- Conversation(messages=[Message(role=Role.USER, content=content)])
+ Conversation(
+ messages=(
+ system_messages + [Message(role=Role.USER, content=content)]
+ )
+ )
for content in inputs
]
else:
conversations = [
Conversation(
- messages=[
- Message(
- role=Role.USER,
- content=[
- ContentItem(
- type=Type.IMAGE_BINARY, binary=input_image_bytes
- ),
- ContentItem(type=Type.TEXT, content=content),
- ],
- ),
- ]
+ messages=(
+ system_messages
+ + [
+ Message(
+ role=Role.USER,
+ content=[
+ ContentItem(
+ type=Type.IMAGE_BINARY, binary=input_image_bytes
+ ),
+ ContentItem(type=Type.TEXT, content=content),
+ ],
+ ),
+ ]
+ )
)
for content in inputs
]
diff --git a/tests/unit/cli/test_cli_infer.py b/tests/unit/cli/test_cli_infer.py
index f5be3b6a44..60ca5fec34 100644
--- a/tests/unit/cli/test_cli_infer.py
+++ b/tests/unit/cli/test_cli_infer.py
@@ -64,7 +64,9 @@ def test_infer_runs(app, mock_infer, mock_infer_interactive):
config: InferenceConfig = _create_inference_config()
config.to_yaml(yaml_path)
_ = runner.invoke(app, ["-i", "--config", yaml_path])
- mock_infer_interactive.assert_has_calls([call(config, input_image_bytes=None)])
+ mock_infer_interactive.assert_has_calls(
+ [call(config, input_image_bytes=None, system_prompt=None)]
+ )
def test_infer_with_overrides(app, mock_infer, mock_infer_interactive):
@@ -91,7 +93,7 @@ def test_infer_with_overrides(app, mock_infer, mock_infer_interactive):
expected_config.generation.max_new_tokens = 5
expected_config.engine = InferenceEngineType.VLLM
mock_infer_interactive.assert_has_calls(
- [call(expected_config, input_image_bytes=None)]
+ [call(expected_config, input_image_bytes=None, system_prompt=None)]
)
@@ -114,7 +116,7 @@ def test_infer_runs_with_image(app, mock_infer, mock_infer_interactive):
app, ["-i", "--config", yaml_path, "--image", str(image_path)]
)
mock_infer_interactive.assert_has_calls(
- [call(config, input_image_bytes=image_bytes)]
+ [call(config, input_image_bytes=image_bytes, system_prompt=None)]
)
@@ -161,3 +163,64 @@ def test_infer_logging_levels(app, mock_infer, mock_infer_interactive):
assert logger.level == logging.WARNING
_ = runner.invoke(app, ["-i", "--config", yaml_path, "-log", "CRITICAL"])
assert logger.level == logging.CRITICAL
+
+
+def test_infer_with_system_prompt(app, mock_infer_interactive):
+ with tempfile.TemporaryDirectory() as output_temp_dir:
+ yaml_path = str(Path(output_temp_dir) / "infer.yaml")
+
+ config: InferenceConfig = _create_inference_config()
+ config.to_yaml(yaml_path)
+
+ # Test with interactive mode and system prompt
+ result = runner.invoke(
+ app,
+ [
+ "-i",
+ "--config",
+ yaml_path,
+ "--system-prompt",
+ "You are a helpful assistant",
+ ],
+ )
+ assert result.exit_code == 0
+ mock_infer_interactive.assert_called_once_with(
+ config, system_prompt="You are a helpful assistant", input_image_bytes=None
+ )
+ mock_infer_interactive.reset_mock()
+
+
+def test_infer_with_system_prompt_and_image(app, mock_infer_interactive):
+ with tempfile.TemporaryDirectory() as output_temp_dir:
+ yaml_path = str(Path(output_temp_dir) / "infer.yaml")
+
+ config: InferenceConfig = _create_inference_config()
+ config.to_yaml(yaml_path)
+
+ test_image = PIL.Image.new(mode="RGB", size=(32, 16))
+ temp_io_output = io.BytesIO()
+ test_image.save(temp_io_output, format="PNG")
+ image_bytes = temp_io_output.getvalue()
+
+ image_path = Path(output_temp_dir) / "test_image.png"
+ with image_path.open(mode="wb") as f:
+ f.write(image_bytes)
+
+ result = runner.invoke(
+ app,
+ [
+ "-i",
+ "--config",
+ yaml_path,
+ "--system-prompt",
+ "You are a helpful assistant",
+ "--image",
+ str(image_path),
+ ],
+ )
+ assert result.exit_code == 0
+ mock_infer_interactive.assert_called_once_with(
+ config,
+ system_prompt="You are a helpful assistant",
+ input_image_bytes=image_bytes,
+ )
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
}
|
oumi-ai__oumi-1439@3a61935
|
oumi-ai/oumi
|
Python
| 1,439
|
Added fetch command and modified infer command to resolve oumi://
|
# Description
A new CLI endpoint ```fetch``` has been added that allows users to download configs directly from the GitHub repo. The same function is then reused to resolve the ```oumi://``` prefix: the config file is downloaded from the GitHub repo and the path of the newly downloaded file is handed to the ```infer``` command for further use.
A test has been added in ```test_cli_main.py``` to check for successful registration of the ```fetch``` entry point.
A new test file named ```test_cli_fetch.py``` has been added with three tests covering the default directory, an explicit ```--output-dir```, and passing the output directory via the ```OUMI_DIR``` environment variable.
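As a rough sketch of the resolution step described above (prefix handling and directory defaults follow the diff below; the standalone function here is illustrative rather than the exact code):

```python
# Sketch only: strip the oumi:// prefix and pick a local cache directory.
import os
from pathlib import Path
from typing import Optional

OUMI_PREFIX = "oumi://"


def resolve_oumi_prefix(config_path: str, output_dir: Optional[Path] = None) -> tuple[str, Path]:
    if config_path.lower().startswith(OUMI_PREFIX):
        config_path = config_path[len(OUMI_PREFIX):]
    # Precedence: explicit --output-dir, then OUMI_DIR env var, then ~/.oumi/configs.
    config_dir = Path(output_dir or os.environ.get("OUMI_DIR") or "~/.oumi/configs").expanduser()
    config_dir.mkdir(parents=True, exist_ok=True)
    return config_path, config_dir
```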
## Related issues
Fixes #1374 (issue)
## Before submitting
- [ ] This PR only changes documentation. (You can ignore the following checks in that case)
- [x] Did you read the [contributor guideline](https://github.com/oumi-ai/oumi/blob/main/CONTRIBUTING.md) Pull Request guidelines?
- [x] Did you link the issue(s) related to this PR in the section above?
- [x] Did you add / update tests where needed?
## Reviewers
At least one review from a member of `oumi-ai/oumi-staff` is required.
|
2025-02-17T15:32:15Z
|
[Feature] Add a new "oumi fetch" command to pull configs via the CLI
### Feature request
Create a new CLI command `oumi fetch` that will download configs from the oumi github repo to a local directory specified by an environment variable.
Any CLI actions using the `oumi://` prefix will look for the config in that folder.
### Motivation / references
This makes running Oumi much less disruptive. Users could even set the environment variable to a clone of the oumi repo to run the CLI anywhere. Consider the following:
```
OUMI_DIR=~/.oumi/configs
oumi infer -c oumi://smollm/inference/135m_infer.yaml \
--generation.max_new_tokens 40 \
--generation.temperature 0.7 \
--interactive
```
### Your contribution
This change requires:
- Adding a new CLI entrypoint
- Updating the config loading logic for each CLI entrypoint to check for the `oumi://` prefix and resolve it accordingly.
|
I recently worked on a CLI related issue so the steps for this issue seem clear to me. Can this be assigned to me? I already have the code for this (I was just playing around to see if my process was correct😅)
Definitely, thanks for volunteering, @Spaarsh ! Let me know if you have any questions about this task!
|
[
{
"body": "### Feature request\n\nCreate a new CLI command `oumi fetch` that will download configs from the oumi github repo to a local directory specified by an environment variable.\n\nAny CLI actions using the `oumi://` prefix will look for the config in that folder.\n\n\n\n### Motivation / references\n\nThis makes running Oumi much less disruptive. Users could even set the environment variable to a clone of the oumi repo to run the CLI anywhere. Consider the following:\n\n```\nOUMI_DIR=~/.oumi/configs\n\n\noumi infer -c oumi://smollm/inference/135m_infer.yaml \\\n --generation.max_new_tokens 40 \\\n --generation.temperature 0.7 \\\n --interactive\n```\n\n### Your contribution\n\nThis change requires:\n\n- Adding a new CLI entrypoint\n- Updating the config loading logic for each CLI entrypoint to check for the `oumi://` prefix and resolve it accordingly.",
"number": 1374,
"title": "[Feature] Add a new \"oumi fetch\" command to pull configs via the CLI"
}
] |
a5f4be09d8e1c72ac1f2eec87ccdd2bb832a9d2d
|
{
"head_commit": "3a61935f9ea3b55faffe08de6a029b3b68829533",
"head_commit_message": "Adding fetch registration test",
"patch_to_review": "diff --git a/src/oumi/cli/cli_utils.py b/src/oumi/cli/cli_utils.py\nindex c890b1dbc..d32e78681 100644\n--- a/src/oumi/cli/cli_utils.py\n+++ b/src/oumi/cli/cli_utils.py\n@@ -15,6 +15,7 @@\n import logging\n import os\n from enum import Enum\n+from pathlib import Path\n from typing import Annotated, Optional\n \n import typer\n@@ -137,3 +138,46 @@ def set_log_level(level: Optional[LogLevel]):\n callback=set_log_level,\n ),\n ]\n+\n+\n+def resolve_oumi_prefix(\n+ config_path: str, output_dir: Optional[Path] = None\n+) -> tuple[str, Path]:\n+ \"\"\"Resolves oumi:// prefix and determines output directory.\n+\n+ Args:\n+ config_path: Path that may contain oumi:// prefix\n+ output_dir: Optional output directory override\n+\n+ Returns:\n+ tuple[str, Path]: (cleaned path, output directory)\n+ \"\"\"\n+ config_path = config_path[7:]\n+\n+ config_dir = output_dir or os.environ.get(\"OUMI_DIR\") or \"~/.oumi/configs\"\n+ config_dir = Path(config_dir).expanduser()\n+ config_dir.mkdir(parents=True, exist_ok=True)\n+\n+ return config_path, config_dir\n+\n+\n+def resolve_and_fetch_config(\n+ config_path: str, output_dir: Optional[Path] = None\n+) -> Path:\n+ \"\"\"Resolve oumi:// prefix and fetch config if needed.\n+\n+ Args:\n+ config_path: Original config path that may contain oumi:// prefix\n+ output_dir: Optional override for output directory\n+\n+ Returns:\n+ Path: Local path to the config file\n+ \"\"\"\n+ if not config_path.startswith(\"oumi://\"):\n+ return Path(config_path)\n+\n+ from oumi.cli.fetch import fetch\n+\n+ fetch(config_path, output_dir)\n+\n+ return Path(config_path)\ndiff --git a/src/oumi/cli/fetch.py b/src/oumi/cli/fetch.py\nnew file mode 100644\nindex 000000000..d42af5767\n--- /dev/null\n+++ b/src/oumi/cli/fetch.py\n@@ -0,0 +1,82 @@\n+# Copyright 2025 - Oumi\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+\n+from pathlib import Path\n+from typing import Annotated, Optional\n+\n+import requests\n+import typer\n+import yaml\n+\n+from oumi.cli.cli_utils import resolve_oumi_prefix\n+from oumi.utils.logging import logger\n+\n+OUMI_GITHUB_RAW = \"https://raw.githubusercontent.com/oumi-ai/oumi/main/configs/recipes\"\n+OUMI_DIR = \"~/.oumi/configs\"\n+\n+\n+def fetch(\n+ config_path: Annotated[\n+ str,\n+ typer.Argument(\n+ help=\"Path to config (e.g. 
oumi://smollm/inference/135m_infer.yaml)\"\n+ ),\n+ ],\n+ output_dir: Annotated[\n+ Optional[Path],\n+ typer.Option(\n+ \"--output-dir\",\n+ \"-o\",\n+ help=(\n+ \"Directory to save configs \"\n+ \"(defaults to OUMI_DIR env var or ~/.oumi/configs)\"\n+ ),\n+ ),\n+ ] = None,\n+) -> None:\n+ \"\"\"Fetch configuration files from GitHub repository.\"\"\"\n+ # Remove oumi:// prefix if present\n+ if config_path.startswith(\"oumi://\"):\n+ config_path, config_dir = resolve_oumi_prefix(config_path, output_dir)\n+\n+ else:\n+ # raise error\n+ logger.error(\"Invalid config path\")\n+ raise typer.Exit(1)\n+\n+ try:\n+ # Fetch from GitHub\n+ github_url = f\"{OUMI_GITHUB_RAW}/{config_path}\"\n+ response = requests.get(github_url)\n+ response.raise_for_status()\n+ config_content = response.text\n+\n+ # Validate YAML\n+ yaml.safe_load(config_content)\n+\n+ # Save to destination\n+ local_path = (config_dir or Path(OUMI_DIR).expanduser()) / config_path\n+ local_path.parent.mkdir(parents=True, exist_ok=True)\n+\n+ with open(local_path, \"w\") as f:\n+ f.write(config_content)\n+\n+ logger.info(f\"Successfully downloaded config to {local_path}\")\n+\n+ except requests.RequestException as e:\n+ logger.error(f\"Failed to download config from GitHub: {e}\")\n+ raise typer.Exit(1)\n+ except yaml.YAMLError:\n+ logger.error(\"Invalid YAML configuration\")\n+ raise typer.Exit(1)\ndiff --git a/src/oumi/cli/infer.py b/src/oumi/cli/infer.py\nindex b521cb0ea..59dbe5f07 100644\n--- a/src/oumi/cli/infer.py\n+++ b/src/oumi/cli/infer.py\n@@ -13,6 +13,7 @@\n # limitations under the License.\n \n import os\n+from pathlib import Path\n from typing import Annotated, Final, Optional\n \n import typer\n@@ -21,6 +22,8 @@\n from oumi.utils.logging import logger\n \n _DEFAULT_CLI_PDF_DPI: Final[int] = 200\n+OUMI_GITHUB_RAW = \"https://raw.githubusercontent.com/oumi-ai/oumi/main/configs/recipes\"\n+OUMI_DIR = \"~/.oumi/configs\"\n \n \n def infer(\n@@ -32,6 +35,16 @@ def infer(\n help=\"Path to the configuration file for inference.\",\n ),\n ] = None,\n+ output_dir: Annotated[\n+ Optional[Path],\n+ typer.Option(\n+ \"--output-dir\",\n+ help=(\n+ \"Directory to save configs \"\n+ \"(defaults to OUMI_DIR env var or ~/.oumi/configs)\"\n+ ),\n+ ),\n+ ] = None,\n interactive: Annotated[\n bool,\n typer.Option(\"-i\", \"--interactive\", help=\"Run in an interactive session.\"),\n@@ -67,6 +80,8 @@ def infer(\n Args:\n ctx: The Typer context object.\n config: Path to the configuration file for inference.\n+ output_dir: Directory to save configs\n+ (defaults to OUMI_DIR env var or ~/.oumi/configs).\n interactive: Whether to run in an interactive session.\n image: Path to the input image for `image+text` VLLMs.\n system_prompt: System prompt for task-specific instructions.\n@@ -74,6 +89,12 @@ def infer(\n \"\"\"\n extra_args = cli_utils.parse_extra_cli_args(ctx)\n \n+ if config:\n+ if config.startswith(\"oumi://\"):\n+ _ = cli_utils.resolve_and_fetch_config(config, output_dir)\n+ cleaned_path, config_dir = cli_utils.resolve_oumi_prefix(config, output_dir)\n+ config = str(config_dir / cleaned_path)\n+\n # Delayed imports\n from oumi import infer as oumi_infer\n from oumi import infer_interactive as oumi_infer_interactive\ndiff --git a/src/oumi/cli/main.py b/src/oumi/cli/main.py\nindex 4e566346b..5cb912d1f 100644\n--- a/src/oumi/cli/main.py\n+++ b/src/oumi/cli/main.py\n@@ -21,6 +21,7 @@\n from oumi.cli.distributed_run import accelerate, torchrun\n from oumi.cli.env import env\n from oumi.cli.evaluate import evaluate\n+from oumi.cli.fetch import 
fetch\n from oumi.cli.infer import infer\n from oumi.cli.judge import conversations, dataset, model\n from oumi.cli.launch import cancel, down, status, stop, up, which\n@@ -108,6 +109,11 @@ def get_app() -> typer.Typer:\n \"with reasonable default values for distributed training.\"\n ),\n )\n+\n+ app.command(\n+ help=\"Fetch configuration files from GitHub repository.\",\n+ )(fetch)\n+\n return app\n \n \ndiff --git a/tests/unit/cli/test_cli_fetch.py b/tests/unit/cli/test_cli_fetch.py\nnew file mode 100644\nindex 000000000..8267b6bfc\n--- /dev/null\n+++ b/tests/unit/cli/test_cli_fetch.py\n@@ -0,0 +1,86 @@\n+import tempfile\n+from pathlib import Path\n+from unittest.mock import Mock, patch\n+\n+import pytest\n+import typer\n+from typer.testing import CliRunner\n+\n+from oumi.cli.fetch import fetch\n+\n+runner = CliRunner()\n+\n+\[email protected]\n+def app():\n+ fake_app = typer.Typer()\n+ fake_app.command()(fetch)\n+ return fake_app\n+\n+\[email protected]\n+def mock_response():\n+ response = Mock()\n+ response.text = \"key: value\"\n+ response.raise_for_status.return_value = None\n+ return response\n+\n+\[email protected]\n+def mock_requests(mock_response):\n+ with patch(\"oumi.cli.fetch.requests\") as mock:\n+ mock.get.return_value = mock_response\n+ yield mock\n+\n+\n+def test_fetch_with_explicit_output_dir(app, mock_requests):\n+ with tempfile.TemporaryDirectory() as temp_dir:\n+ # Given\n+ output_dir = Path(temp_dir)\n+ config_path = \"oumi://smollm/inference/135m_infer.yaml\"\n+ expected_path = output_dir / \"smollm/inference/135m_infer.yaml\"\n+\n+ # When\n+ result = runner.invoke(app, [config_path, \"-o\", str(output_dir)])\n+\n+ # Then\n+ assert result.exit_code == 0\n+ mock_requests.get.assert_called_once()\n+ assert expected_path.exists()\n+\n+\n+def test_fetch_with_oumi_dir_env(app, mock_requests, monkeypatch):\n+ with tempfile.TemporaryDirectory() as temp_dir:\n+ # Given\n+ config_path = \"oumi://smollm/inference/135m_infer.yaml\"\n+ expected_path = Path(temp_dir) / \"smollm/inference/135m_infer.yaml\"\n+ monkeypatch.setenv(\"OUMI_DIR\", temp_dir)\n+\n+ # When\n+ result = runner.invoke(app, [config_path])\n+\n+ # Then\n+ assert result.exit_code == 0\n+ mock_requests.get.assert_called_once()\n+ assert expected_path.exists()\n+\n+\n+def test_fetch_with_default_dir(app, mock_requests, monkeypatch):\n+ # Given\n+ config_path = \"oumi://smollm/inference/135m_infer.yaml\"\n+ expected_path = Path.home() / \".oumi/configs/smollm/inference/135m_infer.yaml\"\n+ monkeypatch.delenv(\"OUMI_DIR\", raising=False)\n+\n+ # When\n+ result = runner.invoke(app, [config_path])\n+\n+ # Then\n+ assert result.exit_code == 0\n+ mock_requests.get.assert_called_once()\n+ assert expected_path.exists()\n+\n+ # Cleanup\n+ if expected_path.exists():\n+ expected_path.unlink()\n+ if expected_path.parent.exists():\n+ expected_path.parent.rmdir()\ndiff --git a/tests/unit/cli/test_cli_main.py b/tests/unit/cli/test_cli_main.py\nindex eee903c58..be53378cd 100644\n--- a/tests/unit/cli/test_cli_main.py\n+++ b/tests/unit/cli/test_cli_main.py\n@@ -8,6 +8,7 @@\n from oumi.cli.distributed_run import accelerate, torchrun\n from oumi.cli.env import env\n from oumi.cli.evaluate import evaluate\n+from oumi.cli.fetch import fetch\n from oumi.cli.infer import infer\n from oumi.cli.judge import conversations, dataset\n from oumi.cli.launch import cancel, down, status, stop, up, which\n@@ -49,6 +50,13 @@ def mock_infer():\n yield m_infer\n \n \[email protected]\n+def mock_fetch():\n+ with patch(\"oumi.cli.main.fetch\") as 
m_fetch:\n+ _copy_command(m_fetch, fetch)\n+ yield m_fetch\n+\n+\n @pytest.fixture\n def mock_down():\n with patch(\"oumi.cli.main.down\") as m_down:\n@@ -147,6 +155,11 @@ def test_main_infer_registered(mock_infer):\n mock_infer.assert_called_once()\n \n \n+def test_main_fetch_registered(mock_fetch):\n+ _ = runner.invoke(get_app(), [\"fetch\", \"some/path\", \"--output-dir\", \"output/path\"])\n+ mock_fetch.assert_called_once()\n+\n+\n def test_main_eval_registered(mock_eval):\n _ = runner.invoke(\n get_app(), [\"eval\", \"--config\", \"some/path\", \"--allow_extra\" \"args\"]\n"
}
|
[
{
"diff_hunk": "@@ -137,3 +138,46 @@ def set_log_level(level: Optional[LogLevel]):\n callback=set_log_level,\n ),\n ]\n+\n+\n+def resolve_oumi_prefix(\n+ config_path: str, output_dir: Optional[Path] = None\n+) -> tuple[str, Path]:\n+ \"\"\"Resolves oumi:// prefix and determines output directory.\n+\n+ Args:\n+ config_path: Path that may contain oumi:// prefix\n+ output_dir: Optional output directory override\n+\n+ Returns:\n+ tuple[str, Path]: (cleaned path, output directory)\n+ \"\"\"\n+ config_path = config_path[7:]\n+\n+ config_dir = output_dir or os.environ.get(\"OUMI_DIR\") or \"~/.oumi/configs\"\n+ config_dir = Path(config_dir).expanduser()\n+ config_dir.mkdir(parents=True, exist_ok=True)\n+\n+ return config_path, config_dir\n+\n+\n+def resolve_and_fetch_config(\n+ config_path: str, output_dir: Optional[Path] = None\n+) -> Path:\n+ \"\"\"Resolve oumi:// prefix and fetch config if needed.\n+\n+ Args:\n+ config_path: Original config path that may contain oumi:// prefix\n+ output_dir: Optional override for output directory\n+\n+ Returns:\n+ Path: Local path to the config file\n+ \"\"\"\n+ if not config_path.startswith(\"oumi://\"):",
"line": null,
"original_line": 176,
"original_start_line": null,
"path": "src/oumi/cli/cli_utils.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n if not config_path.lower().startswith(\"oumi://\"):\r\n```"
},
{
"diff_hunk": "@@ -0,0 +1,82 @@\n+# Copyright 2025 - Oumi\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+\n+from pathlib import Path\n+from typing import Annotated, Optional\n+\n+import requests\n+import typer\n+import yaml\n+\n+from oumi.cli.cli_utils import resolve_oumi_prefix\n+from oumi.utils.logging import logger\n+\n+OUMI_GITHUB_RAW = \"https://raw.githubusercontent.com/oumi-ai/oumi/main/configs/recipes\"\n+OUMI_DIR = \"~/.oumi/configs\"\n+\n+\n+def fetch(\n+ config_path: Annotated[\n+ str,\n+ typer.Argument(\n+ help=\"Path to config (e.g. oumi://smollm/inference/135m_infer.yaml)\"\n+ ),\n+ ],\n+ output_dir: Annotated[\n+ Optional[Path],\n+ typer.Option(\n+ \"--output-dir\",\n+ \"-o\",\n+ help=(\n+ \"Directory to save configs \"\n+ \"(defaults to OUMI_DIR env var or ~/.oumi/configs)\"\n+ ),\n+ ),\n+ ] = None,\n+) -> None:\n+ \"\"\"Fetch configuration files from GitHub repository.\"\"\"\n+ # Remove oumi:// prefix if present\n+ if config_path.startswith(\"oumi://\"):\n+ config_path, config_dir = resolve_oumi_prefix(config_path, output_dir)\n+\n+ else:\n+ # raise error\n+ logger.error(\"Invalid config path\")\n+ raise typer.Exit(1)",
"line": null,
"original_line": 56,
"original_start_line": 50,
"path": "src/oumi/cli/fetch.py",
"start_line": null,
"text": "@user1:\nWith my change above, you can simplify this since `resolve_oumi_prefix(...)` now handles with and without the oumi prefix:\r\n\r\n```suggestion\r\n config_path, config_dir = resolve_oumi_prefix(config_path, output_dir)\r\n```"
},
{
"diff_hunk": "@@ -67,13 +80,21 @@ def infer(\n Args:\n ctx: The Typer context object.\n config: Path to the configuration file for inference.\n+ output_dir: Directory to save configs\n+ (defaults to OUMI_DIR env var or ~/.oumi/configs).\n interactive: Whether to run in an interactive session.\n image: Path to the input image for `image+text` VLLMs.\n system_prompt: System prompt for task-specific instructions.\n level: The logging level for the specified command.\n \"\"\"\n extra_args = cli_utils.parse_extra_cli_args(ctx)\n \n+ if config:\n+ if config.startswith(\"oumi://\"):\n+ _ = cli_utils.resolve_and_fetch_config(config, output_dir)\n+ cleaned_path, config_dir = cli_utils.resolve_oumi_prefix(config, output_dir)\n+ config = str(config_dir / cleaned_path)",
"line": null,
"original_line": 96,
"original_start_line": 93,
"path": "src/oumi/cli/infer.py",
"start_line": null,
"text": "@user1:\nWith my suggested changes to let `resolve_and_fetch_config` handle all prefixes, we can simplify this:\r\n\r\n```suggestion\r\n config = cli_utils.resolve_and_fetch_config(config, output_dir)\r\n```"
},
{
"diff_hunk": "@@ -21,6 +22,8 @@\n from oumi.utils.logging import logger\n \n _DEFAULT_CLI_PDF_DPI: Final[int] = 200\n+OUMI_GITHUB_RAW = \"https://raw.githubusercontent.com/oumi-ai/oumi/main/configs/recipes\"\n+OUMI_DIR = \"~/.oumi/configs\"",
"line": null,
"original_line": 26,
"original_start_line": 25,
"path": "src/oumi/cli/infer.py",
"start_line": null,
"text": "@user1:\nLet's delete these, I believe they're unused in this file"
},
{
"diff_hunk": "@@ -108,6 +109,11 @@ def get_app() -> typer.Typer:\n \"with reasonable default values for distributed training.\"\n ),\n )\n+\n+ app.command(\n+ help=\"Fetch configuration files from GitHub repository.\",",
"line": null,
"original_line": 114,
"original_start_line": null,
"path": "src/oumi/cli/main.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n help=\"Fetch configuration files from the oumi GitHub repository.\",\r\n```\r\n\r\nSmall nit :)"
},
{
"diff_hunk": "@@ -0,0 +1,82 @@\n+# Copyright 2025 - Oumi\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+\n+from pathlib import Path\n+from typing import Annotated, Optional\n+\n+import requests\n+import typer\n+import yaml\n+\n+from oumi.cli.cli_utils import resolve_oumi_prefix\n+from oumi.utils.logging import logger\n+\n+OUMI_GITHUB_RAW = \"https://raw.githubusercontent.com/oumi-ai/oumi/main/configs/recipes\"\n+OUMI_DIR = \"~/.oumi/configs\"\n+\n+\n+def fetch(\n+ config_path: Annotated[\n+ str,\n+ typer.Argument(\n+ help=\"Path to config (e.g. oumi://smollm/inference/135m_infer.yaml)\"\n+ ),\n+ ],\n+ output_dir: Annotated[\n+ Optional[Path],\n+ typer.Option(\n+ \"--output-dir\",\n+ \"-o\",\n+ help=(\n+ \"Directory to save configs \"\n+ \"(defaults to OUMI_DIR env var or ~/.oumi/configs)\"\n+ ),\n+ ),\n+ ] = None,\n+) -> None:\n+ \"\"\"Fetch configuration files from GitHub repository.\"\"\"\n+ # Remove oumi:// prefix if present\n+ if config_path.startswith(\"oumi://\"):\n+ config_path, config_dir = resolve_oumi_prefix(config_path, output_dir)\n+\n+ else:\n+ # raise error\n+ logger.error(\"Invalid config path\")\n+ raise typer.Exit(1)\n+\n+ try:\n+ # Fetch from GitHub\n+ github_url = f\"{OUMI_GITHUB_RAW}/{config_path}\"",
"line": null,
"original_line": 60,
"original_start_line": null,
"path": "src/oumi/cli/fetch.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n github_url = f\"{OUMI_GITHUB_RAW}/{config_path.lstrip('/')}\"\r\n```\r\nThis lets us gracefully handle the user input if there are any leading slashes"
},
{
"diff_hunk": "@@ -0,0 +1,82 @@\n+# Copyright 2025 - Oumi\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+\n+from pathlib import Path\n+from typing import Annotated, Optional\n+\n+import requests\n+import typer\n+import yaml\n+\n+from oumi.cli.cli_utils import resolve_oumi_prefix\n+from oumi.utils.logging import logger\n+\n+OUMI_GITHUB_RAW = \"https://raw.githubusercontent.com/oumi-ai/oumi/main/configs/recipes\"",
"line": null,
"original_line": 25,
"original_start_line": null,
"path": "src/oumi/cli/fetch.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\nOUMI_GITHUB_RAW = \"https://raw.githubusercontent.com/oumi-ai/oumi/main\"\r\n```\r\n\r\nLet's make this the top level directory so users can request samples from `/configs/examples` or `/configs/projects` as well"
},
{
"diff_hunk": "@@ -137,3 +138,46 @@ def set_log_level(level: Optional[LogLevel]):\n callback=set_log_level,\n ),\n ]\n+\n+\n+def resolve_oumi_prefix(\n+ config_path: str, output_dir: Optional[Path] = None\n+) -> tuple[str, Path]:\n+ \"\"\"Resolves oumi:// prefix and determines output directory.\n+\n+ Args:\n+ config_path: Path that may contain oumi:// prefix\n+ output_dir: Optional output directory override\n+\n+ Returns:\n+ tuple[str, Path]: (cleaned path, output directory)\n+ \"\"\"\n+ config_path = config_path[7:]",
"line": null,
"original_line": 155,
"original_start_line": null,
"path": "src/oumi/cli/cli_utils.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n oumi_prefix = \"oumi://\"\r\n if config_path.lower().startswith(oumi_prefix):\r\n config_path = config_path[len(oumi_prefix):]\r\n```\r\n\r\nThis allows users to use the `oumi://` prefix when fetching, but it's not required.\r\n\r\nSo they could call `oumi fetch /configs/recipes/some_config.yaml`\r\nor \r\n`oumi fetch oumi://configs/recipes/some_config.yaml`"
}
] |
126578417979b7a3110f19d80674336718393a9f
|
diff --git a/src/oumi/cli/cli_utils.py b/src/oumi/cli/cli_utils.py
index c890b1dbc8..b64dded85c 100644
--- a/src/oumi/cli/cli_utils.py
+++ b/src/oumi/cli/cli_utils.py
@@ -15,6 +15,7 @@
import logging
import os
from enum import Enum
+from pathlib import Path
from typing import Annotated, Optional
import typer
@@ -137,3 +138,48 @@ def set_log_level(level: Optional[LogLevel]):
callback=set_log_level,
),
]
+
+
+def resolve_oumi_prefix(
+ config_path: str, output_dir: Optional[Path] = None
+) -> tuple[str, Path]:
+ """Resolves oumi:// prefix and determines output directory.
+
+ Args:
+ config_path: Path that may contain oumi:// prefix
+ output_dir: Optional output directory override
+
+ Returns:
+ tuple[str, Path]: (cleaned path, output directory)
+ """
+ oumi_prefix = "oumi://"
+ if config_path.lower().startswith(oumi_prefix):
+ config_path = config_path[len(oumi_prefix) :]
+
+ config_dir = output_dir or os.environ.get("OUMI_DIR") or "~/.oumi/configs"
+ config_dir = Path(config_dir).expanduser()
+ config_dir.mkdir(parents=True, exist_ok=True)
+
+ return config_path, config_dir
+
+
+def resolve_and_fetch_config(
+ config_path: str, output_dir: Optional[Path] = None
+) -> Path:
+ """Resolve oumi:// prefix and fetch config if needed.
+
+ Args:
+ config_path: Original config path that may contain oumi:// prefix
+ output_dir: Optional override for output directory
+
+ Returns:
+ Path: Local path to the config file
+ """
+ if not config_path.lower().startswith("oumi://"):
+ return Path(config_path)
+
+ from oumi.cli.fetch import fetch
+
+ fetch(config_path, output_dir)
+
+ return Path(config_path)
diff --git a/src/oumi/cli/fetch.py b/src/oumi/cli/fetch.py
new file mode 100644
index 0000000000..eae07c5631
--- /dev/null
+++ b/src/oumi/cli/fetch.py
@@ -0,0 +1,88 @@
+# Copyright 2025 - Oumi
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from pathlib import Path
+from typing import Annotated, Optional
+
+import requests
+import typer
+import yaml
+from requests.exceptions import RequestException
+
+from oumi.cli.cli_utils import resolve_oumi_prefix
+from oumi.utils.logging import logger
+
+OUMI_GITHUB_RAW = "https://raw.githubusercontent.com/oumi-ai/oumi/main"
+OUMI_DIR = "~/.oumi/configs"
+
+
+def fetch(
+ config_path: Annotated[
+ str,
+ typer.Argument(
+ help="Path to config (e.g. oumi://smollm/inference/135m_infer.yaml)"
+ ),
+ ],
+ output_dir: Annotated[
+ Optional[Path],
+ typer.Option(
+ "--output-dir",
+ "-o",
+ help=(
+ "Directory to save configs "
+ "(defaults to OUMI_DIR env var or ~/.oumi/configs)"
+ ),
+ ),
+ ] = None,
+ force: Annotated[
+ bool, typer.Option("--force", "-f", help="Overwrite existing config if present")
+ ] = False,
+) -> None:
+ """Fetch configuration files from GitHub repository."""
+ # Remove oumi:// prefix if present
+ config_path, config_dir = resolve_oumi_prefix(config_path, output_dir)
+
+ try:
+ # Check destination first
+ local_path = (config_dir or Path(OUMI_DIR).expanduser()) / config_path
+ if local_path.exists() and not force:
+ msg = f"Config already exists at {local_path}. Use --force to overwrite"
+ logger.error(msg)
+ typer.echo(msg, err=True)
+ raise typer.Exit(code=1)
+
+ # Fetch from GitHub
+ github_url = f"{OUMI_GITHUB_RAW}/{config_path.lstrip('/')}"
+ response = requests.get(github_url)
+ response.raise_for_status()
+ config_content = response.text
+
+ # Validate YAML
+ yaml.safe_load(config_content)
+
+ # Save to destination
+ if local_path.exists():
+ logger.warning(f"Overwriting existing config at {local_path}")
+ local_path.parent.mkdir(parents=True, exist_ok=True)
+
+ with open(local_path, "w") as f:
+ f.write(config_content)
+ logger.info(f"Successfully downloaded config to {local_path}")
+
+ except RequestException as e:
+ logger.error(f"Failed to download config from GitHub: {e}")
+ raise typer.Exit(1)
+ except yaml.YAMLError:
+ logger.error("Invalid YAML configuration")
+ raise typer.Exit(1)
diff --git a/src/oumi/cli/infer.py b/src/oumi/cli/infer.py
index b521cb0ead..1a48a093f0 100644
--- a/src/oumi/cli/infer.py
+++ b/src/oumi/cli/infer.py
@@ -13,6 +13,7 @@
# limitations under the License.
import os
+from pathlib import Path
from typing import Annotated, Final, Optional
import typer
@@ -32,6 +33,16 @@ def infer(
help="Path to the configuration file for inference.",
),
] = None,
+ output_dir: Annotated[
+ Optional[Path],
+ typer.Option(
+ "--output-dir",
+ help=(
+ "Directory to save configs "
+ "(defaults to OUMI_DIR env var or ~/.oumi/configs)"
+ ),
+ ),
+ ] = None,
interactive: Annotated[
bool,
typer.Option("-i", "--interactive", help="Run in an interactive session."),
@@ -67,6 +78,8 @@ def infer(
Args:
ctx: The Typer context object.
config: Path to the configuration file for inference.
+ output_dir: Directory to save configs
+ (defaults to OUMI_DIR env var or ~/.oumi/configs).
interactive: Whether to run in an interactive session.
image: Path to the input image for `image+text` VLLMs.
system_prompt: System prompt for task-specific instructions.
@@ -74,6 +87,9 @@ def infer(
"""
extra_args = cli_utils.parse_extra_cli_args(ctx)
+ if config:
+ config = str(cli_utils.resolve_and_fetch_config(config, output_dir))
+
# Delayed imports
from oumi import infer as oumi_infer
from oumi import infer_interactive as oumi_infer_interactive
diff --git a/src/oumi/cli/main.py b/src/oumi/cli/main.py
index 4e566346bb..d51f62bc76 100644
--- a/src/oumi/cli/main.py
+++ b/src/oumi/cli/main.py
@@ -21,6 +21,7 @@
from oumi.cli.distributed_run import accelerate, torchrun
from oumi.cli.env import env
from oumi.cli.evaluate import evaluate
+from oumi.cli.fetch import fetch
from oumi.cli.infer import infer
from oumi.cli.judge import conversations, dataset, model
from oumi.cli.launch import cancel, down, status, stop, up, which
@@ -108,6 +109,11 @@ def get_app() -> typer.Typer:
"with reasonable default values for distributed training."
),
)
+
+ app.command(
+ help="Fetch configuration files from the oumi GitHub repository.",
+ )(fetch)
+
return app
diff --git a/tests/unit/cli/test_cli_fetch.py b/tests/unit/cli/test_cli_fetch.py
new file mode 100644
index 0000000000..695c7d7d71
--- /dev/null
+++ b/tests/unit/cli/test_cli_fetch.py
@@ -0,0 +1,197 @@
+import tempfile
+from pathlib import Path
+from unittest.mock import Mock, patch
+
+import pytest
+import typer
+from typer.testing import CliRunner
+
+from oumi.cli.fetch import fetch
+
+runner = CliRunner()
+
+
[email protected]
+def app():
+ fake_app = typer.Typer()
+ fake_app.command()(fetch)
+ return fake_app
+
+
[email protected]
+def mock_response():
+ response = Mock()
+ response.text = "key: value"
+ response.raise_for_status.return_value = None
+ return response
+
+
[email protected]
+def mock_requests(mock_response):
+ with patch("oumi.cli.fetch.requests") as mock:
+ mock.get.return_value = mock_response
+ yield mock
+
+
+def test_fetch_with_oumi_prefix_and_explicit_output_dir(app, mock_requests):
+ with tempfile.TemporaryDirectory() as temp_dir:
+ # Given
+ output_dir = Path(temp_dir)
+ config_path = "oumi://configs/recipes/smollm/inference/135m_infer.yaml"
+ expected_path = output_dir / "configs/recipes/smollm/inference/135m_infer.yaml"
+
+ # When
+ result = runner.invoke(app, [config_path, "-o", str(output_dir)])
+
+ # Then
+ assert result.exit_code == 0
+ mock_requests.get.assert_called_once()
+ assert expected_path.exists()
+
+
+def test_fetch_without_prefix_and_explicit_output_dir(app, mock_requests):
+ with tempfile.TemporaryDirectory() as temp_dir:
+ # Given
+ output_dir = Path(temp_dir)
+ config_path = (
+ "configs/recipes/smollm/inference/135m_infer.yaml" # No oumi:// prefix
+ )
+ expected_path = output_dir / "configs/recipes/smollm/inference/135m_infer.yaml"
+
+ # When
+ result = runner.invoke(app, [config_path, "-o", str(output_dir)])
+
+ # Then
+ assert result.exit_code == 0
+ mock_requests.get.assert_called_once()
+ assert expected_path.exists()
+
+
+def test_fetch_with_oumi_prefix_and_env_dir(app, mock_requests, monkeypatch):
+ with tempfile.TemporaryDirectory() as temp_dir:
+ # Given
+ config_path = "oumi://configs/recipes/smollm/inference/135m_infer.yaml"
+ expected_path = (
+ Path(temp_dir) / "configs/recipes/smollm/inference/135m_infer.yaml"
+ )
+ monkeypatch.setenv("OUMI_DIR", temp_dir)
+
+ # When
+ result = runner.invoke(app, [config_path])
+
+ # Then
+ assert result.exit_code == 0
+ mock_requests.get.assert_called_once()
+ assert expected_path.exists()
+
+
+def test_fetch_without_prefix_and_env_dir(app, mock_requests, monkeypatch):
+ with tempfile.TemporaryDirectory() as temp_dir:
+ # Given
+ config_path = (
+ "configs/recipes/smollm/inference/135m_infer.yaml" # No oumi:// prefix
+ )
+ expected_path = (
+ Path(temp_dir) / "configs/recipes/smollm/inference/135m_infer.yaml"
+ )
+ monkeypatch.setenv("OUMI_DIR", temp_dir)
+
+ # When
+ result = runner.invoke(app, [config_path])
+
+ # Then
+ assert result.exit_code == 0
+ mock_requests.get.assert_called_once()
+ assert expected_path.exists()
+
+
+def test_fetch_with_oumi_prefix_and_default_dir(app, mock_requests, monkeypatch):
+ # Given
+ config_path = "oumi://configs/recipes/smollm/inference/135m_infer.yaml"
+ expected_path = (
+ Path.home() / ".oumi/configs/configs/recipes/smollm/inference/135m_infer.yaml"
+ )
+ monkeypatch.delenv("OUMI_DIR", raising=False)
+
+ # When
+ result = runner.invoke(app, [config_path])
+
+ # Then
+ assert result.exit_code == 0
+ mock_requests.get.assert_called_once()
+ assert expected_path.exists()
+
+ # Cleanup
+ if expected_path.exists():
+ expected_path.unlink()
+ if expected_path.parent.exists():
+ expected_path.parent.rmdir()
+
+
+def test_fetch_without_prefix_and_default_dir(app, mock_requests, monkeypatch):
+ # Given
+ config_path = (
+ "configs/recipes/smollm/inference/135m_infer.yaml" # No oumi:// prefix
+ )
+ expected_path = (
+ Path.home() / ".oumi/configs/configs/recipes/smollm/inference/135m_infer.yaml"
+ )
+ monkeypatch.delenv("OUMI_DIR", raising=False)
+
+ # When
+ result = runner.invoke(app, [config_path])
+
+ # Then
+ assert result.exit_code == 0
+ mock_requests.get.assert_called_once()
+ assert expected_path.exists()
+
+ # Cleanup
+ if expected_path.exists():
+ expected_path.unlink()
+ if expected_path.parent.exists():
+ expected_path.parent.rmdir()
+
+
+def test_fetch_with_existing_file_no_force(app, mock_requests):
+ with tempfile.TemporaryDirectory() as temp_dir:
+ # Given
+ output_dir = Path(temp_dir)
+ config_path = "oumi://configs/recipes/smollm/inference/135m_infer.yaml"
+ expected_path = output_dir / "configs/recipes/smollm/inference/135m_infer.yaml"
+
+ # Create existing file
+ expected_path.parent.mkdir(parents=True)
+ expected_path.write_text("existing content")
+
+ # When
+ result = runner.invoke(
+ app, [config_path, "-o", str(output_dir)], catch_exceptions=False
+ )
+ print(result)
+ # Then
+ assert result.exit_code == 1
+ assert "Use --force to overwrite" in result.output
+ assert mock_requests.get.call_count == 0
+ assert expected_path.read_text() == "existing content"
+
+
+def test_fetch_with_existing_file_force(app, mock_requests):
+ with tempfile.TemporaryDirectory() as temp_dir:
+ # Given
+ output_dir = Path(temp_dir)
+ config_path = "oumi://configs/recipes/smollm/inference/135m_infer.yaml"
+ expected_path = output_dir / "configs/recipes/smollm/inference/135m_infer.yaml"
+
+ # Create existing file
+ expected_path.parent.mkdir(parents=True)
+ expected_path.write_text("existing content")
+
+ # When
+ result = runner.invoke(app, [config_path, "-o", str(output_dir), "--force"])
+
+ # Then
+ assert result.exit_code == 0
+ mock_requests.get.assert_called_once()
+ assert expected_path.exists()
+ assert expected_path.read_text() == "key: value" # From mock_response
diff --git a/tests/unit/cli/test_cli_main.py b/tests/unit/cli/test_cli_main.py
index eee903c58a..be53378cd1 100644
--- a/tests/unit/cli/test_cli_main.py
+++ b/tests/unit/cli/test_cli_main.py
@@ -8,6 +8,7 @@
from oumi.cli.distributed_run import accelerate, torchrun
from oumi.cli.env import env
from oumi.cli.evaluate import evaluate
+from oumi.cli.fetch import fetch
from oumi.cli.infer import infer
from oumi.cli.judge import conversations, dataset
from oumi.cli.launch import cancel, down, status, stop, up, which
@@ -49,6 +50,13 @@ def mock_infer():
yield m_infer
[email protected]
+def mock_fetch():
+ with patch("oumi.cli.main.fetch") as m_fetch:
+ _copy_command(m_fetch, fetch)
+ yield m_fetch
+
+
@pytest.fixture
def mock_down():
with patch("oumi.cli.main.down") as m_down:
@@ -147,6 +155,11 @@ def test_main_infer_registered(mock_infer):
mock_infer.assert_called_once()
+def test_main_fetch_registered(mock_fetch):
+ _ = runner.invoke(get_app(), ["fetch", "some/path", "--output-dir", "output/path"])
+ mock_fetch.assert_called_once()
+
+
def test_main_eval_registered(mock_eval):
_ = runner.invoke(
get_app(), ["eval", "--config", "some/path", "--allow_extra" "args"]
diff --git a/tests/unit/cli/test_cli_utils.py b/tests/unit/cli/test_cli_utils.py
index 010eded49f..46adee8664 100644
--- a/tests/unit/cli/test_cli_utils.py
+++ b/tests/unit/cli/test_cli_utils.py
@@ -1,4 +1,5 @@
import os
+from pathlib import Path
from unittest import mock
import pytest
@@ -11,6 +12,7 @@
LogLevel,
configure_common_env_vars,
parse_extra_cli_args,
+ resolve_oumi_prefix,
)
@@ -125,3 +127,27 @@ def test_configure_common_env_vars_fully_preconfigured():
"ACCELERATE_LOG_LEVEL": "debug",
"TOKENIZERS_PARALLELISM": "true",
}
+
+
+def test_resolve_oumi_prefix():
+ # Test with oumi:// prefix
+ path, dir = resolve_oumi_prefix("oumi://configs/test.yaml")
+ assert path == "configs/test.yaml"
+ assert dir == Path("~/.oumi/configs").expanduser()
+
+ # Test without prefix
+ path, dir = resolve_oumi_prefix("configs/test.yaml")
+ assert path == "configs/test.yaml"
+ assert dir == Path("~/.oumi/configs").expanduser()
+
+ # Test with custom output dir
+ output_dir = Path("/tmp/custom")
+ path, dir = resolve_oumi_prefix("oumi://configs/test.yaml", output_dir)
+ assert path == "configs/test.yaml"
+ assert dir == output_dir
+
+ # Test with OUMI_DIR environment variable
+ with mock.patch.dict(os.environ, {"OUMI_DIR": "/tmp/env"}):
+ path, dir = resolve_oumi_prefix("configs/test.yaml")
+ assert path == "configs/test.yaml"
+ assert dir == Path("/tmp/env")
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
}
|
skypilot-org__skypilot-6221@dbf1369
|
skypilot-org/skypilot
|
Python
| 6,221
|
Fix db init race condition
|
<!-- Describe the changes in this PR -->
Close https://github.com/skypilot-org/skypilot/issues/6220
<!-- Describe the tests ran -->
<!-- Unit tests (tests/test_*.py) are part of GitHub CI; below are tests that launch on the cloud. -->
Tested (run the relevant ones):
- [x] Code formatting: install pre-commit (auto-check on commit) or `bash format.sh`
- [x] Unit test which repros on master and fixed on PR branch
- [ ] All smoke tests: `/smoke-test` (CI) or `pytest tests/test_smoke.py` (local)
- [ ] Relevant individual tests: `/smoke-test -k test_name` (CI) or `pytest tests/test_smoke.py::test_name` (local)
- [ ] Backward compatibility: `/quicktest-core` (CI) or `pytest tests/smoke_tests/test_backward_compat.py` (local)
<!-- CI commands (/-prefixed) can only be triggered by repo members -->
|
2025-07-10T16:50:35Z
|
[Core] DB init race condition
```
def initialize_and_get_db() -> sqlalchemy.engine.Engine:
global _SQLALCHEMY_ENGINE
if _SQLALCHEMY_ENGINE is not None:
return _SQLALCHEMY_ENGINE
with _DB_INIT_LOCK:
if _SQLALCHEMY_ENGINE is None:
```
Problematic timeline:
```
thread 1 -> AcquireLock -> set _SQLALCHEMY_ENGINE ----------------------------------> create table
thread 2 -----------------------------------------------> check _SQLALCHEMY_ENGINE and return, table not initialized
```
For concurrent requests, a thread may get `_SQLALCHEMY_ENGINE` in a partially initialized state, which causes a table-not-found error.
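
A minimal, self-contained sketch (illustrative only, not SkyPilot's actual module) of the fix pattern implied above: do all initialization on a local variable and publish the module-level global only after table creation has finished, so the unlocked fast path can never observe a half-initialized engine.

```python
import threading

_ENGINE = None                       # module-level cache, published only when ready
_INIT_LOCK = threading.Lock()


def _build_engine():
    """Stand-in for creating the SQLAlchemy engine and running create_table()."""
    return {"tables_created": True}


def initialize_and_get_db():
    global _ENGINE
    if _ENGINE is not None:           # unlocked fast path only ever sees a finished engine
        return _ENGINE
    with _INIT_LOCK:
        if _ENGINE is None:
            engine = _build_engine()  # initialize on a local variable first
            _ENGINE = engine          # publish as the very last step
    return _ENGINE


if __name__ == "__main__":
    threads = [threading.Thread(target=initialize_and_get_db) for _ in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(initialize_and_get_db())
```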
|
[
{
"body": "```\ndef initialize_and_get_db() -> sqlalchemy.engine.Engine:\n global _SQLALCHEMY_ENGINE\n if _SQLALCHEMY_ENGINE is not None:\n return _SQLALCHEMY_ENGINE\n with _DB_INIT_LOCK:\n if _SQLALCHEMY_ENGINE is None:\n```\n\nProblematic timeline:\n\n```\nthread 1 -> AcquireLock -> set _SQLALCHEMY_ENGINE ----------------------------------> create table\nthread 2 -----------------------------------------------> check _SQLALCHEMY_ENGINE and return, table not initialized\n```\n\nFor concurrent request, a thread may get `_SQLALCHEMY_ENGINE` with partial state, which cause table not found error.",
"number": 6220,
"title": "[Core] DB init race condition"
}
] |
c7b073165d36ed85f3b83df6624b439bd38b60fa
|
{
"head_commit": "dbf13691b5adee71f38ad1ead817564e63202741",
"head_commit_message": "Fix\n\nSigned-off-by: Aylei <[email protected]>",
"patch_to_review": "diff --git a/sky/global_user_state.py b/sky/global_user_state.py\nindex 6559b6a0ccc..14543ef75fa 100644\n--- a/sky/global_user_state.py\n+++ b/sky/global_user_state.py\n@@ -220,17 +220,16 @@ def replace_char_class(match):\n return like_pattern\n \n \n-def create_table():\n+def create_table(engine: sqlalchemy.engine.Engine):\n # Enable WAL mode to avoid locking issues.\n # See: issue #1441 and PR #1509\n # https://github.com/microsoft/WSL/issues/2395\n # TODO(romilb): We do not enable WAL for WSL because of known issue in WSL.\n # This may cause the database locked problem from WSL issue #1441.\n- if (_SQLALCHEMY_ENGINE.dialect.name\n- == db_utils.SQLAlchemyDialect.SQLITE.value and\n+ if (engine.dialect.name == db_utils.SQLAlchemyDialect.SQLITE.value and\n not common_utils.is_wsl()):\n try:\n- with orm.Session(_SQLALCHEMY_ENGINE) as session:\n+ with orm.Session(engine) as session:\n session.execute(sqlalchemy.text('PRAGMA journal_mode=WAL'))\n session.commit()\n except sqlalchemy_exc.OperationalError as e:\n@@ -240,12 +239,12 @@ def create_table():\n # is not critical and is likely to be enabled by other processes.\n \n # Create tables if they don't exist\n- db_utils.add_tables_to_db_sqlalchemy(Base.metadata, _SQLALCHEMY_ENGINE)\n+ db_utils.add_tables_to_db_sqlalchemy(Base.metadata, engine)\n \n # For backward compatibility.\n # TODO(zhwu): Remove this function after all users have migrated to\n # the latest version of SkyPilot.\n- with orm.Session(_SQLALCHEMY_ENGINE) as session:\n+ with orm.Session(engine) as session:\n # Add autostop column to clusters table\n db_utils.add_column_to_table_sqlalchemy(session,\n 'clusters',\n@@ -391,15 +390,15 @@ def initialize_and_get_db() -> sqlalchemy.engine.Engine:\n conn_string = skypilot_config.get_nested(('db',), None)\n if conn_string:\n logger.debug(f'using db URI from {conn_string}')\n- _SQLALCHEMY_ENGINE = sqlalchemy.create_engine(\n- conn_string, poolclass=sqlalchemy.NullPool)\n+ engine = sqlalchemy.create_engine(conn_string,\n+ poolclass=sqlalchemy.NullPool)\n else:\n db_path = os.path.expanduser('~/.sky/state.db')\n pathlib.Path(db_path).parents[0].mkdir(parents=True,\n exist_ok=True)\n- _SQLALCHEMY_ENGINE = sqlalchemy.create_engine('sqlite:///' +\n- db_path)\n- create_table()\n+ engine = sqlalchemy.create_engine('sqlite:///' + db_path)\n+ create_table(engine)\n+ _SQLALCHEMY_ENGINE = engine\n return _SQLALCHEMY_ENGINE\n \n \ndiff --git a/sky/jobs/state.py b/sky/jobs/state.py\nindex aac6ca82f20..5e437686fb9 100644\n--- a/sky/jobs/state.py\n+++ b/sky/jobs/state.py\n@@ -112,17 +112,16 @@\n )\n \n \n-def create_table():\n+def create_table(engine: sqlalchemy.engine.Engine):\n # Enable WAL mode to avoid locking issues.\n # See: issue #3863, #1441 and PR #1509\n # https://github.com/microsoft/WSL/issues/2395\n # TODO(romilb): We do not enable WAL for WSL because of known issue in WSL.\n # This may cause the database locked problem from WSL issue #1441.\n- if (_SQLALCHEMY_ENGINE.dialect.name\n- == db_utils.SQLAlchemyDialect.SQLITE.value and\n+ if (engine.dialect.name == db_utils.SQLAlchemyDialect.SQLITE.value and\n not common_utils.is_wsl()):\n try:\n- with orm.Session(_SQLALCHEMY_ENGINE) as session:\n+ with orm.Session(engine) as session:\n session.execute(sqlalchemy.text('PRAGMA journal_mode=WAL'))\n session.commit()\n except sqlalchemy_exc.OperationalError as e:\n@@ -132,10 +131,10 @@ def create_table():\n # is not critical and is likely to be enabled by other processes.\n \n # Create tables if they don't exist\n- 
db_utils.add_tables_to_db_sqlalchemy(Base.metadata, _SQLALCHEMY_ENGINE)\n+ db_utils.add_tables_to_db_sqlalchemy(Base.metadata, engine)\n \n # Backward compatibility: add columns that not exist in older databases\n- with orm.Session(_SQLALCHEMY_ENGINE) as session:\n+ with orm.Session(engine) as session:\n db_utils.add_column_to_table_sqlalchemy(session, 'spot',\n 'failure_reason',\n sqlalchemy.Text())\n@@ -228,15 +227,15 @@ def initialize_and_get_db() -> sqlalchemy.engine.Engine:\n conn_string = skypilot_config.get_nested(('db',), None)\n if conn_string:\n logger.debug(f'using db URI from {conn_string}')\n- _SQLALCHEMY_ENGINE = sqlalchemy.create_engine(\n- conn_string, poolclass=sqlalchemy.NullPool)\n+ engine = sqlalchemy.create_engine(conn_string,\n+ poolclass=sqlalchemy.NullPool)\n else:\n db_path = os.path.expanduser('~/.sky/spot_jobs.db')\n pathlib.Path(db_path).parents[0].mkdir(parents=True,\n exist_ok=True)\n- _SQLALCHEMY_ENGINE = sqlalchemy.create_engine('sqlite:///' +\n- db_path)\n- create_table()\n+ engine = sqlalchemy.create_engine('sqlite:///' + db_path)\n+ create_table(engine)\n+ _SQLALCHEMY_ENGINE = engine\n return _SQLALCHEMY_ENGINE\n \n \ndiff --git a/tests/test_failover.py b/tests/test_failover.py\nindex 40408887315..1f8a8e09c17 100644\n--- a/tests/test_failover.py\n+++ b/tests/test_failover.py\n@@ -28,7 +28,7 @@ def _mock_db_conn(tmp_path, monkeypatch):\n monkeypatch.setattr(global_user_state, '_SQLALCHEMY_ENGINE',\n sqlalchemy_engine)\n \n- global_user_state.create_table()\n+ global_user_state.create_table(sqlalchemy_engine)\n \n \n @pytest.mark.parametrize('enable_all_clouds', [[sky.AWS()]], indirect=True)\ndiff --git a/tests/test_global_user_state.py b/tests/test_global_user_state.py\nindex 7189e02a336..fb8f29b68ef 100644\n--- a/tests/test_global_user_state.py\n+++ b/tests/test_global_user_state.py\n@@ -1,8 +1,11 @@\n import sys\n+import threading\n+from unittest import mock\n \n import pytest\n \n import sky\n+from sky import global_user_state\n \n \n @pytest.mark.skipif(sys.platform != 'linux', reason='Only test in CI.')\n@@ -10,3 +13,54 @@ def test_enabled_clouds_empty():\n # In test environment, no cloud should be enabled.\n assert sky.global_user_state.get_cached_enabled_clouds(\n sky.clouds.cloud.CloudCapability.COMPUTE, workspace='default') == []\n+\n+\n+def test_concurrent_database_initializationd(tmp_path):\n+ \"\"\"Test that concurrent database initialization.\"\"\"\n+ # Store original state to restore later\n+ with mock.patch.object(global_user_state, '_SQLALCHEMY_ENGINE', None):\n+\n+ temp_db_path = tmp_path / 'state.db'\n+ results = []\n+ num_threads = 5\n+\n+ def worker_thread(thread_id):\n+ \"\"\"Worker thread that initializes DB and performs operations\"\"\"\n+ try:\n+ with mock.patch('os.path.expanduser',\n+ return_value=str(temp_db_path)):\n+ # Force database initialization\n+ global_user_state.initialize_and_get_db()\n+\n+ # Immediately try to use the database\n+ # This should work if tables are properly created\n+ global_user_state.get_user(f\"test_user_{thread_id}\")\n+\n+ results.append((thread_id, \"SUCCESS\", None))\n+ except Exception as e:\n+ results.append((thread_id, \"FAILED\", e))\n+\n+ threads = []\n+ for i in range(num_threads):\n+ t = threading.Thread(target=worker_thread, args=(i,))\n+ threads.append(t)\n+\n+ for t in threads:\n+ t.start()\n+\n+ for t in threads:\n+ t.join(timeout=10)\n+\n+ successes = [r for r in results if r[1] == \"SUCCESS\"]\n+ failures = [r for r in results if r[1] == \"FAILED\"]\n+\n+ print(f\"Results: 
{len(successes)} successes, {len(failures)} failures\")\n+ if failures:\n+ print(\"Failures:\")\n+ for thread_id, status, error in failures:\n+ print(f\" Thread {thread_id}: {error}\")\n+\n+ assert len(failures) == 0, (\n+ f\"Race condition detected: {len(failures)} threads failed. \"\n+ f\"This indicates the database initialization has race conditions. \"\n+ f\"Failures: {[(f[0], str(f[2])) for f in failures]}\")\ndiff --git a/tests/test_jobs.py b/tests/test_jobs.py\nindex f2e33729741..1136d9878de 100644\n--- a/tests/test_jobs.py\n+++ b/tests/test_jobs.py\n@@ -19,7 +19,7 @@ def _mock_db_conn(tmp_path, monkeypatch):\n monkeypatch.setattr(global_user_state, '_SQLALCHEMY_ENGINE',\n sqlalchemy_engine)\n \n- global_user_state.create_table()\n+ global_user_state.create_table(sqlalchemy_engine)\n \n \n @pytest.fixture\ndiff --git a/tests/test_jobs_and_serve.py b/tests/test_jobs_and_serve.py\nindex d13431ef22f..e9e8f2dfa53 100644\n--- a/tests/test_jobs_and_serve.py\n+++ b/tests/test_jobs_and_serve.py\n@@ -47,7 +47,7 @@ def _mock_db_conn(tmp_path, monkeypatch):\n monkeypatch.setattr(global_user_state, '_SQLALCHEMY_ENGINE',\n sqlalchemy_engine)\n \n- global_user_state.create_table()\n+ global_user_state.create_table(sqlalchemy_engine)\n \n \n def _generate_tmp_yaml(tmp_path, filename: str) -> str:\n"
}
|
[
{
"diff_hunk": "@@ -1,12 +1,66 @@\n import sys\n+import threading\n+from unittest import mock\n \n import pytest\n \n import sky\n+from sky import global_user_state\n \n \n @pytest.mark.skipif(sys.platform != 'linux', reason='Only test in CI.')\n def test_enabled_clouds_empty():\n # In test environment, no cloud should be enabled.\n assert sky.global_user_state.get_cached_enabled_clouds(\n sky.clouds.cloud.CloudCapability.COMPUTE, workspace='default') == []\n+\n+\n+def test_concurrent_database_initializationd(tmp_path):\n+ \"\"\"Test that concurrent database initialization.\"\"\"\n+ # Store original state to restore later\n+ with mock.patch.object(global_user_state, '_SQLALCHEMY_ENGINE', None):\n+\n+ temp_db_path = tmp_path / 'state.db'\n+ results = []\n+ num_threads = 5\n+\n+ def worker_thread(thread_id):\n+ \"\"\"Worker thread that initializes DB and performs operations\"\"\"\n+ try:\n+ with mock.patch('os.path.expanduser',\n+ return_value=str(temp_db_path)):\n+ # Force database initialization\n+ global_user_state.initialize_and_get_db()\n+\n+ # Immediately try to use the database\n+ # This should work if tables are properly created\n+ global_user_state.get_user(f\"test_user_{thread_id}\")",
"line": null,
"original_line": 37,
"original_start_line": null,
"path": "tests/test_global_user_state.py",
"start_line": null,
"text": "@user1:\nI'm not certain this query will succeed given the `_SQLALCHEMY_ENGINE` is patched to None at this point"
}
] |
72f76401cdc2a9a599bad4f746eb3393cc738882
|
diff --git a/sky/global_user_state.py b/sky/global_user_state.py
index 6559b6a0ccc..14543ef75fa 100644
--- a/sky/global_user_state.py
+++ b/sky/global_user_state.py
@@ -220,17 +220,16 @@ def replace_char_class(match):
return like_pattern
-def create_table():
+def create_table(engine: sqlalchemy.engine.Engine):
# Enable WAL mode to avoid locking issues.
# See: issue #1441 and PR #1509
# https://github.com/microsoft/WSL/issues/2395
# TODO(romilb): We do not enable WAL for WSL because of known issue in WSL.
# This may cause the database locked problem from WSL issue #1441.
- if (_SQLALCHEMY_ENGINE.dialect.name
- == db_utils.SQLAlchemyDialect.SQLITE.value and
+ if (engine.dialect.name == db_utils.SQLAlchemyDialect.SQLITE.value and
not common_utils.is_wsl()):
try:
- with orm.Session(_SQLALCHEMY_ENGINE) as session:
+ with orm.Session(engine) as session:
session.execute(sqlalchemy.text('PRAGMA journal_mode=WAL'))
session.commit()
except sqlalchemy_exc.OperationalError as e:
@@ -240,12 +239,12 @@ def create_table():
# is not critical and is likely to be enabled by other processes.
# Create tables if they don't exist
- db_utils.add_tables_to_db_sqlalchemy(Base.metadata, _SQLALCHEMY_ENGINE)
+ db_utils.add_tables_to_db_sqlalchemy(Base.metadata, engine)
# For backward compatibility.
# TODO(zhwu): Remove this function after all users have migrated to
# the latest version of SkyPilot.
- with orm.Session(_SQLALCHEMY_ENGINE) as session:
+ with orm.Session(engine) as session:
# Add autostop column to clusters table
db_utils.add_column_to_table_sqlalchemy(session,
'clusters',
@@ -391,15 +390,15 @@ def initialize_and_get_db() -> sqlalchemy.engine.Engine:
conn_string = skypilot_config.get_nested(('db',), None)
if conn_string:
logger.debug(f'using db URI from {conn_string}')
- _SQLALCHEMY_ENGINE = sqlalchemy.create_engine(
- conn_string, poolclass=sqlalchemy.NullPool)
+ engine = sqlalchemy.create_engine(conn_string,
+ poolclass=sqlalchemy.NullPool)
else:
db_path = os.path.expanduser('~/.sky/state.db')
pathlib.Path(db_path).parents[0].mkdir(parents=True,
exist_ok=True)
- _SQLALCHEMY_ENGINE = sqlalchemy.create_engine('sqlite:///' +
- db_path)
- create_table()
+ engine = sqlalchemy.create_engine('sqlite:///' + db_path)
+ create_table(engine)
+ _SQLALCHEMY_ENGINE = engine
return _SQLALCHEMY_ENGINE
diff --git a/sky/jobs/state.py b/sky/jobs/state.py
index aac6ca82f20..5e437686fb9 100644
--- a/sky/jobs/state.py
+++ b/sky/jobs/state.py
@@ -112,17 +112,16 @@
)
-def create_table():
+def create_table(engine: sqlalchemy.engine.Engine):
# Enable WAL mode to avoid locking issues.
# See: issue #3863, #1441 and PR #1509
# https://github.com/microsoft/WSL/issues/2395
# TODO(romilb): We do not enable WAL for WSL because of known issue in WSL.
# This may cause the database locked problem from WSL issue #1441.
- if (_SQLALCHEMY_ENGINE.dialect.name
- == db_utils.SQLAlchemyDialect.SQLITE.value and
+ if (engine.dialect.name == db_utils.SQLAlchemyDialect.SQLITE.value and
not common_utils.is_wsl()):
try:
- with orm.Session(_SQLALCHEMY_ENGINE) as session:
+ with orm.Session(engine) as session:
session.execute(sqlalchemy.text('PRAGMA journal_mode=WAL'))
session.commit()
except sqlalchemy_exc.OperationalError as e:
@@ -132,10 +131,10 @@ def create_table():
# is not critical and is likely to be enabled by other processes.
# Create tables if they don't exist
- db_utils.add_tables_to_db_sqlalchemy(Base.metadata, _SQLALCHEMY_ENGINE)
+ db_utils.add_tables_to_db_sqlalchemy(Base.metadata, engine)
# Backward compatibility: add columns that not exist in older databases
- with orm.Session(_SQLALCHEMY_ENGINE) as session:
+ with orm.Session(engine) as session:
db_utils.add_column_to_table_sqlalchemy(session, 'spot',
'failure_reason',
sqlalchemy.Text())
@@ -228,15 +227,15 @@ def initialize_and_get_db() -> sqlalchemy.engine.Engine:
conn_string = skypilot_config.get_nested(('db',), None)
if conn_string:
logger.debug(f'using db URI from {conn_string}')
- _SQLALCHEMY_ENGINE = sqlalchemy.create_engine(
- conn_string, poolclass=sqlalchemy.NullPool)
+ engine = sqlalchemy.create_engine(conn_string,
+ poolclass=sqlalchemy.NullPool)
else:
db_path = os.path.expanduser('~/.sky/spot_jobs.db')
pathlib.Path(db_path).parents[0].mkdir(parents=True,
exist_ok=True)
- _SQLALCHEMY_ENGINE = sqlalchemy.create_engine('sqlite:///' +
- db_path)
- create_table()
+ engine = sqlalchemy.create_engine('sqlite:///' + db_path)
+ create_table(engine)
+ _SQLALCHEMY_ENGINE = engine
return _SQLALCHEMY_ENGINE
diff --git a/tests/test_failover.py b/tests/test_failover.py
index 40408887315..1f8a8e09c17 100644
--- a/tests/test_failover.py
+++ b/tests/test_failover.py
@@ -28,7 +28,7 @@ def _mock_db_conn(tmp_path, monkeypatch):
monkeypatch.setattr(global_user_state, '_SQLALCHEMY_ENGINE',
sqlalchemy_engine)
- global_user_state.create_table()
+ global_user_state.create_table(sqlalchemy_engine)
@pytest.mark.parametrize('enable_all_clouds', [[sky.AWS()]], indirect=True)
diff --git a/tests/test_jobs.py b/tests/test_jobs.py
index f2e33729741..1136d9878de 100644
--- a/tests/test_jobs.py
+++ b/tests/test_jobs.py
@@ -19,7 +19,7 @@ def _mock_db_conn(tmp_path, monkeypatch):
monkeypatch.setattr(global_user_state, '_SQLALCHEMY_ENGINE',
sqlalchemy_engine)
- global_user_state.create_table()
+ global_user_state.create_table(sqlalchemy_engine)
@pytest.fixture
diff --git a/tests/test_jobs_and_serve.py b/tests/test_jobs_and_serve.py
index d13431ef22f..e9e8f2dfa53 100644
--- a/tests/test_jobs_and_serve.py
+++ b/tests/test_jobs_and_serve.py
@@ -47,7 +47,7 @@ def _mock_db_conn(tmp_path, monkeypatch):
monkeypatch.setattr(global_user_state, '_SQLALCHEMY_ENGINE',
sqlalchemy_engine)
- global_user_state.create_table()
+ global_user_state.create_table(sqlalchemy_engine)
def _generate_tmp_yaml(tmp_path, filename: str) -> str:
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
}
|
|
oumi-ai__oumi-66@9613e04
|
oumi-ai/oumi
|
Python
| 66
|
Add text_col param that's required for SFTTrainer
|
- Replaced the usage of `data.trainer_kwargs["dataset_text_field"]` with `data.text_col` since it's a required field for SFTTrainer
- Changed the default TrainerType to HF. IMO this is more logical as a default value. More practically, since `text_col` is now a required field for SFTTrainer, keeping TRL_SFT as the default would make a bare `TrainingConfig()` impossible in tests.
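
As a rough illustration of the resulting pattern, here is a minimal, standalone sketch of the cross-field validation and propagation this PR adds in `__post_init__` (simplified names and plain strings instead of the real lema/oumi dataclasses and `TrainerType` enum):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional


@dataclass
class DataParams:
    dataset_name: str = ""
    text_col: Optional[str] = None                     # column holding the training text
    trainer_kwargs: Dict[str, Any] = field(default_factory=dict)


@dataclass
class TrainingConfig:
    data: DataParams = field(default_factory=DataParams)
    trainer_type: str = "HF"                           # "TRL_SFT" requires text_col

    def __post_init__(self):
        # SFT-style trainers need the text column; the value is copied into the
        # kwargs that the underlying trainer actually consumes.
        if self.trainer_type == "TRL_SFT":
            if not self.data.text_col:
                raise ValueError("`text_col` must be specified for TRL_SFT.")
            self.data.trainer_kwargs["dataset_text_field"] = self.data.text_col


cfg = TrainingConfig(
    data=DataParams(dataset_name="yahma/alpaca-cleaned", text_col="prompt"),
    trainer_type="TRL_SFT",
)
print(cfg.data.trainer_kwargs)  # {'dataset_text_field': 'prompt'}
```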
|
2024-06-07T07:30:24Z
|
trainer_kwargs: avoid listing in config.data
The `trainer_kwargs` seem better suited to go under training rather than data. There is a subtle param listed there, `dataset_text_field`, which also applies on the inference side since it affects the data loaders.
|
I've been wondering about this as well. What about putting dataset_text_field under DataParams? That feels more logical to me.
|
[
{
"body": "The `trainer_kwargs` seems better fitting to go under training and not data. There is a subtle param listed there `dataset_text_field` which is applicable also on the inference side since it affects the loaders of the data.",
"number": 61,
"title": "trainer_kwargs: avoid listing in config.data"
}
] |
83b17dd5b9fe614b4f67367abe1a8835690b0309
|
{
"head_commit": "9613e046970e3bbabfc7f35128d184a616da1c18",
"head_commit_message": "Add text_col param that's required for SFTTrainer",
"patch_to_review": "diff --git a/.vscode/launch.json b/.vscode/launch.json\nindex 669b90841..5af7366cc 100644\n--- a/.vscode/launch.json\n+++ b/.vscode/launch.json\n@@ -9,10 +9,12 @@\n \"args\": [\n \"data.dataset_name=yahma/alpaca-cleaned\",\n \"data.preprocessing_function_name=alpaca\",\n- \"data.trainer_kwargs.dataset_text_field=prompt\",\n+ \"data.text_col=prompt\",\n \"model.model_name=openai-community/gpt2\",\n \"model.trust_remote_code=true\",\n- \"training.logging_steps=10\",\n+ \"training.trainer_type=TRL_SFT\",\n+ \"training.max_steps=3\",\n+ \"training.logging_steps=3\",\n \"training.enable_wandb=false\",\n \"training.enable_tensorboard=false\",\n \"training.output_dir=tmp\"\ndiff --git a/configs/lema/gpt2.pt.yaml b/configs/lema/gpt2.pt.yaml\nindex 0aa8dbc72..25536318a 100644\n--- a/configs/lema/gpt2.pt.yaml\n+++ b/configs/lema/gpt2.pt.yaml\n@@ -10,10 +10,10 @@ data:\n dataset_config: \"wikitext-2-raw-v1\"\n stream: True\n pack: True\n- trainer_kwargs:\n- dataset_text_field: \"text\"\n+ text_col: \"text\"\n \n training:\n+ trainer_type: TRL_SFT\n max_steps: 2\n enable_tensorboard: False\n output_dir: \"output/gpt2.pt\"\ndiff --git a/configs/lema/phi3.lora.yaml b/configs/lema/phi3.lora.yaml\nindex 6694b2267..967c01d50 100644\n--- a/configs/lema/phi3.lora.yaml\n+++ b/configs/lema/phi3.lora.yaml\n@@ -5,12 +5,12 @@ model:\n data:\n dataset_name: \"yahma/alpaca-cleaned\"\n preprocessing_function_name: \"alpaca\"\n- trainer_kwargs:\n- dataset_text_field: \"prompt\"\n+ text_col: \"prompt\"\n \n training:\n optimizer: \"adamw_torch\"\n use_peft: true\n+ trainer_type: TRL_SFT\n output_dir: \"output/phi3.lora\"\n \n peft:\ndiff --git a/configs/lema/phi3.sft.nvidia.24g.yaml b/configs/lema/phi3.sft.nvidia.24g.yaml\nindex 7e87c1306..21cdfa35f 100644\n--- a/configs/lema/phi3.sft.nvidia.24g.yaml\n+++ b/configs/lema/phi3.sft.nvidia.24g.yaml\n@@ -6,8 +6,7 @@ model:\n data:\n dataset_name: \"yahma/alpaca-cleaned\"\n preprocessing_function_name: \"alpaca\"\n- trainer_kwargs:\n- dataset_text_field: \"prompt\"\n+ text_col: \"prompt\"\n \n training:\n optimizer: \"adamw_torch\"\ndiff --git a/configs/lema/zephyr.7b/sft/config_full.yaml b/configs/lema/zephyr.7b/sft/config_full.yaml\nindex 020ed3216..b09c79b88 100644\n--- a/configs/lema/zephyr.7b/sft/config_full.yaml\n+++ b/configs/lema/zephyr.7b/sft/config_full.yaml\n@@ -15,12 +15,11 @@ data:\n split: train_sft\n stream: True\n pack: True\n+ text_col: \"text\"\n preprocessing_function_name: trl_sft_ultrachat_200k\n preprocessing_function_kwargs:\n num_proc: 6\n batched: False\n- trainer_kwargs:\n- dataset_text_field: text\n \n training:\n optimizer: adamw_torch\ndiff --git a/configs/lema/zephyr.7b/sft/config_qlora.yaml b/configs/lema/zephyr.7b/sft/config_qlora.yaml\nindex 43278ac18..5252605c8 100644\n--- a/configs/lema/zephyr.7b/sft/config_qlora.yaml\n+++ b/configs/lema/zephyr.7b/sft/config_qlora.yaml\n@@ -15,12 +15,11 @@ data:\n split: train_sft\n stream: True\n pack: True\n+ text_col: \"text\"\n preprocessing_function_name: trl_sft_ultrachat_200k\n preprocessing_function_kwargs:\n num_proc: 6\n batched: false\n- trainer_kwargs:\n- dataset_text_field: text\n \n training:\n optimizer: adamw_torch\ndiff --git a/notebooks/LeMa - Colab Setup Example.ipynb b/notebooks/LeMa - Colab Setup Example.ipynb\nindex ced4a397c..3c30618ee 100644\n--- a/notebooks/LeMa - Colab Setup Example.ipynb\t\n+++ b/notebooks/LeMa - Colab Setup Example.ipynb\t\n@@ -27,12 +27,13 @@\n },\n \"source\": [\n \"#### 1. 
Setting up read-only github token\\n\",\n- \"Since the Github repository is private, we need to generate a `read-only` user token scoped for the `lema` repo.\\n\",\n- \"1. In Github.com, go to `Settings -> Developer settings -> Personal access tokens -> Generate new token`\\n\",\n- \"2. See example [here](https://drive.google.com/file/d/1zxd8r7qkPfl34mfGK83m_13oLGFGghW1/view?usp=share_link) on how to fill the form. The only permission that should be granted is `Contents`, in `read-only` mode\\n\",\n- \"3. Add the github token to your colab environment secrets (Key icon in the left menu)\\n\",\n+ \"Since the Github repository is private, we need to generate a `Read-only` user token scoped for the `lema` repo.\\n\",\n+ \"1. In Github.com, go to `Settings -> Developer settings -> Personal access tokens -> Fine-grained tokens -> Generate new token`.\\n\",\n+ \"1. See example [here](https://drive.google.com/file/d/1zxd8r7qkPfl34mfGK83m_13oLGFGghW1/view?usp=share_link) on how to fill the form. The only permission that should be granted is `Repository permissions -> Contents -> Read-only`.\\n\",\n+ \"1. Click `Generate token`, copy the token, and save it somewhere safe (as you can't access it again).\\n\",\n+ \"1. Create a colab environment secret (Key icon in the left menu) with `repo-token` as the name and your token as the value.\\n\",\n \"\\n\",\n- \"This only needs to be done once!\"\n+ \"This only needs to be done once! However, you have to wait for the token request to be approved for this to work.\"\n ]\n },\n {\n@@ -55,7 +56,7 @@\n \"from google.colab import userdata\\n\",\n \"\\n\",\n \"github_repo_token = userdata.get(\\\"repo-token\\\") # Setup token in your notebook secrets\\n\",\n- \"github_username = \\\"<GITHUB_USERNAME>\\\" # Change your github username\\n\",\n+ \"github_username = \\\"<GITHUB_USERNAME>\\\" # Change to your github username\\n\",\n \"\\n\",\n \"!git clone https://$github_username:[email protected]/openlema/lema.git\"\n ]\n@@ -77,37 +78,50 @@\n },\n \"outputs\": [],\n \"source\": [\n- \"%pip install -e lema[all]\"\n+ \"%pip install -e 'lema[all]'\"\n ]\n },\n {\n \"cell_type\": \"markdown\",\n- \"metadata\": {\n- \"id\": \"3ASgNcAx0lZ_\"\n- },\n+ \"metadata\": {},\n \"source\": [\n- \"## Training\\n\",\n- \"Make sure to enable GPU runtime for faster training\"\n+ \"#### 4. 
Importing LeMa\"\n+ ]\n+ },\n+ {\n+ \"cell_type\": \"code\",\n+ \"execution_count\": null,\n+ \"metadata\": {},\n+ \"outputs\": [],\n+ \"source\": [\n+ \"import lema\\n\",\n+ \"from lema.core.types import (\\n\",\n+ \" DataParams,\\n\",\n+ \" EvaluationConfig,\\n\",\n+ \" ModelParams,\\n\",\n+ \" TrainerType,\\n\",\n+ \" TrainingConfig,\\n\",\n+ \" TrainingParams,\\n\",\n+ \")\"\n ]\n },\n {\n \"cell_type\": \"markdown\",\n \"metadata\": {\n- \"id\": \"SYSpXqvP0sbT\"\n+ \"id\": \"3ASgNcAx0lZ_\"\n },\n \"source\": [\n- \"#### Using `lema` module\"\n+ \"## Training\\n\",\n+ \"Make sure to enable GPU runtime for faster training.\"\n ]\n },\n {\n- \"cell_type\": \"code\",\n- \"execution_count\": null,\n+ \"cell_type\": \"markdown\",\n \"metadata\": {\n- \"id\": \"IYBkZmipsQL6\"\n+ \"id\": \"SYSpXqvP0sbT\"\n },\n- \"outputs\": [],\n \"source\": [\n- \"import lema\"\n+ \"#### Using `lema` module\"\n ]\n },\n {\n@@ -118,13 +132,22 @@\n },\n \"outputs\": [],\n \"source\": [\n- \"lema.train(\\n\",\n- \" model_name=\\\"microsoft/Phi-3-mini-4k-instruct\\\",\\n\",\n- \" dataset_name=\\\"yahma/alpaca-cleaned\\\",\\n\",\n- \" preprocessing_function_name=\\\"alpaca\\\",\\n\",\n- \" output_dir=\\\"train/\\\",\\n\",\n- \" trust_remote_code=True,\\n\",\n- \")\"\n+ \"config = TrainingConfig(\\n\",\n+ \" data=DataParams(\\n\",\n+ \" dataset_name=\\\"yahma/alpaca-cleaned\\\",\\n\",\n+ \" preprocessing_function_name=\\\"alpaca\\\",\\n\",\n+ \" text_col=\\\"prompt\\\",\\n\",\n+ \" ),\\n\",\n+ \" model=ModelParams(\\n\",\n+ \" model_name=\\\"microsoft/Phi-3-mini-4k-instruct\\\",\\n\",\n+ \" trust_remote_code=True,\\n\",\n+ \" ),\\n\",\n+ \" training=TrainingParams(\\n\",\n+ \" trainer_type=TrainerType.TRL_SFT,\\n\",\n+ \" output_dir=\\\"train/\\\",\\n\",\n+ \" ),\\n\",\n+ \")\\n\",\n+ \"lema.train(config)\"\n ]\n },\n {\n@@ -147,9 +170,10 @@\n \"!lema-train \\\\\\n\",\n \" \\\"data.dataset_name=yahma/alpaca-cleaned\\\" \\\\\\n\",\n \" \\\"data.preprocessing_function_name=alpaca\\\" \\\\\\n\",\n- \" \\\"data.trainer_kwargs.dataset_text_field=prompt\\\" \\\\\\n\",\n+ \" \\\"data.text_col=prompt\\\" \\\\\\n\",\n \" \\\"model.model_name=microsoft/Phi-3-mini-4k-instruct\\\" \\\\\\n\",\n \" \\\"model.trust_remote_code=true\\\" \\\\\\n\",\n+ \" \\\"training.trainer_type=TRL_SFT/\\\" \\\\\\n\",\n \" \\\"training.output_dir=train/\\\"\"\n ]\n },\n@@ -177,13 +201,18 @@\n },\n \"outputs\": [],\n \"source\": [\n- \"lema.evaluate(\\n\",\n- \" model_name=\\\"train/best.pt\\\", # model output\\n\",\n- \" dataset_name=\\\"yahma/alpaca-cleaned\\\",\\n\",\n- \" preprocessing_function_name=\\\"alpaca\\\",\\n\",\n- \" output_dir=\\\"eval/\\\",\\n\",\n- \" trust_remote_code=True,\\n\",\n- \")\"\n+ \"config = EvaluationConfig(\\n\",\n+ \" data=DataParams(\\n\",\n+ \" dataset_name=\\\"yahma/alpaca-cleaned\\\",\\n\",\n+ \" preprocessing_function_name=\\\"alpaca\\\",\\n\",\n+ \" ),\\n\",\n+ \" model=ModelParams(\\n\",\n+ \" model_name=\\\"train/best.pt\\\",\\n\",\n+ \" trust_remote_code=True,\\n\",\n+ \" ),\\n\",\n+ \")\\n\",\n+ \"\\n\",\n+ \"lema.evaluate(config)\"\n ]\n },\n {\n@@ -202,10 +231,9 @@\n \"!lema-evaluate \\\\\\n\",\n \" \\\"data.dataset_name=yahma/alpaca-cleaned\\\" \\\\\\n\",\n \" \\\"data.preprocessing_function_name=alpaca\\\" \\\\\\n\",\n- \" \\\"data.trainer_kwargs.dataset_text_field=prompt\\\" \\\\\\n\",\n+ \" \\\"data.text_col=prompt\\\" \\\\\\n\",\n \" \\\"model.model_name=microsoft/Phi-3-mini-4k-instruct\\\" \\\\\\n\",\n- \" \\\"model.trust_remote_code=true\\\" \\\\\\n\",\n- \" 
\\\"training.output_dir=eval/\\\"\"\n+ \" \\\"model.trust_remote_code=true\\\"\"\n ]\n }\n ],\ndiff --git a/scripts/finetune_sft_phi3.sh b/scripts/finetune_sft_phi3.sh\nindex c0bf44fd9..53eb2560f 100755\n--- a/scripts/finetune_sft_phi3.sh\n+++ b/scripts/finetune_sft_phi3.sh\n@@ -5,6 +5,6 @@ python -m lema.train \\\n \"model.model_name=microsoft/Phi-3-mini-4k-instruct\" \\\n \"data.dataset_name=yahma/alpaca-cleaned\" \\\n \"data.preprocessing_function_name=alpaca\" \\\n- \"data.trainer_kwargs.dataset_text_field=prompt\" \\\n+ \"data.text_col=prompt\" \\\n \"training.output_dir=train/\" \\\n \"model.trust_remote_code=true\"\ndiff --git a/src/lema/builders/data.py b/src/lema/builders/data.py\nindex dbe10961d..b2c02c2aa 100644\n--- a/src/lema/builders/data.py\n+++ b/src/lema/builders/data.py\n@@ -79,12 +79,16 @@ def build_dataset(\n dataset_kwargs = {}\n if config.model.model_max_length:\n dataset_kwargs[\"seq_length\"] = config.model.model_max_length\n-\n+ # Our preprocessing functions take a dict as input and return a dict as output.\n+ # formatting_func must return a str, so we fetch the target str from the dict.\n+ if preprocessing_fn:\n+ dataset_kwargs[\"formatting_func\"] = lambda x: preprocessing_fn(x)[\n+ data_params.text_col\n+ ]\n dataset = ConstantLengthDataset(\n tokenizer,\n dataset,\n- dataset_text_field=data_params.get_dataset_text_field(),\n- formatting_func=preprocessing_fn,\n+ dataset_text_field=data_params.text_col,\n **dataset_kwargs,\n )\n elif data_params.preprocessing_function_name:\ndiff --git a/src/lema/core/types.py b/src/lema/core/types.py\nindex baf2b4e70..0bd7352fa 100644\n--- a/src/lema/core/types.py\n+++ b/src/lema/core/types.py\n@@ -7,7 +7,7 @@\n from omegaconf import MISSING, OmegaConf\n from peft.utils.peft_types import TaskType\n \n-_DATASET_TEXT_FIELD = \"dataset_text_field\"\n+from lema.logging import logger\n \n \n #\n@@ -30,7 +30,7 @@ class TrainerType(Enum):\n class TrainingParams:\n optimizer: str = \"adamw_torch\"\n use_peft: bool = False\n- trainer_type: TrainerType = TrainerType.TRL_SFT\n+ trainer_type: TrainerType = TrainerType.HF\n enable_gradient_checkpointing: bool = False\n output_dir: str = \"output\"\n per_device_train_batch_size: int = 8\n@@ -137,29 +137,21 @@ def _default_factory_preprocessing_kwargs() -> dict:\n defaults[\"batched\"] = True # Note the default of huggingface is False.\n return defaults\n \n+ # The dataset column name containing the text to train on. 
Required for SFTTrainer.\n+ text_col: Optional[str] = None\n preprocessing_function_name: Optional[str] = None\n preprocessing_function_kwargs: Dict[str, Any] = field(\n default_factory=_default_factory_preprocessing_kwargs\n )\n trainer_kwargs: Dict[str, Any] = field(default_factory=dict)\n \n- def get_dataset_text_field(self) -> Optional[str]:\n- \"\"\"Get the `dataset_text_field` value if present.\"\"\"\n- return self.trainer_kwargs.get(_DATASET_TEXT_FIELD)\n-\n def __post_init__(self):\n- \"\"\"Verify params if packing is enabled.\"\"\"\n+ \"\"\"Verify params.\"\"\"\n if self.pack:\n if not self.stream:\n raise ValueError(\"`stream` must be enabled if `pack` is enabled.\")\n- if (\n- not self.preprocessing_function_name\n- and _DATASET_TEXT_FIELD not in self.trainer_kwargs\n- ):\n- raise ValueError(\n- \"Either `trainer_kwargs['dataset_text_field']` \"\n- \"or `preprocessing_function_name` must be specified.\"\n- )\n+ if not self.text_col:\n+ raise ValueError(\"`text_col` must be specified if `pack` is enabled.\")\n \n \n @dataclass\n@@ -269,6 +261,24 @@ class TrainingConfig(BaseConfig):\n training: TrainingParams = field(default_factory=TrainingParams)\n peft: PeftParams = field(default_factory=PeftParams)\n \n+ def __post_init__(self):\n+ \"\"\"Verify/populate params.\"\"\"\n+ if self.training.trainer_type == TrainerType.TRL_SFT:\n+ if not self.data.text_col:\n+ raise ValueError(\"`text_col` must be specified for TRL_SFT Trainer.\")\n+ existing_dataset_text_field = self.data.trainer_kwargs.get(\n+ \"dataset_text_field\"\n+ )\n+ if (\n+ existing_dataset_text_field is not None\n+ and existing_dataset_text_field != self.data.text_col\n+ ):\n+ logger.warning(\n+ \"Overriding existing `dataset_text_field` value \"\n+ f'\"{existing_dataset_text_field}\" with \"{self.data.text_col}\"'\n+ )\n+ self.data.trainer_kwargs[\"dataset_text_field\"] = self.data.text_col\n+\n \n @dataclass\n class EvaluationConfig(BaseConfig):\ndiff --git a/tests/test_evaluate.py b/tests/test_evaluate.py\nindex c69ef06f1..3683c4480 100644\n--- a/tests/test_evaluate.py\n+++ b/tests/test_evaluate.py\n@@ -9,9 +9,6 @@ def test_basic_evaluate():\n data=DataParams(\n dataset_name=\"yahma/alpaca-cleaned\",\n preprocessing_function_name=\"alpaca\",\n- trainer_kwargs={\n- \"dataset_text_field\": \"prompt\",\n- },\n ),\n model=ModelParams(\n model_name=\"openai-community/gpt2\",\ndiff --git a/tests/test_train.py b/tests/test_train.py\nindex e9a2719b1..47e57967c 100644\n--- a/tests/test_train.py\n+++ b/tests/test_train.py\n@@ -1,7 +1,13 @@\n import tempfile\n \n from lema import train\n-from lema.core.types import DataParams, ModelParams, TrainingConfig, TrainingParams\n+from lema.core.types import (\n+ DataParams,\n+ ModelParams,\n+ TrainerType,\n+ TrainingConfig,\n+ TrainingParams,\n+)\n \n \n def test_basic_train():\n@@ -11,15 +17,14 @@ def test_basic_train():\n data=DataParams(\n dataset_name=\"yahma/alpaca-cleaned\",\n preprocessing_function_name=\"alpaca\",\n- trainer_kwargs={\n- \"dataset_text_field\": \"prompt\",\n- },\n+ text_col=\"prompt\",\n ),\n model=ModelParams(\n model_name=\"openai-community/gpt2\",\n trust_remote_code=True,\n ),\n training=TrainingParams(\n+ trainer_type=TrainerType.TRL_SFT,\n max_steps=3,\n logging_steps=3,\n enable_wandb=False,\n@@ -38,9 +43,7 @@ def test_custom_train():\n data=DataParams(\n dataset_name=\"yahma/alpaca-cleaned\",\n preprocessing_function_name=\"alpaca\",\n- trainer_kwargs={\n- \"dataset_text_field\": \"prompt\",\n- },\n+ text_col=\"prompt\",\n ),\n model=ModelParams(\n 
model_name=\"learning-machines/sample\",\n@@ -48,6 +51,7 @@ def test_custom_train():\n trust_remote_code=False,\n ),\n training=TrainingParams(\n+ trainer_type=TrainerType.TRL_SFT,\n max_steps=3,\n logging_steps=3,\n enable_wandb=False,\n@@ -58,3 +62,32 @@ def test_custom_train():\n )\n \n train(config)\n+\n+\n+def test_pack_train():\n+ output_temp_dir = tempfile.mkdtemp()\n+\n+ config: TrainingConfig = TrainingConfig(\n+ data=DataParams(\n+ dataset_name=\"Salesforce/wikitext\",\n+ dataset_config=\"wikitext-2-raw-v1\",\n+ stream=True,\n+ pack=True,\n+ text_col=\"text\",\n+ ),\n+ model=ModelParams(\n+ model_name=\"openai-community/gpt2\",\n+ model_max_length=1024,\n+ trust_remote_code=True,\n+ ),\n+ training=TrainingParams(\n+ trainer_type=TrainerType.TRL_SFT,\n+ max_steps=1,\n+ logging_steps=1,\n+ enable_wandb=False,\n+ enable_tensorboard=False,\n+ output_dir=output_temp_dir,\n+ ),\n+ )\n+\n+ train(config)\n"
}
|
[
{
"diff_hunk": "@@ -27,12 +27,13 @@\n },\n \"source\": [\n \"#### 1. Setting up read-only github token\\n\",\n- \"Since the Github repository is private, we need to generate a `read-only` user token scoped for the `lema` repo.\\n\",\n- \"1. In Github.com, go to `Settings -> Developer settings -> Personal access tokens -> Generate new token`\\n\",\n- \"2. See example [here](https://drive.google.com/file/d/1zxd8r7qkPfl34mfGK83m_13oLGFGghW1/view?usp=share_link) on how to fill the form. The only permission that should be granted is `Contents`, in `read-only` mode\\n\",\n- \"3. Add the github token to your colab environment secrets (Key icon in the left menu)\\n\",\n+ \"Since the Github repository is private, we need to generate a `Read-only` user token scoped for the `lema` repo.\\n\",\n+ \"1. In Github.com, go to `Settings -> Developer settings -> Personal access tokens -> Fine-grained tokens -> Generate new token`.\\n\",\n+ \"1. See example [here](https://drive.google.com/file/d/1zxd8r7qkPfl34mfGK83m_13oLGFGghW1/view?usp=share_link) on how to fill the form. The only permission that should be granted is `Repository permissions -> Contents -> Read-only`.\\n\",\n+ \"1. Click `Generate token`, copy the token, and save it somewhere safe (as you can't access it again).\\n\",\n+ \"1. Create a colab environment secret (Key icon in the left menu) with `repo-token` as the name and your token as the value.\\n\",\n \"\\n\",\n- \"This only needs to be done once!\"\n+ \"This only needs to be done once! However, you have to wait for the token request to be approved for this to work.\"",
"line": null,
"original_line": 36,
"original_start_line": null,
"path": "notebooks/LeMa - Colab Setup Example.ipynb",
"start_line": null,
"text": "@user1:\nnit: for some reason I don't get any notifications when someone requests a token, could add a note about slacking me or Nikolai to have that approved quickly?"
}
] |
cf87fa367213f0b38ee390908791a660e51e1697
|
diff --git a/.vscode/launch.json b/.vscode/launch.json
index 669b90841f..5af7366ccf 100644
--- a/.vscode/launch.json
+++ b/.vscode/launch.json
@@ -9,10 +9,12 @@
"args": [
"data.dataset_name=yahma/alpaca-cleaned",
"data.preprocessing_function_name=alpaca",
- "data.trainer_kwargs.dataset_text_field=prompt",
+ "data.text_col=prompt",
"model.model_name=openai-community/gpt2",
"model.trust_remote_code=true",
- "training.logging_steps=10",
+ "training.trainer_type=TRL_SFT",
+ "training.max_steps=3",
+ "training.logging_steps=3",
"training.enable_wandb=false",
"training.enable_tensorboard=false",
"training.output_dir=tmp"
diff --git a/configs/lema/gpt2.pt.yaml b/configs/lema/gpt2.pt.yaml
index 0aa8dbc729..25536318aa 100644
--- a/configs/lema/gpt2.pt.yaml
+++ b/configs/lema/gpt2.pt.yaml
@@ -10,10 +10,10 @@ data:
dataset_config: "wikitext-2-raw-v1"
stream: True
pack: True
- trainer_kwargs:
- dataset_text_field: "text"
+ text_col: "text"
training:
+ trainer_type: TRL_SFT
max_steps: 2
enable_tensorboard: False
output_dir: "output/gpt2.pt"
diff --git a/configs/lema/phi3.lora.yaml b/configs/lema/phi3.lora.yaml
index 6694b22674..967c01d50d 100644
--- a/configs/lema/phi3.lora.yaml
+++ b/configs/lema/phi3.lora.yaml
@@ -5,12 +5,12 @@ model:
data:
dataset_name: "yahma/alpaca-cleaned"
preprocessing_function_name: "alpaca"
- trainer_kwargs:
- dataset_text_field: "prompt"
+ text_col: "prompt"
training:
optimizer: "adamw_torch"
use_peft: true
+ trainer_type: TRL_SFT
output_dir: "output/phi3.lora"
peft:
diff --git a/configs/lema/phi3.sft.nvidia.24g.yaml b/configs/lema/phi3.sft.nvidia.24g.yaml
index 7e87c1306a..21cdfa35f4 100644
--- a/configs/lema/phi3.sft.nvidia.24g.yaml
+++ b/configs/lema/phi3.sft.nvidia.24g.yaml
@@ -6,8 +6,7 @@ model:
data:
dataset_name: "yahma/alpaca-cleaned"
preprocessing_function_name: "alpaca"
- trainer_kwargs:
- dataset_text_field: "prompt"
+ text_col: "prompt"
training:
optimizer: "adamw_torch"
diff --git a/configs/lema/zephyr.7b/sft/config_full.yaml b/configs/lema/zephyr.7b/sft/config_full.yaml
index 020ed32166..b09c79b88d 100644
--- a/configs/lema/zephyr.7b/sft/config_full.yaml
+++ b/configs/lema/zephyr.7b/sft/config_full.yaml
@@ -15,12 +15,11 @@ data:
split: train_sft
stream: True
pack: True
+ text_col: "text"
preprocessing_function_name: trl_sft_ultrachat_200k
preprocessing_function_kwargs:
num_proc: 6
batched: False
- trainer_kwargs:
- dataset_text_field: text
training:
optimizer: adamw_torch
diff --git a/configs/lema/zephyr.7b/sft/config_qlora.yaml b/configs/lema/zephyr.7b/sft/config_qlora.yaml
index 43278ac18c..5252605c8a 100644
--- a/configs/lema/zephyr.7b/sft/config_qlora.yaml
+++ b/configs/lema/zephyr.7b/sft/config_qlora.yaml
@@ -15,12 +15,11 @@ data:
split: train_sft
stream: True
pack: True
+ text_col: "text"
preprocessing_function_name: trl_sft_ultrachat_200k
preprocessing_function_kwargs:
num_proc: 6
batched: false
- trainer_kwargs:
- dataset_text_field: text
training:
optimizer: adamw_torch
diff --git a/notebooks/LeMa - Colab Setup Example.ipynb b/notebooks/LeMa - Colab Setup Example.ipynb
index ced4a397c5..c5ab37aeda 100644
--- a/notebooks/LeMa - Colab Setup Example.ipynb
+++ b/notebooks/LeMa - Colab Setup Example.ipynb
@@ -27,10 +27,12 @@
},
"source": [
"#### 1. Setting up read-only github token\n",
- "Since the Github repository is private, we need to generate a `read-only` user token scoped for the `lema` repo.\n",
- "1. In Github.com, go to `Settings -> Developer settings -> Personal access tokens -> Generate new token`\n",
- "2. See example [here](https://drive.google.com/file/d/1zxd8r7qkPfl34mfGK83m_13oLGFGghW1/view?usp=share_link) on how to fill the form. The only permission that should be granted is `Contents`, in `read-only` mode\n",
- "3. Add the github token to your colab environment secrets (Key icon in the left menu)\n",
+ "Since the Github repository is private, we need to generate a `Read-only` user token scoped for the `lema` repo.\n",
+ "1. In Github.com, go to `Settings -> Developer settings -> Personal access tokens -> Fine-grained tokens -> Generate new token`.\n",
+ "1. See example [here](https://drive.google.com/file/d/1zxd8r7qkPfl34mfGK83m_13oLGFGghW1/view?usp=share_link) on how to fill the form. The only permission that should be granted is `Repository permissions -> Contents -> Read-only`.\n",
+ "1. Click `Generate token`, copy the token, and save it somewhere safe (as you can't access it again).\n",
+ "1. Message Oussama or Nikolai on Slack to get the token approved.\n",
+ "1. Create a colab environment secret (Key icon in the left menu) with `repo-token` as the name and your token as the value.\n",
"\n",
"This only needs to be done once!"
]
@@ -55,7 +57,7 @@
"from google.colab import userdata\n",
"\n",
"github_repo_token = userdata.get(\"repo-token\") # Setup token in your notebook secrets\n",
- "github_username = \"<GITHUB_USERNAME>\" # Change your github username\n",
+ "github_username = \"<GITHUB_USERNAME>\" # Change to your github username\n",
"\n",
"!git clone https://$github_username:[email protected]/openlema/lema.git"
]
@@ -77,37 +79,50 @@
},
"outputs": [],
"source": [
- "%pip install -e lema[all]"
+ "%pip install -e 'lema[all]'"
]
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "3ASgNcAx0lZ_"
- },
+ "metadata": {},
"source": [
- "## Training\n",
- "Make sure to enable GPU runtime for faster training"
+ "#### 4. Importing LeMa"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import lema\n",
+ "from lema.core.types import (\n",
+ " DataParams,\n",
+ " EvaluationConfig,\n",
+ " ModelParams,\n",
+ " TrainerType,\n",
+ " TrainingConfig,\n",
+ " TrainingParams,\n",
+ ")"
]
},
{
"cell_type": "markdown",
"metadata": {
- "id": "SYSpXqvP0sbT"
+ "id": "3ASgNcAx0lZ_"
},
"source": [
- "#### Using `lema` module"
+ "## Training\n",
+ "Make sure to enable GPU runtime for faster training."
]
},
{
- "cell_type": "code",
- "execution_count": null,
+ "cell_type": "markdown",
"metadata": {
- "id": "IYBkZmipsQL6"
+ "id": "SYSpXqvP0sbT"
},
- "outputs": [],
"source": [
- "import lema"
+ "#### Using `lema` module"
]
},
{
@@ -118,13 +133,22 @@
},
"outputs": [],
"source": [
- "lema.train(\n",
- " model_name=\"microsoft/Phi-3-mini-4k-instruct\",\n",
- " dataset_name=\"yahma/alpaca-cleaned\",\n",
- " preprocessing_function_name=\"alpaca\",\n",
- " output_dir=\"train/\",\n",
- " trust_remote_code=True,\n",
- ")"
+ "config = TrainingConfig(\n",
+ " data=DataParams(\n",
+ " dataset_name=\"yahma/alpaca-cleaned\",\n",
+ " preprocessing_function_name=\"alpaca\",\n",
+ " text_col=\"prompt\",\n",
+ " ),\n",
+ " model=ModelParams(\n",
+ " model_name=\"microsoft/Phi-3-mini-4k-instruct\",\n",
+ " trust_remote_code=True,\n",
+ " ),\n",
+ " training=TrainingParams(\n",
+ " trainer_type=TrainerType.TRL_SFT,\n",
+ " output_dir=\"train/\",\n",
+ " ),\n",
+ ")\n",
+ "lema.train(config)"
]
},
{
@@ -147,9 +171,10 @@
"!lema-train \\\n",
" \"data.dataset_name=yahma/alpaca-cleaned\" \\\n",
" \"data.preprocessing_function_name=alpaca\" \\\n",
- " \"data.trainer_kwargs.dataset_text_field=prompt\" \\\n",
+ " \"data.text_col=prompt\" \\\n",
" \"model.model_name=microsoft/Phi-3-mini-4k-instruct\" \\\n",
" \"model.trust_remote_code=true\" \\\n",
+ " \"training.trainer_type=TRL_SFT/\" \\\n",
" \"training.output_dir=train/\""
]
},
@@ -177,13 +202,18 @@
},
"outputs": [],
"source": [
- "lema.evaluate(\n",
- " model_name=\"train/best.pt\", # model output\n",
- " dataset_name=\"yahma/alpaca-cleaned\",\n",
- " preprocessing_function_name=\"alpaca\",\n",
- " output_dir=\"eval/\",\n",
- " trust_remote_code=True,\n",
- ")"
+ "config = EvaluationConfig(\n",
+ " data=DataParams(\n",
+ " dataset_name=\"yahma/alpaca-cleaned\",\n",
+ " preprocessing_function_name=\"alpaca\",\n",
+ " ),\n",
+ " model=ModelParams(\n",
+ " model_name=\"train/best.pt\",\n",
+ " trust_remote_code=True,\n",
+ " ),\n",
+ ")\n",
+ "\n",
+ "lema.evaluate(config)"
]
},
{
@@ -202,10 +232,9 @@
"!lema-evaluate \\\n",
" \"data.dataset_name=yahma/alpaca-cleaned\" \\\n",
" \"data.preprocessing_function_name=alpaca\" \\\n",
- " \"data.trainer_kwargs.dataset_text_field=prompt\" \\\n",
+ " \"data.text_col=prompt\" \\\n",
" \"model.model_name=microsoft/Phi-3-mini-4k-instruct\" \\\n",
- " \"model.trust_remote_code=true\" \\\n",
- " \"training.output_dir=eval/\""
+ " \"model.trust_remote_code=true\""
]
}
],
diff --git a/scripts/finetune_sft_phi3.sh b/scripts/finetune_sft_phi3.sh
index c0bf44fd9f..53eb2560f5 100755
--- a/scripts/finetune_sft_phi3.sh
+++ b/scripts/finetune_sft_phi3.sh
@@ -5,6 +5,6 @@ python -m lema.train \
"model.model_name=microsoft/Phi-3-mini-4k-instruct" \
"data.dataset_name=yahma/alpaca-cleaned" \
"data.preprocessing_function_name=alpaca" \
- "data.trainer_kwargs.dataset_text_field=prompt" \
+ "data.text_col=prompt" \
"training.output_dir=train/" \
"model.trust_remote_code=true"
diff --git a/src/lema/builders/data.py b/src/lema/builders/data.py
index dbe10961d3..b2c02c2aa1 100644
--- a/src/lema/builders/data.py
+++ b/src/lema/builders/data.py
@@ -79,12 +79,16 @@ def build_dataset(
dataset_kwargs = {}
if config.model.model_max_length:
dataset_kwargs["seq_length"] = config.model.model_max_length
-
+ # Our preprocessing functions take a dict as input and return a dict as output.
+ # formatting_func must return a str, so we fetch the target str from the dict.
+ if preprocessing_fn:
+ dataset_kwargs["formatting_func"] = lambda x: preprocessing_fn(x)[
+ data_params.text_col
+ ]
dataset = ConstantLengthDataset(
tokenizer,
dataset,
- dataset_text_field=data_params.get_dataset_text_field(),
- formatting_func=preprocessing_fn,
+ dataset_text_field=data_params.text_col,
**dataset_kwargs,
)
elif data_params.preprocessing_function_name:
diff --git a/src/lema/core/types.py b/src/lema/core/types.py
index baf2b4e70c..91fb5e3361 100644
--- a/src/lema/core/types.py
+++ b/src/lema/core/types.py
@@ -7,7 +7,7 @@
from omegaconf import MISSING, OmegaConf
from peft.utils.peft_types import TaskType
-_DATASET_TEXT_FIELD = "dataset_text_field"
+from lema.logging import logger
#
@@ -30,7 +30,7 @@ class TrainerType(Enum):
class TrainingParams:
optimizer: str = "adamw_torch"
use_peft: bool = False
- trainer_type: TrainerType = TrainerType.TRL_SFT
+ trainer_type: TrainerType = TrainerType.HF
enable_gradient_checkpointing: bool = False
output_dir: str = "output"
per_device_train_batch_size: int = 8
@@ -137,29 +137,21 @@ def _default_factory_preprocessing_kwargs() -> dict:
defaults["batched"] = True # Note the default of huggingface is False.
return defaults
+ # The dataset column name containing the text to train on. Required for SFTTrainer.
+ text_col: Optional[str] = None
preprocessing_function_name: Optional[str] = None
preprocessing_function_kwargs: Dict[str, Any] = field(
default_factory=_default_factory_preprocessing_kwargs
)
trainer_kwargs: Dict[str, Any] = field(default_factory=dict)
- def get_dataset_text_field(self) -> Optional[str]:
- """Get the `dataset_text_field` value if present."""
- return self.trainer_kwargs.get(_DATASET_TEXT_FIELD)
-
def __post_init__(self):
- """Verify params if packing is enabled."""
+ """Verify params."""
if self.pack:
if not self.stream:
raise ValueError("`stream` must be enabled if `pack` is enabled.")
- if (
- not self.preprocessing_function_name
- and _DATASET_TEXT_FIELD not in self.trainer_kwargs
- ):
- raise ValueError(
- "Either `trainer_kwargs['dataset_text_field']` "
- "or `preprocessing_function_name` must be specified."
- )
+ if not self.text_col:
+ raise ValueError("`text_col` must be specified if `pack` is enabled.")
@dataclass
@@ -269,6 +261,26 @@ class TrainingConfig(BaseConfig):
training: TrainingParams = field(default_factory=TrainingParams)
peft: PeftParams = field(default_factory=PeftParams)
+ def __post_init__(self):
+ """Verify/populate params."""
+ if self.training.trainer_type == TrainerType.TRL_SFT:
+ if not self.data.text_col:
+ raise ValueError("`text_col` must be specified for TRL_SFT Trainer.")
+
+ # Set `dataset_text_field` in `trainer_kwargs` since it's requried for
+ # `SFTTrainer`, and warn users if their value will be overridden.
+ existing_dataset_text_field = self.data.trainer_kwargs.get(
+ "dataset_text_field"
+ )
+ if (
+ existing_dataset_text_field is not None
+ ) and existing_dataset_text_field != self.data.text_col:
+ logger.warning(
+ "Overriding existing `dataset_text_field` value "
+ f'"{existing_dataset_text_field}" with "{self.data.text_col}"'
+ )
+ self.data.trainer_kwargs["dataset_text_field"] = self.data.text_col
+
@dataclass
class EvaluationConfig(BaseConfig):
diff --git a/tests/test_evaluate.py b/tests/test_evaluate.py
index c69ef06f13..3683c44801 100644
--- a/tests/test_evaluate.py
+++ b/tests/test_evaluate.py
@@ -9,9 +9,6 @@ def test_basic_evaluate():
data=DataParams(
dataset_name="yahma/alpaca-cleaned",
preprocessing_function_name="alpaca",
- trainer_kwargs={
- "dataset_text_field": "prompt",
- },
),
model=ModelParams(
model_name="openai-community/gpt2",
diff --git a/tests/test_train.py b/tests/test_train.py
index e9a2719b12..67f9941500 100644
--- a/tests/test_train.py
+++ b/tests/test_train.py
@@ -1,7 +1,15 @@
import tempfile
+import pytest
+
from lema import train
-from lema.core.types import DataParams, ModelParams, TrainingConfig, TrainingParams
+from lema.core.types import (
+ DataParams,
+ ModelParams,
+ TrainerType,
+ TrainingConfig,
+ TrainingParams,
+)
def test_basic_train():
@@ -11,15 +19,14 @@ def test_basic_train():
data=DataParams(
dataset_name="yahma/alpaca-cleaned",
preprocessing_function_name="alpaca",
- trainer_kwargs={
- "dataset_text_field": "prompt",
- },
+ text_col="prompt",
),
model=ModelParams(
model_name="openai-community/gpt2",
trust_remote_code=True,
),
training=TrainingParams(
+ trainer_type=TrainerType.TRL_SFT,
max_steps=3,
logging_steps=3,
enable_wandb=False,
@@ -38,9 +45,7 @@ def test_custom_train():
data=DataParams(
dataset_name="yahma/alpaca-cleaned",
preprocessing_function_name="alpaca",
- trainer_kwargs={
- "dataset_text_field": "prompt",
- },
+ text_col="prompt",
),
model=ModelParams(
model_name="learning-machines/sample",
@@ -48,6 +53,7 @@ def test_custom_train():
trust_remote_code=False,
),
training=TrainingParams(
+ trainer_type=TrainerType.TRL_SFT,
max_steps=3,
logging_steps=3,
enable_wandb=False,
@@ -58,3 +64,35 @@ def test_custom_train():
)
train(config)
+
+
+# Currently takes a long time to run because packing is very slow.
+# TODO: Change `skip` to `e2e` after #62 is fixed.
[email protected]
+def test_pack_train():
+ output_temp_dir = tempfile.mkdtemp()
+
+ config: TrainingConfig = TrainingConfig(
+ data=DataParams(
+ dataset_name="Salesforce/wikitext",
+ dataset_config="wikitext-2-raw-v1",
+ stream=True,
+ pack=True,
+ text_col="text",
+ ),
+ model=ModelParams(
+ model_name="openai-community/gpt2",
+ model_max_length=1024,
+ trust_remote_code=True,
+ ),
+ training=TrainingParams(
+ trainer_type=TrainerType.TRL_SFT,
+ max_steps=1,
+ logging_steps=1,
+ enable_wandb=False,
+ enable_tensorboard=False,
+ output_dir=output_temp_dir,
+ ),
+ )
+
+ train(config)
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Code Refactoring / Architectural Improvement"
}
|
skypilot-org__skypilot-5838@891923b
|
skypilot-org/skypilot
|
Python
| 5,838
|
[UX] Improve error message when one node fails
|
<!-- Describe the changes in this PR -->
Fixes #4232. Prints:
> ERROR: Job X failed with return code list: [137, 1, 137, 137] (A worker failed with return code 1, SkyPilot cleaned up the processes on other nodes with return code 137)
<!-- Describe the tests ran -->
Manually ran with a yaml file that runs `exit 1`
<!-- Unit tests (tests/test_*.py) are part of GitHub CI; below are tests that launch on the cloud. -->
Tested (run the relevant ones):
- [x] Code formatting: install pre-commit (auto-check on commit) or `bash format.sh`
- [ ] Any manual or new tests for this PR (please specify below)
- [ ] All smoke tests: `/smoke-test` (CI) or `pytest tests/test_smoke.py` (local)
- [ ] Relevant individual tests: `/smoke-test -k test_name` (CI) or `pytest tests/test_smoke.py::test_name` (local)
- [ ] Backward compatibility: `/quicktest-core` (CI) or `pytest tests/smoke_tests/test_backward_compat.py` (local)
<!-- CI commands (/-prefixed) can only be triggered by repo members -->
|
2025-06-02T22:14:42Z
|
[Core/UX] Improve the display of returncode for multi-node
<!-- Describe the bug report / feature request here -->
When a user's job is running on multiple nodes and one node fails with a return code, e.g. 1, SkyPilot will kill the processes on the other nodes with a return code 137. It is confusing for users to see a list of return codes like the following: `ERROR: Job 1 failed with return code list: [1, 137, 137]`
Instead, we should show a message like the following:
```
ERROR: Job 1 failed with returncode: 1 on one node worker-2, SkyPilot cleaned the processes on other nodes with returncode 137
```
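For illustration, here is a minimal, self-contained sketch of how such a message could be assembled from the raw return-code list (the function name and exact wording are made up for this example and are not the actual SkyPilot code path):

```python
from typing import List


def summarize_returncodes(job_id: int, returncodes: List[int]) -> str:
    """Build a friendlier failure message from a multi-node return-code list.

    Assumes the convention described above: 137 (SIGKILL) marks nodes whose
    processes were cleaned up after another node failed first.
    """
    # Pick the first "real" failure, i.e. a code other than the 137 cleanup kills.
    real_failure = next((r for r in returncodes if r != 137), None)
    if real_failure is None:
        # Every node reported 137; nothing more specific to say.
        return f'Job {job_id} failed with return code list: {returncodes}'
    return (f'Job {job_id} failed with return code list: {returncodes} '
            f'(a worker failed with return code {real_failure}, SkyPilot '
            f'cleaned up the processes on other nodes with return code 137)')


print(summarize_returncodes(1, [1, 137, 137]))
```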
<!-- If relevant, fill in versioning info to help us troubleshoot -->
_Version & Commit info:_
* `sky -v`: PLEASE_FILL_IN
* `sky -c`: PLEASE_FILL_IN
|
[
{
"body": "<!-- Describe the bug report / feature request here -->\r\n\r\nWhen a user's job is running on multiple nodes and one node fails with a return code, e.g. 1, SkyPilot will kill the processes on the other nodes, with a return code 137. It is confusing to users to see a list of return code like the following: `ERROR: Job 1 failed with return code list: [1, 137, 137]`\r\n\r\nInstead, we should show message like the following:\r\n```\r\nERROR: Job 1 failed with returncode: 1 on one node worker-2, SkyPilot cleaned the processes on other nodes with returncode 137\r\n```\r\n\r\n<!-- If relevant, fill in versioning info to help us troubleshoot -->\r\n_Version & Commit info:_\r\n* `sky -v`: PLEASE_FILL_IN\r\n* `sky -c`: PLEASE_FILL_IN\r\n",
"number": 4232,
"title": "[Core/UX] Improve the display of returncode for multi-node"
}
] |
f54f8eae7b1edb76c7fc3ef3be42dc833642b120
|
{
"head_commit": "891923bad03945f386e7f401b192648670f14fc4",
"head_commit_message": "Removed rank number",
"patch_to_review": "diff --git a/sky/backends/cloud_vm_ray_backend.py b/sky/backends/cloud_vm_ray_backend.py\nindex 147a08e63d5..cfae58e65a3 100644\n--- a/sky/backends/cloud_vm_ray_backend.py\n+++ b/sky/backends/cloud_vm_ray_backend.py\n@@ -699,6 +699,11 @@ def add_epilogue(self) -> None:\n # 139 is the return code of SIGSEGV, i.e. Segmentation Fault.\n if any(r == 139 for r in returncodes):\n reason = '(likely due to Segmentation Fault)'\n+ if any(r == 137 for r in returncodes):\n+ # Find the first non-137 return code and its index\n+ non_137 = next((i, r) for i, r in enumerate(returncodes) if r != 137)\n+ # +1 because the worker rank is 0-based, but I think the worker number is 1-based\n+ reason = f'(A Worker failed with return code {{non_137[1]}}, SkyPilot cleaned up the processes on other nodes with return code 137)'\n print('ERROR: {colorama.Fore.RED}Job {self.job_id} failed with '\n 'return code list:{colorama.Style.RESET_ALL}',\n returncodes,\n"
}
|
[
{
"diff_hunk": "@@ -699,6 +699,11 @@ def add_epilogue(self) -> None:\n # 139 is the return code of SIGSEGV, i.e. Segmentation Fault.\n if any(r == 139 for r in returncodes):\n reason = '(likely due to Segmentation Fault)'\n+ if any(r == 137 for r in returncodes):\n+ # Find the first non-137 return code and its index\n+ non_137 = next((i, r) for i, r in enumerate(returncodes) if r != 137)\n+ # +1 because the worker rank is 0-based, but I think the worker number is 1-based",
"line": null,
"original_line": 705,
"original_start_line": 703,
"path": "sky/backends/cloud_vm_ray_backend.py",
"start_line": null,
"text": "@user1:\nWe can remove the index stuff now. (The `enumerate` call and the comments.)"
}
] |
3d057ae4f8475bc4d63bfcfa386a86995c6c8dfd
|
diff --git a/sky/backends/cloud_vm_ray_backend.py b/sky/backends/cloud_vm_ray_backend.py
index 147a08e63d5..04e523dbfa1 100644
--- a/sky/backends/cloud_vm_ray_backend.py
+++ b/sky/backends/cloud_vm_ray_backend.py
@@ -699,6 +699,10 @@ def add_epilogue(self) -> None:
# 139 is the return code of SIGSEGV, i.e. Segmentation Fault.
if any(r == 139 for r in returncodes):
reason = '(likely due to Segmentation Fault)'
+ if any(r == 137 for r in returncodes):
+ # Find the first non-137 return code
+ non_137 = next(r for r in returncodes if r != 137)
+ reason = f'(A Worker failed with return code {{non_137}}, SkyPilot cleaned up the processes on other nodes with return code 137)'
print('ERROR: {colorama.Fore.RED}Job {self.job_id} failed with '
'return code list:{colorama.Style.RESET_ALL}',
returncodes,
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
}
|
|
skypilot-org__skypilot-5926@372ba7c
|
skypilot-org/skypilot
|
Python
| 5,926
|
[k8s] Better GPU Label Formatter Support for CoreWeave
|
<!-- Describe the changes in this PR -->
Fixes #5628
So if I add the invalid node labels to a GKE cluster, I can properly see that the CoreWeave GPU label formatter is being picked up. For obvious reasons, `show-gpus` fails (as the cluster doesn't actually have H100 GPUs with that label).
<!-- Describe the tests ran -->
<!-- Unit tests (tests/test_*.py) are part of GitHub CI; below are tests that launch on the cloud. -->
Tested (run the relevant ones):
- [X] Code formatting: install pre-commit (auto-check on commit) or `bash format.sh`
- [ ] Any manual or new tests for this PR (please specify below)
- [ ] All smoke tests: `/smoke-test` (CI) or `pytest tests/test_smoke.py` (local)
- [ ] Relevant individual tests: `/smoke-test -k test_name` (CI) or `pytest tests/test_smoke.py::test_name` (local)
- [ ] Backward compatibility: `/quicktest-core` (CI) or `pytest tests/smoke_tests/test_backward_compat.py` (local)
<!-- CI commands (/-prefixed) can only be triggered by repo members -->
|
2025-06-09T13:30:05Z
|
[k8s] Handle invalid formats in GPU label formatter detection
A user had a Coreweave cluster with both GKE and CoreWeave labels
```
cloud.google.com/gke-accelerator=H100_NVLINK_80GB
gpu.nvidia.com/class=H100_NVLINK_80GB
gpu.nvidia.com/count=8
gpu.nvidia.com/model=H100_NVLINK_80GB
gpu.nvidia.com/vram=81
```
Since the GKE label value is in the wrong format (it typically has `-` as a separator and starts with `nvidia`), the user got:
```
$ sky show-gpus --cloud kubernetes
Invalid accelerator name in GKE cluster: H100_NVLINK_80GB
```
While it's very odd that the cluster comes with both labels, our label formatter should be robust to such setups. If a label format is detected but has invalid values, we should raise a warning and fall back to the next available label format.
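As a rough illustration of the requested fallback behaviour (the `label_key`/`validate` interface below is invented for the sketch and is not the real SkyPilot formatter API):

```python
import logging
from typing import Dict, List, Optional

logger = logging.getLogger(__name__)


def pick_gpu_label_formatter(registry: List,
                             node_labels: Dict[str, str]) -> Optional[object]:
    """Return the first formatter whose label key is present AND has a valid value.

    Each entry in `registry` is assumed to expose a `label_key` attribute and a
    `validate(value) -> (bool, reason)` classmethod; both names are illustrative.
    """
    for formatter in registry:
        value = node_labels.get(formatter.label_key)
        if value is None:
            continue  # This formatter's label is not on the node at all.
        valid, reason = formatter.validate(value)
        if valid:
            return formatter
        # The label matched but its value is malformed (e.g. H100_NVLINK_80GB
        # under the GKE key): warn and fall back to the next formatter.
        logger.warning('Label %s matched %s but value %r is invalid (%s); '
                       'trying the next formatter.', formatter.label_key,
                       getattr(formatter, '__name__', formatter), value, reason)
    return None
```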
|
@romilbhardwaj @kyuds I was able to successfully launch a SkyPilot cluster on a CoreWeave Kubernetes cluster, but I had to do a few hacky things. I'm sure you can find a better solution for SkyPilot.
1. I ran into a similar invalid GKE cluster accelerator name error, so I put the `CoreWeaveLabelFormatter` at the start of the `LABEL_FORMATTER_REGISTRY`
2. I ran into an issue where after launching the cluster, launching would hang indefinitely. Checking the pod would yield an error saying "Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: no runtime for "containerd" is configured" The only runtime class in the cluster was one named `nvidia`, but used the `containerd` handler. I didn't want to break the cluster for anyone else, so I created a new runtime class called `nvidia-fixed` with the `nvidia` handler. I had to change the code in a couple places to set `...['runtimeClassName'] = 'nvidia-fixed'`.
After making those changes, launching worked, though there was a `414 Request-URI Too Large` error at the end that didn't seem to affect the actual launching, and I had to wait a bit before successfully SSHing into the cluster.
Sorry, I was a bit backlogged. I have a preliminary PR with an attempt to address this issue, but I don't have a CoreWeave cluster to test the second point mentioned by @bradhilton ... In theory this should work, though.
|
[
{
"body": "A user had a Coreweave cluster with both GKE and CoreWeave labels\n```\n cloud.google.com/gke-accelerator=H100_NVLINK_80GB\n gpu.nvidia.com/class=H100_NVLINK_80GB\n gpu.nvidia.com/count=8\n gpu.nvidia.com/model=H100_NVLINK_80GB\n gpu.nvidia.com/vram=81\n```\n\nSince the GKE label value is in the wrong format (it typically has `-` as a separator and starts with `nvidia`), the user got:\n```\n$ sky show-gpus --cloud kubernetes\nInvalid accelerator name in GKE cluster: H100_NVLINK_80GB\n```\n\nWhile it's very odd that the cluster comes with both labels, our label formatter should be robust to such setups. If a label format is detected but has invalid values, we should raise a warning and fall back to the next available label format.",
"number": 5628,
"title": "[k8s] Handle invalid formats in GPU label formatter detection"
}
] |
1e6d2a58148b13e238edd1a47a56b50266da25b8
|
{
"head_commit": "372ba7c5c5a943fb184807b71e7e073d97b856b3",
"head_commit_message": "simple fix",
"patch_to_review": "diff --git a/sky/provision/kubernetes/utils.py b/sky/provision/kubernetes/utils.py\nindex 0b792bfaad5..3dbfd18f6a1 100644\n--- a/sky/provision/kubernetes/utils.py\n+++ b/sky/provision/kubernetes/utils.py\n@@ -460,6 +460,14 @@ def get_accelerator_from_label_value(cls, value: str) -> str:\n raise ValueError(\n f'Invalid accelerator name in GKE cluster: {value}')\n \n+ @classmethod\n+ def validate_label_value(cls, value: str) -> Tuple[bool, str]:\n+ try:\n+ _ = cls.get_accelerator_from_label_value(value)\n+ return True, ''\n+ except ValueError as e:\n+ return False, str(e)\n+\n \n class GFDLabelFormatter(GPULabelFormatter):\n \"\"\"GPU Feature Discovery label formatter\n@@ -564,17 +572,29 @@ def detect_gpu_label_formatter(\n for label, value in node.metadata.labels.items():\n node_labels[node.metadata.name].append((label, value))\n \n- label_formatter = None\n-\n # Check if the node labels contain any of the GPU label prefixes\n for lf in LABEL_FORMATTER_REGISTRY:\n+ skip = False\n for _, label_list in node_labels.items():\n- for label, _ in label_list:\n+ for label, value in label_list:\n if lf.match_label_key(label):\n- label_formatter = lf()\n- return label_formatter, node_labels\n+ valid, reason = lf.validate_label_value(value)\n+ if valid:\n+ return lf(), node_labels\n+ else:\n+ logger.warning(f'Gpu label {label} matched for label '\n+ f'formatter {lf.__class__.__name__}, '\n+ f'but has invalid value {value}. '\n+ f'Reason: {reason}. '\n+ 'Skipping...')\n+ skip = True\n+ break\n+ if skip:\n+ break\n+ if skip:\n+ continue\n \n- return label_formatter, node_labels\n+ return None, node_labels\n \n \n class Autoscaler:\n"
}
|
[
{
"diff_hunk": "@@ -564,17 +572,29 @@ def detect_gpu_label_formatter(\n for label, value in node.metadata.labels.items():\n node_labels[node.metadata.name].append((label, value))\n \n- label_formatter = None\n-\n # Check if the node labels contain any of the GPU label prefixes\n for lf in LABEL_FORMATTER_REGISTRY:\n+ skip = False\n for _, label_list in node_labels.items():\n- for label, _ in label_list:\n+ for label, value in label_list:\n if lf.match_label_key(label):\n- label_formatter = lf()\n- return label_formatter, node_labels\n+ valid, reason = lf.validate_label_value(value)\n+ if valid:\n+ return lf(), node_labels\n+ else:\n+ logger.warning(f'Gpu label {label} matched for label '",
"line": null,
"original_line": 585,
"original_start_line": null,
"path": "sky/provision/kubernetes/utils.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n logger.warning(f'GPU label {label} matched for label '\r\n```\r\n\r\nEdit: fix typo\n\n@author:\nthis works? like not having the f-string?\n\n@author:\nchanged Gpu to all caps"
}
] |
2a8b156f8bc8091ab2c9ccff895293054e08743b
|
diff --git a/sky/provision/kubernetes/utils.py b/sky/provision/kubernetes/utils.py
index 5931ce88f83..e3d1e937766 100644
--- a/sky/provision/kubernetes/utils.py
+++ b/sky/provision/kubernetes/utils.py
@@ -461,6 +461,14 @@ def get_accelerator_from_label_value(cls, value: str) -> str:
raise ValueError(
f'Invalid accelerator name in GKE cluster: {value}')
+ @classmethod
+ def validate_label_value(cls, value: str) -> Tuple[bool, str]:
+ try:
+ _ = cls.get_accelerator_from_label_value(value)
+ return True, ''
+ except ValueError as e:
+ return False, str(e)
+
class GFDLabelFormatter(GPULabelFormatter):
"""GPU Feature Discovery label formatter
@@ -565,17 +573,29 @@ def detect_gpu_label_formatter(
for label, value in node.metadata.labels.items():
node_labels[node.metadata.name].append((label, value))
- label_formatter = None
-
# Check if the node labels contain any of the GPU label prefixes
for lf in LABEL_FORMATTER_REGISTRY:
+ skip = False
for _, label_list in node_labels.items():
- for label, _ in label_list:
+ for label, value in label_list:
if lf.match_label_key(label):
- label_formatter = lf()
- return label_formatter, node_labels
+ valid, reason = lf.validate_label_value(value)
+ if valid:
+ return lf(), node_labels
+ else:
+ logger.warning(f'GPU label {label} matched for label '
+ f'formatter {lf.__class__.__name__}, '
+ f'but has invalid value {value}. '
+ f'Reason: {reason}. '
+ 'Skipping...')
+ skip = True
+ break
+ if skip:
+ break
+ if skip:
+ continue
- return label_formatter, node_labels
+ return None, node_labels
class Autoscaler:
diff --git a/tests/unit_tests/kubernetes/test_kubernetes_utils.py b/tests/unit_tests/kubernetes/test_kubernetes_utils.py
index 865cf696765..431387026ce 100644
--- a/tests/unit_tests/kubernetes/test_kubernetes_utils.py
+++ b/tests/unit_tests/kubernetes/test_kubernetes_utils.py
@@ -277,3 +277,31 @@ def test_get_all_kube_context_names():
# Clean up temporary files
os.unlink(f1.name)
os.unlink(f2.name)
+
+
+def test_detect_gpu_label_formatter_invalid_label_skip():
+ """Tests that on finding a matching label, the
+ detect_gpu_label_formatter method will skip if
+ the label value is invalid."""
+
+ # this is an invalid GKE gpu label
+ valid, _ = utils.GKELabelFormatter.validate_label_value('H100_NVLINK_80GB')
+ assert not valid
+
+ # make node mocks with incorrect labels, as shown in
+ # https://github.com/skypilot-org/skypilot/issues/5628
+ mock_node = mock.MagicMock()
+ mock_node.metadata.name = 'node'
+ mock_node.metadata.labels = {
+ 'cloud.google.com/gke-accelerator': 'H100_NVLINK_80GB',
+ 'gpu.nvidia.com/class': 'H100_NVLINK_80GB',
+ 'gpu.nvidia.com/count': '8',
+ 'gpu.nvidia.com/model': 'H100_NVLINK_80GB',
+ 'gpu.nvidia.com/vram': '81'
+ }
+
+ with mock.patch('sky.provision.kubernetes.utils.get_kubernetes_nodes',
+ return_value=[mock_node]):
+ lf, _ = utils.detect_gpu_label_formatter('whatever')
+ assert lf is not None
+ assert isinstance(lf, utils.CoreWeaveLabelFormatter)
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
skypilot-org__skypilot-5812@54d7185
|
skypilot-org/skypilot
|
Python
| 5,812
|
[k8s] Wait for apt installation to complete before proceeding with setup
|
As a part of nimbus (https://github.com/skypilot-org/skypilot/pull/4393) we parallelized dependency setup in the k8s pods. However, we did not add a wait for SSH setup to complete. Waiting for SSH is critical for rsync operations to succeed.
A recent archive.ubuntu.com outage caused delayed SSH installs, resulting in #5794 and #5792.
This PR fixes it by waiting for SSH install to complete.
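Conceptually, the change boils down to blocking on a completion marker written by the asynchronous apt/SSH setup step before the rest of setup continues. A rough Python equivalent of that idea (the real change lives in the pod setup shell template, and the timeout below is an arbitrary placeholder):

```python
import pathlib
import time


def wait_for_ssh_setup(marker: str = '/tmp/apt_ssh_setup_complete',
                       timeout: float = 600.0,
                       poll_interval: float = 0.5) -> None:
    """Block until the async apt/SSH setup step drops its completion marker."""
    path = pathlib.Path(marker)
    deadline = time.monotonic() + timeout
    while not path.exists():
        if time.monotonic() > deadline:
            raise TimeoutError(
                f'SSH setup did not finish within {timeout}s '
                f'(marker {marker} never appeared).')
        time.sleep(poll_interval)
```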
Note: we are not updating APT mirrors in this PR to avoid hardcoding mirrors.
Closes #5794 and #5792.
Tested:
- [x] `sky launch -c test --infra kubernetes --image-id docker:nvcr.io/nvidia/pytorch:24.05-py3`
- [x] `sky launch -c test --infra kubernetes --image-id docker:ubuntu:latest`
- [x] Back compat - `sky launch` with master, then `sky launch` again with this branch
Note: we are not updating the mirror
|
2025-05-29T21:16:31Z
|
[k8s] `sky launch --infra k8s --image-id continuumio/miniconda3:latest` experience undeterministic ssh issue
```
E 05-28 23:26:07 sdk.py:1599] Using Python 3.10.17 environment at: /root/skypilot-runtime
--
| E 05-28 23:26:07 sdk.py:1599] skypilot 1.0.0.dev0
| E 05-28 23:26:07 sdk.py:1599] === Skypilot wheel installation completed in 0 secs ===
| E 05-28 23:26:07 sdk.py:1599] DefaultTasksMax=infinity
| E 05-28 23:26:07 sdk.py:1599] === Setup system configs and fuse completed in 0 secs ===
| E 05-28 23:26:07 sdk.py:1599] System has not been booted with systemd as init system (PID 1). Can't operate.
| E 05-28 23:26:07 sdk.py:1599] Failed to connect to bus: Host is down
| E 05-28 23:26:07 sdk.py:1599] ssh: unrecognized service
| E 05-28 23:26:07 sdk.py:1599]
| E 05-28 23:26:07 sdk.py:1599] ===== stderr =====command terminated with exit code 1
```
|
Looks like archive.ubuntu.com has been having issues, causing SSH install to fail:
```
Get:32 http://security.ubuntu.com/ubuntu noble-updates/main amd64 openssh-server amd64 1:9.6p1-3ubuntu13.11 [509 kB]
Get:42 http://security.ubuntu.com/ubuntu noble-updates/main amd64 libglib2.0-0t64 amd64 2.80.0-6ubuntu3.4 [1544 kB]
Get:43 http://security.ubuntu.com/ubuntu noble-updates/main amd64 gir1.2-glib-2.0 amd64 2.80.0-6ubuntu3.4 [183 kB]
Get:46 http://security.ubuntu.com/ubuntu noble-updates/main amd64 libglib2.0-data all 2.80.0-6ubuntu3.4 [48.7 kB]
Get:49 http://security.ubuntu.com/ubuntu noble-updates/main amd64 libxml2 amd64 2.9.14+dfsg-1.3ubuntu3.3 [762 kB]
Get:55 http://security.ubuntu.com/ubuntu noble-updates/main amd64 python3-pkg-resources all 68.1.2-2ubuntu1.1 [168 kB]
Get:65 http://security.ubuntu.com/ubuntu noble-updates/main amd64 python3-cryptography amd64 41.0.7-4ubuntu0.1 [810 kB]
Fetched 11.8 MB in 1min 2s (192 kB/s)
E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/p/python3-defaults/python3-minimal_3.12.3-0ubuntu2_amd64.deb Unable to connect to archive.ubuntu.com:80:
E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/m/media-types/media-types_10.1.0_all.deb Unable to connect to archive.ubuntu.com:80:
E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/n/netbase/netbase_6.4_all.deb Unable to connect to archive.ubuntu.com:80:
E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/r/readline/readline-common_8.2-4build1_all.deb Unable to connect to archive.ubuntu.com:80:
....
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
Optional package openssh-server installation failed, skip.
```
Ping to `archive.ubuntu.com` is also borked:
<img width="540" alt="Image" src="https://github.com/user-attachments/assets/9208c245-8fd7-4bee-b405-82121f8d706b" />
|
[
{
"body": "```\nE 05-28 23:26:07 sdk.py:1599] Using Python 3.10.17 environment at: /root/skypilot-runtime\n--\n | E 05-28 23:26:07 sdk.py:1599] skypilot 1.0.0.dev0\n | E 05-28 23:26:07 sdk.py:1599] === Skypilot wheel installation completed in 0 secs ===\n | E 05-28 23:26:07 sdk.py:1599] DefaultTasksMax=infinity\n | E 05-28 23:26:07 sdk.py:1599] === Setup system configs and fuse completed in 0 secs ===\n | E 05-28 23:26:07 sdk.py:1599] System has not been booted with systemd as init system (PID 1). Can't operate.\n | E 05-28 23:26:07 sdk.py:1599] Failed to connect to bus: Host is down\n | E 05-28 23:26:07 sdk.py:1599] ssh: unrecognized service\n | E 05-28 23:26:07 sdk.py:1599]\n | E 05-28 23:26:07 sdk.py:1599] ===== stderr =====command terminated with exit code 1\n```",
"number": 5794,
"title": "[k8s] `sky launch --infra k8s --image-id continuumio/miniconda3:latest` experience undeterministic ssh issue"
}
] |
f3edb2141cf7bfea30c552e2812c14f56d7149ab
|
{
"head_commit": "54d7185ad080b6e99c6e93c0c6b380d73c701e32",
"head_commit_message": "Remove mirror update",
"patch_to_review": "diff --git a/sky/templates/kubernetes-ray.yml.j2 b/sky/templates/kubernetes-ray.yml.j2\nindex f2de21cc091..0c8e9ccee1e 100644\n--- a/sky/templates/kubernetes-ray.yml.j2\n+++ b/sky/templates/kubernetes-ray.yml.j2\n@@ -395,6 +395,13 @@ available_node_types:\n # STEP 1: Run apt update, install missing packages, and set up ssh.\n (\n (\n+ # For backwards compatibility, we put a marker file in the pod\n+ # to indicate that the apt ssh setup step will write a completion\n+ # marker file (/tmp/apt_ssh_setup_complete) to the pod.\n+ # TODO: Remove this marker file and it's usage in setup_commands\n+ # after v0.11.0 release.\n+ touch /tmp/apt_ssh_setup_started\n+\n DEBIAN_FRONTEND=noninteractive $(prefix_cmd) apt-get update > /tmp/apt-update.log 2>&1 || \\\n echo \"Warning: apt-get update failed. Continuing anyway...\" >> /tmp/apt-update.log\n # Install both fuse2 and fuse3 for compatibility for all possible fuse adapters in advance,\n@@ -402,7 +409,7 @@ available_node_types:\n PACKAGES=\"rsync curl wget netcat gcc patch pciutils fuse fuse3 openssh-server\";\n \n # Separate packages into two groups: packages that are installed first\n- # so that curl, rsync and wget are available sooner to unblock the following\n+ # so that curl, rsync, ssh and wget are available sooner to unblock the following\n # conda installation and rsync.\n # Also, we install fuse first to avoid confliction with fuse3.\n set -e\n@@ -494,6 +501,8 @@ available_node_types:\n $(prefix_cmd) service ssh restart;\n $(prefix_cmd) sed -i \"s/mesg n/tty -s \\&\\& mesg n/\" ~/.profile;\n \n+ touch /tmp/apt_ssh_setup_complete\n+ echo \"=== SSH setup completed ===\"\n ) > /tmp/${STEPS[0]}.log 2>&1 || {\n echo \"Error: ${STEPS[0]} failed. Continuing anyway...\" > /tmp/${STEPS[0]}.failed\n cat /tmp/${STEPS[0]}.log\n@@ -791,6 +800,15 @@ setup_commands:\n {%- endfor %}\n STEPS=(\"apt-ssh-setup\" \"runtime-setup\" \"env-setup\")\n start_epoch=$(date +%s);\n+ \n+ # Wait for SSH setup to complete before proceeding\n+ if [ -f /tmp/apt_ssh_setup_started ]; then\n+ echo \"=== Logs for asynchronous SSH setup ===\";\n+ [ -f /tmp/apt_ssh_setup_complete ] && cat /tmp/${STEPS[0]}.log ||\n+ { tail -f -n +1 /tmp/${STEPS[0]}.log & TAIL_PID=$!; echo \"Tail PID: $TAIL_PID\"; until [ -f /tmp/apt_ssh_setup_complete ]; do sleep 0.5; done; kill $TAIL_PID || true; };\n+ [ -f /tmp/${STEPS[0]}.failed ] && { echo \"Error: ${STEPS[0]} failed. Exiting.\"; exit 1; } || true;\n+ fi\n+ \n echo \"=== Logs for asynchronous ray and skypilot installation ===\";\n if [ -f /tmp/skypilot_is_nimbus ]; then\n echo \"=== Logs for asynchronous ray and skypilot installation ===\";\n"
}
|
[
{
"diff_hunk": "@@ -395,14 +395,21 @@ available_node_types:\n # STEP 1: Run apt update, install missing packages, and set up ssh.\n (\n (\n+ # For backwards compatibility, we put a marker file in the pod\n+ # to indicate that the apt ssh setup step will write a completion\n+ # marker file (/tmp/apt_ssh_setup_complete) to the pod.\n+ # TODO: Remove this marker file and it's usage in setup_commands",
"line": null,
"original_line": 401,
"original_start_line": null,
"path": "sky/templates/kubernetes-ray.yml.j2",
"start_line": null,
"text": "@user1:\n```suggestion\r\n # TODO: Remove this marker file and its usage in setup_commands\r\n```"
}
] |
3d4b4df29041c8724e7923ddd2e7c78e4ac07bc7
|
diff --git a/sky/templates/kubernetes-ray.yml.j2 b/sky/templates/kubernetes-ray.yml.j2
index f2de21cc091..70fa6db2c30 100644
--- a/sky/templates/kubernetes-ray.yml.j2
+++ b/sky/templates/kubernetes-ray.yml.j2
@@ -395,6 +395,13 @@ available_node_types:
# STEP 1: Run apt update, install missing packages, and set up ssh.
(
(
+ # For backwards compatibility, we put a marker file in the pod
+ # to indicate that the apt ssh setup step will write a completion
+ # marker file (/tmp/apt_ssh_setup_complete) to the pod.
+ # TODO: Remove this marker file and its usage in setup_commands
+ # after v0.11.0 release.
+ touch /tmp/apt_ssh_setup_started
+
DEBIAN_FRONTEND=noninteractive $(prefix_cmd) apt-get update > /tmp/apt-update.log 2>&1 || \
echo "Warning: apt-get update failed. Continuing anyway..." >> /tmp/apt-update.log
# Install both fuse2 and fuse3 for compatibility for all possible fuse adapters in advance,
@@ -402,7 +409,7 @@ available_node_types:
PACKAGES="rsync curl wget netcat gcc patch pciutils fuse fuse3 openssh-server";
# Separate packages into two groups: packages that are installed first
- # so that curl, rsync and wget are available sooner to unblock the following
+ # so that curl, rsync, ssh and wget are available sooner to unblock the following
# conda installation and rsync.
# Also, we install fuse first to avoid confliction with fuse3.
set -e
@@ -494,6 +501,8 @@ available_node_types:
$(prefix_cmd) service ssh restart;
$(prefix_cmd) sed -i "s/mesg n/tty -s \&\& mesg n/" ~/.profile;
+ touch /tmp/apt_ssh_setup_complete
+ echo "=== SSH setup completed ==="
) > /tmp/${STEPS[0]}.log 2>&1 || {
echo "Error: ${STEPS[0]} failed. Continuing anyway..." > /tmp/${STEPS[0]}.failed
cat /tmp/${STEPS[0]}.log
@@ -791,6 +800,15 @@ setup_commands:
{%- endfor %}
STEPS=("apt-ssh-setup" "runtime-setup" "env-setup")
start_epoch=$(date +%s);
+
+ # Wait for SSH setup to complete before proceeding
+ if [ -f /tmp/apt_ssh_setup_started ]; then
+ echo "=== Logs for asynchronous SSH setup ===";
+ [ -f /tmp/apt_ssh_setup_complete ] && cat /tmp/${STEPS[0]}.log ||
+ { tail -f -n +1 /tmp/${STEPS[0]}.log & TAIL_PID=$!; echo "Tail PID: $TAIL_PID"; until [ -f /tmp/apt_ssh_setup_complete ]; do sleep 0.5; done; kill $TAIL_PID || true; };
+ [ -f /tmp/${STEPS[0]}.failed ] && { echo "Error: ${STEPS[0]} failed. Exiting."; exit 1; } || true;
+ fi
+
echo "=== Logs for asynchronous ray and skypilot installation ===";
if [ -f /tmp/skypilot_is_nimbus ]; then
echo "=== Logs for asynchronous ray and skypilot installation ===";
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
skypilot-org__skypilot-5602@79d0b57
|
skypilot-org/skypilot
|
Python
| 5,602
|
[UX] Support `--infra` option and deprecate `cloud/region/zone`
|
<!-- Describe the changes in this PR -->
This PR updates the UX for the output and adds support for `--infra`.
Also, fixes #5600
## Main changes
1. Support `--infra` and deprecate `--cloud`, `--region`, and `--zone`
2. Show `INFRA` in outputs instead of cloud/region/zone
3. Reorder and polish table outputs
```
sky launch --infra aws/us-east-1
Considered resources (1 node):
------------------------------------------------------------------------------
INFRA INSTANCE vCPUs Mem(GB) GPUS COST ($) CHOSEN
------------------------------------------------------------------------------
AWS (us-east-1) m6i.2xlarge 8 32 - 0.38 ✔
------------------------------------------------------------------------------
Launching a new cluster 'sky-472f-zhwu'. Proceed? [Y/n]:
```
```
sky launch --infra gcp
Considered resources (1 node):
------------------------------------------------------------------------------------
INFRA INSTANCE vCPUs Mem(GB) GPUS COST ($) CHOSEN
------------------------------------------------------------------------------------
GCP (us-central1-a) n2-standard-8 8 32 - 0.39 ✔
------------------------------------------------------------------------------------
Launching a new cluster 'sky-c16f-zhwu'. Proceed? [Y/n]:
```
```
sky launch --cloud k8s
The --cloud, --region, and --zone options are deprecated. Use --infra instead.
Considered resources (1 node):
-------------------------------------------------------------------------------------------
INFRA INSTANCE vCPUs Mem(GB) GPUS COST ($) CHOSEN
-------------------------------------------------------------------------------------------
Kubernetes (gke_sky-dev...ypilotalpha) - 2 2 - 0.00 ✔
-------------------------------------------------------------------------------------------
Launching a new cluster 'sky-5d56-zhwu'. Proceed? [Y/n]:
```
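For readers new to the flag: as the examples above suggest, the `--infra` value is of the form `cloud[/region[/zone]]`, with `k8s/<context>` for Kubernetes. A rough, illustrative sketch of that split (an approximation, not the actual parser):

```python
from typing import Optional, Tuple


def split_infra(infra: str) -> Tuple[str, Optional[str], Optional[str]]:
    """Best-effort split of an --infra value into (cloud, region, zone)."""
    cloud, _, rest = infra.partition('/')
    if not rest:
        return cloud, None, None
    if cloud.lower() in ('k8s', 'kubernetes'):
        # Treat everything after the first slash as the Kubernetes context.
        return cloud, rest, None
    region, _, zone = rest.partition('/')
    return cloud, region or None, zone or None


assert split_infra('aws/us-east-1') == ('aws', 'us-east-1', None)
assert split_infra('gcp/us-central1/us-central1-a') == ('gcp', 'us-central1', 'us-central1-a')
assert split_infra('k8s/my-h100-cluster') == ('k8s', 'my-h100-cluster', None)
```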
## Optimizer table
Original
```
sky launch --gpus H100:8
Considered resources (1 node):
-------------------------------------------------------------------------------------------------------------------------------------
CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN
-------------------------------------------------------------------------------------------------------------------------------------
Kubernetes 2CPU--2GB--8H100 2 8 H100:8 gke_sky-dev-xxx 0.00 ✔
RunPod 8x_H100_SECURE 128 640 H100:8 CA 19.12
Lambda gpu_8x_h100_sxm5 208 1800 H100:8 us-east-1 23.92
GCP a3-highgpu-8g 208 1872 H100:8 us-central1-a 46.02
AWS p5.48xlarge 192 2048 H100:8 us-east-1 98.32
-------------------------------------------------------------------------------------------------------------------------------------
Launching a new cluster 'sky-b84f-zhwu'. Proceed? [Y/n]:
```
New
```
sky launch --gpus H100:8
Considered resources (1 node):
---------------------------------------------------------------------------------------------------------
INFRA INSTANCE vCPUs Mem(GB) GPUS COST ($) CHOSEN
---------------------------------------------------------------------------------------------------------
Kubernetes (gke_sky-dev...ypilotalpha) - 2 8 H100:8 0.00 ✔
RunPod (CA) 8x_H100_SECURE 128 640 H100:8 19.12
Lambda (us-east-1) gpu_8x_h100_sxm5 208 1800 H100:8 23.92
GCP (us-central1-a) a3-highgpu-8g 208 1872 H100:8 46.02
AWS (us-east-1) p5.48xlarge 192 2048 H100:8 98.32
---------------------------------------------------------------------------------------------------------
Launching a new cluster 'sky-3d09-zhwu'. Proceed? [Y/n]:
```
## Status table
Original:
```
sky status -u
Clusters
NAME USER LAUNCHED RESOURCES STATUS AUTOSTOP COMMAND
test-normal zhwu 17 hrs ago 1x GCP(n2-standard-2) UP - sky launch -c test-normal...
dashboard-workspace zhwu 6 days ago 1x GCP(n2-standard-4, ports=['46580']) INIT - sky launch -c dashboard-w...
benchmark zhwu 2 weeks ago 1x Nebius(gpu-h200-sxm_8gpu-128vcpu-1600gb, mem=750+, {'H200': 8}, dis... STOPPED - sky exec -c benchmark qwe...
sky-jobs-controller-9ce1ce58 zhwu 14 hrs ago 1x GCP(n2-highmem-4, disk_size=50) STOPPED 10m sky jobs queue -r
```
New:
```
sky status -u
Clusters
NAME USER INFRA RESOURCES STATUS AUTOSTOP LAUNCHED
sky-f4e8-zhwu zhwu Kubernetes (gke_sky-dev-xxx...) 1x(gpus=H100:1, cpus=2, mem=8, ...) UP - 2 hrs ago
test-normal zhwu GCP (us-central1-a) 1x(cpus=2, mem=8, type=n2-standard-2, ...) UP - 21 hrs ago
dashboard-workspace zhwu GCP (us-central1-a) 1x(cpus=4, mem=16, type=n2-standard-4, ...) INIT - 7 days ago
benchmark zhwu Nebius (eu-west1) 1x(gpus=H200:8, type=gpu-h200-sxm_8g..., ...) STOPPED - 2 weeks ago
sky-jobs-controller-9ce1ce58 zhwu GCP (us-central1-a) 1x(cpus=4, mem=32, type=n2-highmem-4, ...) STOPPED 10m 17 hrs ago
```
## `sky check`
Original
```
To enable a cloud, follow the hints above and rerun: sky check
If any problems remain, refer to detailed docs at: https://docs.skypilot.co/en/latest/getting-started/installation.html
🎉 Enabled clouds 🎉
AWS [compute, storage]
GCP [compute, storage]
Kubernetes [compute]
Active context: gke_sky-dev-xxx
Lambda [compute]
RunPod [compute]
Using SkyPilot API server: http://127.0.0.1:46580
```
New
```
To enable a cloud, follow the hints above and rerun: sky check
If any problems remain, refer to detailed docs at: https://docs.skypilot.co/en/latest/getting-started/installation.html
🎉 Enabled infra 🎉
AWS [compute, storage]
GCP [compute, storage]
Kubernetes [compute]
Active context: gke_sky-dev-xxx
Lambda [compute]
RunPod [compute]
Using SkyPilot API server: http://127.0.0.1:46580
```
## `sky show-gpus`
```
sky show-gpus --infra k8s/my-h100-cluster
Kubernetes GPUs
Context: my-h100-cluster
GPU REQUESTABLE_QTY_PER_NODE UTILIZATION
H100 1, 2, 4, 8 5 of 8 free
H200 1, 2, 4, 8 8 of 8 free
Kubernetes per-node GPU availability
CONTEXT NODE GPU UTILIZATION
my-h100-cluster gke-xxx-default-pool-ff931856-6uvd - 0 of 0 free
my-h100-cluster gke-xxx-largecpu-05dae726-1usy H100 5 of 8 free
my-h100-cluster gke-xxx-largecpu-05dae726-4rxa H200 8 of 8 free
```
## Dashboard
Original
<img width="1707" alt="image" src="https://github.com/user-attachments/assets/d7f82129-ef9e-482c-9f04-7055e288b64e" />
New
<img width="1691" alt="image" src="https://github.com/user-attachments/assets/76716e8a-fbcf-4b98-a0be-de80f58b83a3" />
### Cluster
Original

New
<img width="1706" alt="image" src="https://github.com/user-attachments/assets/c325874c-4f5f-492d-9efa-26ce59cc1900" />
<img width="1709" alt="image" src="https://github.com/user-attachments/assets/6f7133b1-3eba-4e1f-bf3f-30952c224c20" />
## TODOs
- [x] Update `sky check`
- [x] Update `sky show-gpus`
- [x] Update smoke tests
- [x] Check ordered and any_of logic
- [x] Update docs with the new table
- [x] Test with an old jobs controller
- [x] Test with old YAML that has `cloud`, `region`, `zone` only
## Future TODOs
- [ ] Update the doc to be infra-centric instead of cloud/region-centric (cc'ing @concretevitamin)
<!-- Describe the tests ran -->
<!-- Unit tests (tests/test_*.py) are part of GitHub CI; below are tests that launch on the cloud. -->
Tested (run the relevant ones):
- [x] Code formatting: install pre-commit (auto-check on commit) or `bash format.sh`
- [ ] Any manual or new tests for this PR (please specify below)
- [x] Old controller, `jobs queue`, `jobs logs`, `jobs cancel` without `sky jobs launch` first
- [ ] All smoke tests: `/smoke-test` (CI) or `pytest tests/test_smoke.py` (local)
- [x] `/smoke-test --aws` [4b79915](https://github.com/skypilot-org/skypilot/pull/5602/commits/4b799157940b27b9f77b6308f2d6069c3403a0e7)
- [x] `/smoke-test --gcp` [4b79915](https://github.com/skypilot-org/skypilot/pull/5602/commits/4b799157940b27b9f77b6308f2d6069c3403a0e7) (except for `test_tpu`)
- [x] `/smoke-test --kubernetes` [583360f](https://github.com/skypilot-org/skypilot/pull/5602/commits/583360f9670b667bd32795bf7d705776d5ae4901)
- [x] `/smoke-test --azure` [be75cbe](https://github.com/skypilot-org/skypilot/pull/5602/commits/be75cbe33f8b6cf9c21e58ed8e8d757b0e829aa4)
- [ ] Relevant individual tests: `/smoke-test -k test_name` (CI) or `pytest tests/test_smoke.py::test_name` (local)
- [x] Backward compatibility: `/quicktest-core` (CI) or `pytest tests/smoke_tests/test_backward_compat.py` (local)
- [x] `/quicktest-core` passed on [e39e339](https://github.com/skypilot-org/skypilot/pull/5602/commits/e39e3396de60a4fe6b1aaf003b61c67fec2b69f7)
<!-- CI commands (/-prefixed) can only be triggered by repo members -->
|
2025-05-16T16:47:08Z
|
[UX] Failover reason table has too narrow width
```
resources:
accelerators: { H100:8, A100-80GB:8 }
cpus: 32+
disk_size: 512 # Ensure model checkpoints can fit.
disk_tier: best
ports: 8081 # Expose to internet traffic.
```
Launching the above on RunPod:
```
⨯ Failed to provision resources. View logs: sky api logs -l sky-2025-05-16-08-45-14-060233/provision.log
sky.exceptions.ResourcesUnavailableError: Failed to provision all possible launchable resources. Relax the task's resource requirements: 1x RunPod(cpus=32+, {'H100': 8}, disk_tier=best, disk_size=512, ports=['8081'])
To keep retrying until the cluster is up, use the `--retry-until-up` flag.
Reasons for provision failures (for details, please check the log above):
Resource Reason
RunPod(8x_A100- Failed to acquire resources in all zones in CA for
80GB_SECURE, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,
{'A100-80GB': 8}, disk_size=512, ports=['8081']), RunPod(cpus=32+,
disk_tier=best, {'A100-80GB': 8}, disk_tier=best, disk_size=512,
disk_size=512, ports=['8081'])}.
ports=['8081'])
RunPod(8x_A100- Failed to acquire resources in all zones in CZ for
80GB_SECURE, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,
{'A100-80GB': 8}, disk_size=512, ports=['8081']), RunPod(cpus=32+,
disk_tier=best, {'A100-80GB': 8}, disk_tier=best, disk_size=512,
disk_size=512, ports=['8081'])}.
ports=['8081'])
RunPod(8x_A100- Failed to acquire resources in all zones in IS for
80GB_SECURE, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,
{'A100-80GB': 8}, disk_size=512, ports=['8081']), RunPod(cpus=32+,
disk_tier=best, {'A100-80GB': 8}, disk_tier=best, disk_size=512,
disk_size=512, ports=['8081'])}.
ports=['8081'])
RunPod(8x_A100- Failed to acquire resources in all zones in NL for
80GB_SECURE, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,
{'A100-80GB': 8}, disk_size=512, ports=['8081']), RunPod(cpus=32+,
disk_tier=best, {'A100-80GB': 8}, disk_tier=best, disk_size=512,
disk_size=512, ports=['8081'])}.
ports=['8081'])
RunPod(8x_A100- Failed to acquire resources in all zones in NO for
80GB_SECURE, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,
{'A100-80GB': 8}, disk_size=512, ports=['8081']), RunPod(cpus=32+,
disk_tier=best, {'A100-80GB': 8}, disk_tier=best, disk_size=512,
disk_size=512, ports=['8081'])}.
ports=['8081'])
RunPod(8x_A100- Failed to acquire resources in all zones in RO for
80GB_SECURE, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,
{'A100-80GB': 8}, disk_size=512, ports=['8081']), RunPod(cpus=32+,
disk_tier=best, {'A100-80GB': 8}, disk_tier=best, disk_size=512,
disk_size=512, ports=['8081'])}.
ports=['8081'])
RunPod(8x_A100- Failed to acquire resources in all zones in SE for
80GB_SECURE, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,
{'A100-80GB': 8}, disk_size=512, ports=['8081']), RunPod(cpus=32+,
disk_tier=best, {'A100-80GB': 8}, disk_tier=best, disk_size=512,
disk_size=512, ports=['8081'])}.
ports=['8081'])
RunPod(8x_A100- Failed to acquire resources in all zones in US for
80GB_SECURE, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,
{'A100-80GB': 8}, disk_size=512, ports=['8081']), RunPod(cpus=32+,
disk_tier=best, {'A100-80GB': 8}, disk_tier=best, disk_size=512,
disk_size=512, ports=['8081'])}.
ports=['8081'])
RunPod(8x_H100_SECURE, Failed to acquire resources in all zones in CA for
{'H100': 8}, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,
disk_tier=best, disk_size=512, ports=['8081']), RunPod(cpus=32+,
disk_size=512, {'A100-80GB': 8}, disk_tier=best, disk_size=512,
ports=['8081']) ports=['8081'])}.
RunPod(8x_H100_SECURE, Failed to acquire resources in all zones in CZ for
{'H100': 8}, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,
disk_tier=best, disk_size=512, ports=['8081']), RunPod(cpus=32+,
disk_size=512, {'A100-80GB': 8}, disk_tier=best, disk_size=512,
ports=['8081']) ports=['8081'])}.
RunPod(8x_H100_SECURE, Failed to acquire resources in all zones in IS for
{'H100': 8}, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,
disk_tier=best, disk_size=512, ports=['8081']), RunPod(cpus=32+,
disk_size=512, {'A100-80GB': 8}, disk_tier=best, disk_size=512,
ports=['8081']) ports=['8081'])}.
RunPod(8x_H100_SECURE, Failed to acquire resources in all zones in NL for
{'H100': 8}, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,
disk_tier=best, disk_size=512, ports=['8081']), RunPod(cpus=32+,
disk_size=512, {'A100-80GB': 8}, disk_tier=best, disk_size=512,
ports=['8081']) ports=['8081'])}.
RunPod(8x_H100_SECURE, Failed to acquire resources in all zones in NO for
{'H100': 8}, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,
disk_tier=best, disk_size=512, ports=['8081']), RunPod(cpus=32+,
disk_size=512, {'A100-80GB': 8}, disk_tier=best, disk_size=512,
ports=['8081']) ports=['8081'])}.
RunPod(8x_H100_SECURE, Failed to acquire resources in all zones in RO for
{'H100': 8}, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,
disk_tier=best, disk_size=512, ports=['8081']), RunPod(cpus=32+,
disk_size=512, {'A100-80GB': 8}, disk_tier=best, disk_size=512,
ports=['8081']) ports=['8081'])}.
RunPod(8x_H100_SECURE, Failed to acquire resources in all zones in SE for
{'H100': 8}, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,
disk_tier=best, disk_size=512, ports=['8081']), RunPod(cpus=32+,
disk_size=512, {'A100-80GB': 8}, disk_tier=best, disk_size=512,
ports=['8081']) ports=['8081'])}.
RunPod(8x_H100_SECURE, Failed to acquire resources in all zones in US for
{'H100': 8}, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,
disk_tier=best, disk_size=512, ports=['8081']), RunPod(cpus=32+,
disk_size=512, {'A100-80GB': 8}, disk_tier=best, disk_size=512,
ports=['8081']) ports=['8081'])}.
```
The table should have more human-readable widths.
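Purely as an illustration of what "human-readable widths" could mean in practice (this is not the actual SkyPilot table code): size the reason column from the real terminal width instead of a fixed narrow default, e.g.:

```python
import shutil
import textwrap


def format_failover_row(resource: str, reason: str, resource_col: int = 40) -> str:
    """Lay out one (Resource, Reason) row using the detected terminal width."""
    term_width = shutil.get_terminal_size(fallback=(120, 20)).columns
    reason_width = max(term_width - resource_col - 2, 40)
    left = textwrap.wrap(resource, resource_col) or ['']
    right = textwrap.wrap(reason, reason_width) or ['']
    lines = []
    for i in range(max(len(left), len(right))):
        left_part = left[i] if i < len(left) else ''
        right_part = right[i] if i < len(right) else ''
        lines.append(f'{left_part:<{resource_col}}  {right_part}')
    return '\n'.join(lines)
```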
|
[
{
"body": "```\nresources:\n accelerators: { H100:8, A100-80GB:8 }\n cpus: 32+\n disk_size: 512 # Ensure model checkpoints can fit.\n disk_tier: best\n ports: 8081 # Expose to internet traffic.\n```\nLaunching the above on RunPod:\n\n```\n⨯ Failed to provision resources. View logs: sky api logs -l sky-2025-05-16-08-45-14-060233/provision.log\nsky.exceptions.ResourcesUnavailableError: Failed to provision all possible launchable resources. Relax the task's resource requirements: 1x RunPod(cpus=32+, {'H100': 8}, disk_tier=best, disk_size=512, ports=['8081'])\nTo keep retrying until the cluster is up, use the `--retry-until-up` flag.\nReasons for provision failures (for details, please check the log above):\nResource Reason\nRunPod(8x_A100- Failed to acquire resources in all zones in CA for\n80GB_SECURE, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,\n{'A100-80GB': 8}, disk_size=512, ports=['8081']), RunPod(cpus=32+,\ndisk_tier=best, {'A100-80GB': 8}, disk_tier=best, disk_size=512,\ndisk_size=512, ports=['8081'])}.\nports=['8081'])\nRunPod(8x_A100- Failed to acquire resources in all zones in CZ for\n80GB_SECURE, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,\n{'A100-80GB': 8}, disk_size=512, ports=['8081']), RunPod(cpus=32+,\ndisk_tier=best, {'A100-80GB': 8}, disk_tier=best, disk_size=512,\ndisk_size=512, ports=['8081'])}.\nports=['8081'])\nRunPod(8x_A100- Failed to acquire resources in all zones in IS for\n80GB_SECURE, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,\n{'A100-80GB': 8}, disk_size=512, ports=['8081']), RunPod(cpus=32+,\ndisk_tier=best, {'A100-80GB': 8}, disk_tier=best, disk_size=512,\ndisk_size=512, ports=['8081'])}.\nports=['8081'])\nRunPod(8x_A100- Failed to acquire resources in all zones in NL for\n80GB_SECURE, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,\n{'A100-80GB': 8}, disk_size=512, ports=['8081']), RunPod(cpus=32+,\ndisk_tier=best, {'A100-80GB': 8}, disk_tier=best, disk_size=512,\ndisk_size=512, ports=['8081'])}.\nports=['8081'])\nRunPod(8x_A100- Failed to acquire resources in all zones in NO for\n80GB_SECURE, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,\n{'A100-80GB': 8}, disk_size=512, ports=['8081']), RunPod(cpus=32+,\ndisk_tier=best, {'A100-80GB': 8}, disk_tier=best, disk_size=512,\ndisk_size=512, ports=['8081'])}.\nports=['8081'])\nRunPod(8x_A100- Failed to acquire resources in all zones in RO for\n80GB_SECURE, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,\n{'A100-80GB': 8}, disk_size=512, ports=['8081']), RunPod(cpus=32+,\ndisk_tier=best, {'A100-80GB': 8}, disk_tier=best, disk_size=512,\ndisk_size=512, ports=['8081'])}.\nports=['8081'])\nRunPod(8x_A100- Failed to acquire resources in all zones in SE for\n80GB_SECURE, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,\n{'A100-80GB': 8}, disk_size=512, ports=['8081']), RunPod(cpus=32+,\ndisk_tier=best, {'A100-80GB': 8}, disk_tier=best, disk_size=512,\ndisk_size=512, ports=['8081'])}.\nports=['8081'])\nRunPod(8x_A100- Failed to acquire resources in all zones in US for\n80GB_SECURE, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,\n{'A100-80GB': 8}, disk_size=512, ports=['8081']), RunPod(cpus=32+,\ndisk_tier=best, {'A100-80GB': 8}, disk_tier=best, disk_size=512,\ndisk_size=512, ports=['8081'])}.\nports=['8081'])\nRunPod(8x_H100_SECURE, Failed to acquire resources in all zones in CA for\n{'H100': 8}, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,\ndisk_tier=best, disk_size=512, ports=['8081']), RunPod(cpus=32+,\ndisk_size=512, {'A100-80GB': 8}, disk_tier=best, disk_size=512,\nports=['8081']) 
ports=['8081'])}.\nRunPod(8x_H100_SECURE, Failed to acquire resources in all zones in CZ for\n{'H100': 8}, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,\ndisk_tier=best, disk_size=512, ports=['8081']), RunPod(cpus=32+,\ndisk_size=512, {'A100-80GB': 8}, disk_tier=best, disk_size=512,\nports=['8081']) ports=['8081'])}.\nRunPod(8x_H100_SECURE, Failed to acquire resources in all zones in IS for\n{'H100': 8}, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,\ndisk_tier=best, disk_size=512, ports=['8081']), RunPod(cpus=32+,\ndisk_size=512, {'A100-80GB': 8}, disk_tier=best, disk_size=512,\nports=['8081']) ports=['8081'])}.\nRunPod(8x_H100_SECURE, Failed to acquire resources in all zones in NL for\n{'H100': 8}, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,\ndisk_tier=best, disk_size=512, ports=['8081']), RunPod(cpus=32+,\ndisk_size=512, {'A100-80GB': 8}, disk_tier=best, disk_size=512,\nports=['8081']) ports=['8081'])}.\nRunPod(8x_H100_SECURE, Failed to acquire resources in all zones in NO for\n{'H100': 8}, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,\ndisk_tier=best, disk_size=512, ports=['8081']), RunPod(cpus=32+,\ndisk_size=512, {'A100-80GB': 8}, disk_tier=best, disk_size=512,\nports=['8081']) ports=['8081'])}.\nRunPod(8x_H100_SECURE, Failed to acquire resources in all zones in RO for\n{'H100': 8}, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,\ndisk_tier=best, disk_size=512, ports=['8081']), RunPod(cpus=32+,\ndisk_size=512, {'A100-80GB': 8}, disk_tier=best, disk_size=512,\nports=['8081']) ports=['8081'])}.\nRunPod(8x_H100_SECURE, Failed to acquire resources in all zones in SE for\n{'H100': 8}, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,\ndisk_tier=best, disk_size=512, ports=['8081']), RunPod(cpus=32+,\ndisk_size=512, {'A100-80GB': 8}, disk_tier=best, disk_size=512,\nports=['8081']) ports=['8081'])}.\nRunPod(8x_H100_SECURE, Failed to acquire resources in all zones in US for\n{'H100': 8}, {RunPod(cpus=32+, {'H100': 8}, disk_tier=best,\ndisk_tier=best, disk_size=512, ports=['8081']), RunPod(cpus=32+,\ndisk_size=512, {'A100-80GB': 8}, disk_tier=best, disk_size=512,\nports=['8081']) ports=['8081'])}.\n```\nThe table should have more human-readable widths.",
"number": 5600,
"title": "[UX] Failover reason table has too narrow width"
}
] |
6801ad4f316d6be31fe15b324b0c443495c5f1a2
|
{
"head_commit": "79d0b570d047a044f2a35d4cbc26a44aacc5b379",
"head_commit_message": "Add escape",
"patch_to_review": "diff --git a/docs/source/cloud-setup/cloud-permissions/aws.rst b/docs/source/cloud-setup/cloud-permissions/aws.rst\nindex 91e4eea2d5d..8a25b67756d 100644\n--- a/docs/source/cloud-setup/cloud-permissions/aws.rst\n+++ b/docs/source/cloud-setup/cloud-permissions/aws.rst\n@@ -90,10 +90,10 @@ Example of mixing the default profile and another profile:\n .. code-block:: console\n \n $ # A cluster launched under the default AWS identity.\n- $ sky launch --cloud aws -c default\n+ $ sky launch --infra aws -c default\n \n $ # A cluster launched under a different profile.\n- $ AWS_PROFILE=AdministratorAccess-12345 sky launch --cloud aws -c other-profile-cluster\n+ $ AWS_PROFILE=AdministratorAccess-12345 sky launch --infra aws -c other-profile-cluster\n \n If you are using a :ref:`remote API server <sky-api-server>`, the AWS credentials are configured on the remote server. Overriding ``AWS_PROFILE`` on the client side won't work.\n \ndiff --git a/docs/source/cloud-setup/cloud-permissions/gcp.rst b/docs/source/cloud-setup/cloud-permissions/gcp.rst\nindex 40e42ee6d78..f0a2bbb34be 100644\n--- a/docs/source/cloud-setup/cloud-permissions/gcp.rst\n+++ b/docs/source/cloud-setup/cloud-permissions/gcp.rst\n@@ -69,7 +69,7 @@ The easiest way to grant permissions to a user access your GCP project without t\n roles/iam.securityAdmin\n \n .. note::\n- If the ``roles/iam.securityAdmin`` role is undesirable, you can do the following. First, include the role and have any user (e.g., the admin) run ``sky launch --cloud gcp`` successfully once. This is to create the necessary service account. Then, replace the role ``roles/iam.securityAdmin`` with ``roles/iam.roleViewer`` in the list above.\n+ If the ``roles/iam.securityAdmin`` role is undesirable, you can do the following. First, include the role and have any user (e.g., the admin) run ``sky launch --infra gcp`` successfully once. This is to create the necessary service account. Then, replace the role ``roles/iam.securityAdmin`` with ``roles/iam.roleViewer`` in the list above.\n \n \n Optionally, to use TPUs, add the following role:\ndiff --git a/docs/source/cloud-setup/quota.rst b/docs/source/cloud-setup/quota.rst\nindex f30862b75fd..ce2e76ed327 100644\n--- a/docs/source/cloud-setup/quota.rst\n+++ b/docs/source/cloud-setup/quota.rst\n@@ -17,7 +17,7 @@ AWS\n \n 1. Go to the `EC2 Quotas console <https://console.aws.amazon.com/servicequotas/home/services/ec2/quotas>`_.\n 2. **Select a region** on the top right.\n-3. Choose an EC2 instance type from the list (e.g, ``Running On-Demand P instances`` or ``All P Spot Instance Requests``). Use ``sky show-gpus --cloud aws --all`` or check `here <https://aws.amazon.com/ec2/instance-types/>`__ for more instance types.\n+3. Choose an EC2 instance type from the list (e.g, ``Running On-Demand P instances`` or ``All P Spot Instance Requests``). Use ``sky show-gpus --infra aws --all`` or check `here <https://aws.amazon.com/ec2/instance-types/>`__ for more instance types.\n 4. Click the quota name, and then choose **Request quota increase**.\n 5. For **Change quota value**, enter the new value.\n 6. Choose **Request**.\n@@ -57,7 +57,7 @@ OCI\n 1. Go to the `OCI Limits, Quotas and Usage console <https://cloud.oracle.com/limits>`_ to check your current resources status.\n 2. Click the **request a service limit increase** link on the page if you want to increase quotas.\n 3. Choose a **Service Category** from the list (e.g, ``Compute``). \n-4. 
Choose a **Resource** from the list (e.g, ``GPUs for GPU.A10 based VM and BM Instances``). Use ``sky show-gpus --cloud oci --all`` or check `here <https://docs.oracle.com/en-us/iaas/Content/Compute/References/computeshapes.htm>`__ for more instance types.\n+4. Choose a **Resource** from the list (e.g, ``GPUs for GPU.A10 based VM and BM Instances``). Use ``sky show-gpus --infra oci --all`` or check `here <https://docs.oracle.com/en-us/iaas/Content/Compute/References/computeshapes.htm>`__ for more instance types.\n 5. Enter the **Limit** field for your new limit and **Reason for request** for justification.\n 6. Click **Create Support Request** to submit.\n 7. You may check `OCI Service Limits <https://docs.oracle.com/en-us/iaas/Content/General/Concepts/servicelimits.htm#computelimits>`_ for more information.\ndiff --git a/docs/source/compute/gpus.rst b/docs/source/compute/gpus.rst\nindex 4c30021b7ba..3fbcd583cbc 100644\n--- a/docs/source/compute/gpus.rst\n+++ b/docs/source/compute/gpus.rst\n@@ -26,7 +26,7 @@ You can query the accelerators available in your Kubernetes clusters with:\n \n .. code-block:: console\n \n- $ sky show-gpus --cloud k8s\n+ $ sky show-gpus --infra k8s\n \n \n .. code-block:: text\ndiff --git a/docs/source/examples/auto-failover.rst b/docs/source/examples/auto-failover.rst\nindex 596e9d2c415..b01988f01a7 100644\n--- a/docs/source/examples/auto-failover.rst\n+++ b/docs/source/examples/auto-failover.rst\n@@ -91,11 +91,11 @@ GCP, where it succeeded after one region:\n \n Considered resources (1 node):\n ----------------------------------------------------------------------------------------------------\n- CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN\n+ INFRA INSTANCE vCPUs Mem(GB) GPUS COST ($) CHOSEN\n ----------------------------------------------------------------------------------------------------\n- Azure Standard_ND96asr_v4 96 900 A100:8 eastus 27.20 ✔\n- GCP a2-highgpu-8g 96 680 A100:8 us-central1-a 29.39\n- AWS p4d.24xlarge 96 1152 A100:8 us-east-1 32.77\n+ Azure (eastus) Standard_ND96asr_v4 96 900 A100:8 27.20 ✔\n+ GCP (us-central1-a) a2-highgpu-8g 96 680 A100:8 29.39\n+ AWS (us-east-1) p4d.24xlarge 96 1152 A100:8 32.77\n ----------------------------------------------------------------------------------------------------\n Launching a new cluster 'a100-8'. Proceed? 
[Y/n]:\n \n@@ -135,11 +135,11 @@ A10, L4, and A10g GPUs, using :code:`sky launch task.yaml`.\n $ sky launch task.yaml\n ...\n -----------------------------------------------------------------------------------------------------\n- CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN\n+ INFRA INSTANCE vCPUs Mem(GB) GPUS COST ($) CHOSEN\n -----------------------------------------------------------------------------------------------------\n- Azure Standard_NV6ads_A10_v5 6 55 A10:1 eastus 0.45 ✔\n- GCP g2-standard-4 4 16 L4:1 us-east4-a 0.70\n- AWS g5.xlarge 4 16 A10G:1 us-east-1 1.01\n+ Azure (eastus) Standard_NV6ads_A10_v5 6 55 A10:1 0.45 ✔\n+ GCP (us-east4-a) g2-standard-4 4 16 L4:1 0.70\n+ AWS (us-east-1) g5.xlarge 4 16 A10G:1 1.01\n -----------------------------------------------------------------------------------------------------\n \n \n@@ -165,11 +165,10 @@ If a task would like to specify multiple candidate resources (not only GPUs), th\n \n resources:\n ordered: # Candidate resources in a preference order\n- - cloud: gcp\n+ - infra: gcp\n accelerators: A100-80GB\n - instance_type: g5.xlarge\n- - cloud: azure\n- region: eastus\n+ - infra: azure/eastus\n accelerators: A100\n \n \n@@ -178,11 +177,10 @@ If a task would like to specify multiple candidate resources (not only GPUs), th\n \n resources:\n any_of: # Candidate resources that can be chosen in any order\n- - cloud: gcp\n+ - infra: gcp\n accelerators: A100-80GB\n - instance_type: g5.xlarge\n- - cloud: azure\n- region: eastus\n+ - infra: azure/eastus\n accelerators: A100\n \n .. tip::\n@@ -198,18 +196,18 @@ If a task would like to specify multiple candidate resources (not only GPUs), th\n accelerators: {A10g:8, A10:8, L4:8, A100:8}\n any_of:\n # AWS:\n- - region: us-east-1\n- - region: us-east-2\n- - region: us-west-1\n- - region: us-west-2\n+ - infra: aws/us-east-1\n+ - infra: aws/us-east-2\n+ - infra: aws/us-west-1\n+ - infra: aws/us-west-2\n # GCP\n- - region: us-central1\n- - region: us-east1\n- - region: us-east4\n- - region: us-west1\n- - region: us-west2\n- - region: us-west3\n- - region: us-west4\n+ - infra: gcp/us-central1\n+ - infra: gcp/us-east1\n+ - infra: gcp/us-east4\n+ - infra: gcp/us-west1\n+ - infra: gcp/us-west2\n+ - infra: gcp/us-west3\n+ - infra: gcp/us-west4\n \n .. hint::\n \n@@ -224,12 +222,12 @@ This will generate the following output:\n \n Considered resources (1 node):\n ---------------------------------------------------------------------------------------------\n- CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN\n+ INFRA INSTANCE vCPUs Mem(GB) GPUS COST ($) CHOSEN\n ---------------------------------------------------------------------------------------------\n- GCP g2-standard-96 96 384 L4:8 us-east4-a 7.98 ✔\n- AWS g5.48xlarge 192 768 A10G:8 us-east-1 16.29\n- GCP a2-highgpu-8g 96 680 A100:8 us-east1-b 29.39\n- AWS p4d.24xlarge 96 1152 A100:8 us-east-1 32.77\n+ GCP (us-east4-a) g2-standard-96 96 384 L4:8 7.98 ✔\n+ AWS (us-east-1) g5.48xlarge 192 768 A10G:8 16.29\n+ GCP (us-east1-b) a2-highgpu-8g 96 680 A100:8 29.39\n+ AWS (us-east-1) p4d.24xlarge 96 1152 A100:8 32.77\n ---------------------------------------------------------------------------------------------\n \n Launching a new cluster 'mycluster'. Proceed? 
[Y/n]:\ndiff --git a/docs/source/examples/managed-jobs.rst b/docs/source/examples/managed-jobs.rst\nindex 5d033fbd3f7..2365a88203d 100644\n--- a/docs/source/examples/managed-jobs.rst\n+++ b/docs/source/examples/managed-jobs.rst\n@@ -19,9 +19,9 @@ To start a managed job, use :code:`sky jobs launch`:\n Managed job 'myjob' will be launched on (estimated):\n Considered resources (1 node):\n ------------------------------------------------------------------------------------------\n- CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN\n+ INFRA INSTANCE vCPUs Mem(GB) GPUS COST ($) CHOSEN\n ------------------------------------------------------------------------------------------\n- AWS m6i.2xlarge 8 32 - us-east-1 0.38 ✔\n+ AWS (us-east-1) m6i.2xlarge 8 32 - 0.38 ✔\n ------------------------------------------------------------------------------------------\n Launching a managed job 'myjob'. Proceed? [Y/n]: Y\n ... <job is submitted and launched>\n@@ -446,7 +446,7 @@ To submit the pipeline, the same command :code:`sky jobs launch` is used. The pi\n Fetching managed job statuses...\n Managed jobs\n In progress jobs: 1 RECOVERING\n- ID TASK NAME RESOURCES SUBMITTED TOT. DURATION JOB DURATION #RECOVERIES STATUS\n+ ID TASK NAME REQUESTED SUBMITTED TOT. DURATION JOB DURATION #RECOVERIES STATUS\n 8 pipeline - 50 mins ago 47m 45s - 1 RECOVERING\n ↳ 0 train 1x [V100:8][Spot|On-demand] 50 mins ago 47m 45s - 1 RECOVERING\n ↳ 1 eval 1x [T4:1] - - - 0 PENDING\n@@ -560,8 +560,7 @@ To achieve the above, you can specify custom configs in :code:`~/.sky/config.yam\n resources:\n # All configs below are optional.\n # Specify the location of the jobs controller.\n- cloud: gcp\n- region: us-central1\n+ infra: gcp/us-central1\n # Bump cpus to allow more managed jobs to be launched concurrently. 
(Default: 4+)\n cpus: 8+\n # Bump memory to allow more managed jobs to be running at once.\n@@ -584,10 +583,10 @@ To see your current jobs controller, use :code:`sky status`.\n $ sky status --refresh\n \n Clusters\n- NAME LAUNCHED RESOURCES STATUS AUTOSTOP COMMAND\n- my-cluster-1 1 week ago 1x AWS(m6i.4xlarge) STOPPED - sky launch --cpus 16 --cloud...\n- my-other-cluster 1 week ago 1x GCP(n2-standard-16) STOPPED - sky launch --cloud gcp --...\n- sky-jobs-controller-919df126 1 day ago 1x AWS(r6i.xlarge, disk_size=50) STOPPED 10m sky jobs launch --cpus 2 ...\n+ NAME INFRA RESOURCES STATUS AUTOSTOP LAUNCHED \n+ my-cluster-1 AWS (us-east-1) 1x(cpus=16, type=m6i.4xlarge) STOPPED - 1 week ago \n+ my-other-cluster GCP (us-central1) 1x(cpus=16, type=n2-standard-16) STOPPED - 1 week ago \n+ sky-jobs-controller-919df126 AWS (us-east-1) 1x(cpus=2, type=r6i.xlarge, disk_size=50) STOPPED 10m 1 day ago \n \n Managed jobs\n No in-progress managed jobs.\n@@ -642,7 +641,7 @@ For maximum parallelism, the following configuration is recommended:\n controller:\n resources:\n # In our testing, aws > gcp > azure\n- cloud: aws\n+ infra: aws\n cpus: 128\n # Azure does not have 128+ CPU instances, so use 96 instead\n # cpus: 96\ndiff --git a/docs/source/getting-started/installation.rst b/docs/source/getting-started/installation.rst\nindex c14c1718a48..58a34e8f787 100644\n--- a/docs/source/getting-started/installation.rst\n+++ b/docs/source/getting-started/installation.rst\n@@ -21,7 +21,7 @@ Install SkyPilot using pip:\n conda create -y -n sky python=3.10\n conda activate sky\n \n- # Choose your cloud:\n+ # Choose your infra:\n \n pip install \"skypilot[kubernetes]\"\n pip install \"skypilot[aws]\"\n@@ -50,7 +50,7 @@ Install SkyPilot using pip:\n conda create -y -n sky python=3.10\n conda activate sky\n \n- # Choose your cloud:\n+ # Choose your infra:\n \n pip install \"skypilot-nightly[kubernetes]\"\n pip install \"skypilot-nightly[aws]\"\n@@ -83,7 +83,7 @@ Install SkyPilot using pip:\n git clone https://github.com/skypilot-org/skypilot.git\n cd skypilot\n \n- # Choose your cloud:\n+ # Choose your infra:\n \n pip install -e \".[kubernetes]\"\n pip install -e \".[aws]\"\ndiff --git a/docs/source/getting-started/quickstart.rst b/docs/source/getting-started/quickstart.rst\nindex 2a7fecf7709..e8e54b88c80 100644\n--- a/docs/source/getting-started/quickstart.rst\n+++ b/docs/source/getting-started/quickstart.rst\n@@ -32,7 +32,7 @@ Copy the following YAML into a ``hello_sky.yaml`` file:\n \n resources:\n # Optional; if left out, automatically pick the cheapest cloud.\n- cloud: aws\n+ infra: aws\n # 8x NVIDIA A100 GPU\n accelerators: A100:8\n \n@@ -126,9 +126,9 @@ This may show multiple clusters, if you have created several:\n \n .. code-block::\n \n- NAME LAUNCHED RESOURCES COMMAND STATUS\n- mygcp 1 day ago 1x GCP(n1-highmem-8) sky launch -c mygcp --cloud gcp STOPPED\n- mycluster 4 mins ago 1x AWS(p4d.24xlarge, {'A100': 8}) sky exec mycluster hello_sky.yaml UP\n+ NAME INFRA RESOURCES STATUS AUTOSTOP LAUNCHED \n+ mygcp GCP (us-central1-a) 1x(cpus=4, mem=16, type=n2-standard-4, ...) STOPPED - 1 day ago \n+ mycluster AWS (us-east-1) 1x(gpus=A100:8, type=p4d.24xlarge, ...) 
UP - 4 mins ago \n \n See here for a list of all possible :ref:`cluster states <sky-status>`.\n \ndiff --git a/docs/source/reference/api-server/api-server.rst b/docs/source/reference/api-server/api-server.rst\nindex 42fe4b43b31..b8a4fbe7021 100644\n--- a/docs/source/reference/api-server/api-server.rst\n+++ b/docs/source/reference/api-server/api-server.rst\n@@ -128,16 +128,16 @@ To see other users' clusters and the job/serve controllers, use the ``-u`` flag.\n \n $ sky status -u\n Clusters\n- NAME USER LAUNCHED RESOURCES STATUS AUTOSTOP COMMAND\n- my-cluster-2 my-user 2 hrs ago 1x GCP(n2-standard-8) STOPPED - sky launch task-2.yaml\n- other-cluster other-user 1 week ago 1x AWS(m6i.16xlarge) UP - sky launch --cloud aws...\n- my-cluster-1 my-user 2 months ago 1x AWS(m6i.4xlarge) STOPPED - sky launch task-1.yaml\n- sky-jobs-controller-7c3d4ff7 root 2 days ago 1x AWS(r6i.xlarge, disk_size=50) STOPPED 10m sky jobs launch --env PART...\n+ NAME USER LAUNCHED INFRA RESOURCES STATUS AUTOSTOP\n+ my-cluster-2 my-user 2 hrs ago GCP (us-central1-a) 1x(cpus=8, mem=32, type=n2-standard-8, ...) STOPPED - \n+ other-cluster other-user 1 week ago AWS (us-east-1) 1x(cpus=64, mem=256, type=m6i.16xlarge, ...) UP - \n+ my-cluster-1 my-user 2 months ago AWS (us-east-1) 1x(cpus=16, mem=64, type=m6i.4xlarge, ...) STOPPED - \n+ sky-jobs-controller-7c3d4ff7 root 2 days ago AWS (us-east-1) 1x(cpus=4, mem=32, type=r6i.xlarge, ...) STOPPED 10m \n \n $ sky jobs queue -u\n Fetching managed job statuses...\n Managed jobs\n- ID TASK NAME USER RESOURCES SUBMITTED TOT. DURATION JOB DURATION #RECOVERIES STATUS\n+ ID TASK NAME USER REQUESTED SUBMITTED TOT. DURATION JOB DURATION #RECOVERIES STATUS\n 3 - job-2 my-user 1x[CPU:2] 2 days ago 2m 10s 1m 14s 0 CANCELLED\n 2 - other-job other-user 1x[CPU:2] 2 days ago 11m 54s 10m 52s 0 CANCELLED\n 1 - job-1 my-use 1x[CPU:2] 5 days ago 1m 7s 3s 0 SUCCEEDED\ndiff --git a/docs/source/reference/async.rst b/docs/source/reference/async.rst\nindex 32316bfbfe1..1ce2e6015ae 100644\n--- a/docs/source/reference/async.rst\n+++ b/docs/source/reference/async.rst\n@@ -38,10 +38,10 @@ For example, when a user runs ``sky launch -c my-cluster``, the following output\n $ sky launch -c my-cluster --cpus 2\n Considered resources (1 node):\n ---------------------------------------------------------------------------------------------\n- CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN\n+ INFRA INSTANCE vCPUs Mem(GB) GPUS COST ($) CHOSEN\n ---------------------------------------------------------------------------------------------\n- Kubernetes 2CPU--2GB 2 2 - in-cluster 0.00 ✔\n- AWS m6i.large 2 8 - us-east-1 0.10\n+ Kubernetes (my-cluster) 2CPU--2GB 2 2 - 0.00 ✔\n+ AWS (us-east-1) m6i.large 2 8 - 0.098 \n ---------------------------------------------------------------------------------------------\n Launching a new cluster 'my-cluster'. Proceed? [Y/n]:\n ⚙︎ Launching on Kubernetes.\ndiff --git a/docs/source/reference/auto-stop.rst b/docs/source/reference/auto-stop.rst\nindex f034c520d3b..9f5778aa82a 100644\n--- a/docs/source/reference/auto-stop.rst\n+++ b/docs/source/reference/auto-stop.rst\n@@ -58,15 +58,15 @@ To view the status of the cluster, use ``sky status [--refresh]``:\n .. 
code-block:: bash\n \n $ sky status\n- NAME LAUNCHED RESOURCES STATUS AUTOSTOP COMMAND\n- mycluster 1 min ago 2x AWS(m4.2xlarge) UP 10 min sky launch -d -c ...\n- mycluster2 1 min ago 2x AWS(m4.2xlarge) UP 10 min(down) sky launch -d -c ...\n+ NAME INFRA RESOURCES STATUS AUTOSTOP LAUNCHED\n+ mycluster AWS (us-east-1) 2x(cpus=8, type=m4.2xlarge) UP 10 min 1 min ago\n+ mycluster2 AWS (us-east-1) 2x(cpus=8, type=m4.2xlarge) UP 10 min(down) 1 min ago\n \n # Refresh the statuses by querying the cloud providers\n $ sky status --refresh\n I 06-27 13:36:11 backend_utils.py:2273] Autodowned cluster: mycluster2\n- NAME LAUNCHED RESOURCES STATUS AUTOSTOP COMMAND\n- mycluster 11 min ago 2x AWS(m4.2xlarge) STOPPED 10 min sky launch -d -c ...\n+ NAME INFRA RESOURCES STATUS AUTOSTOP LAUNCHED\n+ mycluster AWS (us-east-1) 2x(cpus=8, type=m4.2xlarge) STOPPED 10 min 11 min ago\n \n Note that :code:`sky status` shows the cached statuses, which can be outdated for clusters with autostop/autodown scheduled.\n To query the latest statuses of those clusters, use :code:`sky status --refresh`.\ndiff --git a/docs/source/reference/config.rst b/docs/source/reference/config.rst\nindex 58bb7b66094..18460060836 100644\n--- a/docs/source/reference/config.rst\n+++ b/docs/source/reference/config.rst\n@@ -37,8 +37,7 @@ Below is the configuration syntax and some example values. See detailed explanat\n :ref:`bucket <config-yaml-jobs-bucket>`: s3://my-bucket/\n controller:\n :ref:`resources <config-yaml-jobs-controller-resources>`: # same spec as 'resources' in a task YAML\n- cloud: gcp\n- region: us-central1\n+ infra: gcp/us-central1\n cpus: 4+ # number of vCPUs, max concurrent spot jobs = 2 * cpus\n disk_size: 100\n :ref:`autostop <config-yaml-jobs-controller-autostop>`:\n@@ -214,8 +213,7 @@ Example:\n controller:\n resources: # same spec as 'resources' in a task YAML\n # optionally set specific cloud/region\n- cloud: gcp\n- region: us-central1\n+ infra: gcp/us-central1\n # default resources:\n cpus: 4+\n memory: 8x\ndiff --git a/docs/source/reference/kubernetes/kubernetes-deployment.rst b/docs/source/reference/kubernetes/kubernetes-deployment.rst\nindex 3324999007f..d42826e5b63 100644\n--- a/docs/source/reference/kubernetes/kubernetes-deployment.rst\n+++ b/docs/source/reference/kubernetes/kubernetes-deployment.rst\n@@ -143,11 +143,11 @@ Deploying on Google Cloud GKE\n \n $ sky check\n \n-5. [If using GPUs] Check available GPUs in the kubernetes cluster with :code:`sky show-gpus --cloud k8s`\n+5. [If using GPUs] Check available GPUs in the kubernetes cluster with :code:`sky show-gpus --infra k8s`\n \n .. code-block:: console\n \n- $ sky show-gpus --cloud k8s\n+ $ sky show-gpus --infra k8s\n GPU REQUESTABLE_QTY_PER_NODE UTILIZATION\n L4 1, 2, 4 6 of 8 free\n A100 1, 2 2 of 4 free\n@@ -198,11 +198,11 @@ Deploying on Amazon EKS\n \n $ sky check\n \n-5. [If using GPUs] Check available GPUs in the kubernetes cluster with :code:`sky show-gpus --cloud k8s`\n+5. [If using GPUs] Check available GPUs in the kubernetes cluster with :code:`sky show-gpus --infra k8s`\n \n .. 
code-block:: console\n \n- $ sky show-gpus --cloud k8s\n+ $ sky show-gpus --infra k8s\n GPU REQUESTABLE_QTY_PER_NODE UTILIZATION\n A100 1, 2 2 of 2 free\n \ndiff --git a/docs/source/reference/kubernetes/kubernetes-getting-started.rst b/docs/source/reference/kubernetes/kubernetes-getting-started.rst\nindex b6ef3fba103..d6da95398f1 100644\n--- a/docs/source/reference/kubernetes/kubernetes-getting-started.rst\n+++ b/docs/source/reference/kubernetes/kubernetes-getting-started.rst\n@@ -111,15 +111,15 @@ Once your cluster administrator has :ref:`setup a Kubernetes cluster <kubernetes\n \n Considered resources (1 node):\n ---------------------------------------------------------------------------------------------------\n- CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN\n+ INFRA INSTANCE vCPUs Mem(GB) GPUS COST ($) CHOSEN\n ---------------------------------------------------------------------------------------------------\n- Kubernetes 2CPU--2GB 2 2 - kubernetes 0.00 ✔\n- AWS m6i.large 2 8 - us-east-1 0.10\n- Azure Standard_D2s_v5 2 8 - eastus 0.10\n- GCP n2-standard-2 2 8 - us-central1 0.10\n- IBM bx2-8x32 8 32 - us-east 0.38\n- Lambda gpu_1x_a10 30 200 A10:1 us-east-1 0.60\n- ---------------------------------------------------------------------------------------------------.\n+ Kubernetes (kind-skypilot) - 2 2 - 0.00 ✔\n+ AWS (us-east-1) m6i.large 2 8 - 0.10\n+ Azure (eastus) Standard_D2s_v5 2 8 - 0.10\n+ GCP (us-central1-a) n2-standard-2 2 8 - 0.10\n+ IBM (us-east) bx2-8x32 8 32 - 0.38\n+ Lambda (us-east-1) gpu_1x_a10 30 200 A10:1 0.60\n+ ----------------------------------------------------------------------------------------------------\n \n \n .. note::\n@@ -152,28 +152,28 @@ Unlike :code:`sky status` which lists only the SkyPilot resources launched by th\n $ sky status --k8s\n Kubernetes cluster state (context: mycluster)\n SkyPilot clusters\n- USER NAME LAUNCHED RESOURCES STATUS\n- alice infer-svc-1 23 hrs ago 1x Kubernetes(cpus=1, mem=1, {'L4': 1}) UP\n- alice sky-jobs-controller-80b50983 2 days ago 1x Kubernetes(cpus=4, mem=4) UP\n- alice sky-serve-controller-80b50983 23 hrs ago 1x Kubernetes(cpus=4, mem=4) UP\n- bob dev 1 day ago 1x Kubernetes(cpus=2, mem=8, {'H100': 1}) UP\n- bob multinode-dev 1 day ago 2x Kubernetes(cpus=2, mem=2) UP\n- bob sky-jobs-controller-2ea485ea 2 days ago 1x Kubernetes(cpus=4, mem=4) UP\n+ USER NAME LAUNCHED INFRA RESOURCES STATUS\n+ alice infer-svc-1 23 hrs ago Kubernetes 1x(cpus=1, mem=1, L4:1) UP\n+ alice sky-jobs-controller-80b50983 2 days ago Kubernetes 1x(cpus=4, mem=4) UP\n+ alice sky-serve-controller-80b50983 23 hrs ago Kubernetes 1x(cpus=4, mem=4) UP\n+ bob dev 1 day ago Kubernetes 1x(cpus=2, mem=8, H100:1) UP\n+ bob multinode-dev 1 day ago Kubernetes 2x(cpus=2, mem=2) UP\n+ bob sky-jobs-controller-2ea485ea 2 days ago Kubernetes 1x(cpus=4, mem=4) UP\n \n Managed jobs\n In progress tasks: 1 STARTING\n- USER ID TASK NAME RESOURCES SUBMITTED TOT. DURATION JOB DURATION #RECOVERIES STATUS\n+ USER ID TASK NAME REQUESTED SUBMITTED TOT. 
DURATION JOB DURATION #RECOVERIES STATUS\n alice 1 - eval 1x[CPU:1+] 2 days ago 49s 8s 0 SUCCEEDED\n bob 4 - pretrain 1x[H100:4] 1 day ago 1h 1m 11s 1h 14s 0 SUCCEEDED\n bob 3 - bigjob 1x[CPU:16] 1 day ago 1d 21h 11m 4s - 0 STARTING\n bob 2 - failjob 1x[CPU:1+] 1 day ago 54s 9s 0 FAILED\n bob 1 - shortjob 1x[CPU:1+] 2 days ago 1h 1m 19s 1h 16s 0 SUCCEEDED\n \n-You can also inspect the real-time GPU usage on the cluster with :code:`sky show-gpus --cloud k8s`.\n+You can also inspect the real-time GPU usage on the cluster with :code:`sky show-gpus --infra k8s`.\n \n .. code-block:: console\n \n- $ sky show-gpus --cloud k8s\n+ $ sky show-gpus --infra k8s\n Kubernetes GPUs\n GPU REQUESTABLE_QTY_PER_NODE UTILIZATION\n L4 1, 2, 4 12 of 12 free\ndiff --git a/docs/source/reference/kubernetes/kubernetes-priorities.rst b/docs/source/reference/kubernetes/kubernetes-priorities.rst\nindex ed8cec474ea..f7911ed9a1b 100644\n--- a/docs/source/reference/kubernetes/kubernetes-priorities.rst\n+++ b/docs/source/reference/kubernetes/kubernetes-priorities.rst\n@@ -70,7 +70,7 @@ We use two simple counter jobs in this example:\n \n # high-priority-job.yaml\n resources:\n- cloud: kubernetes\n+ infra: kubernetes\n cpus: 4\n \n run: |\n@@ -91,7 +91,7 @@ We use two simple counter jobs in this example:\n \n # low-priority-job.yaml\n resources:\n- cloud: kubernetes\n+ infra: kubernetes\n cpus: 4\n \n run: |\ndiff --git a/docs/source/reference/kubernetes/kubernetes-setup.rst b/docs/source/reference/kubernetes/kubernetes-setup.rst\nindex 2fa3e80f119..e6b9ef101a3 100644\n--- a/docs/source/reference/kubernetes/kubernetes-setup.rst\n+++ b/docs/source/reference/kubernetes/kubernetes-setup.rst\n@@ -217,7 +217,7 @@ You can also check the GPUs available on your nodes by running:\n \n .. code-block:: console\n \n- $ sky show-gpus --cloud k8s\n+ $ sky show-gpus --infra k8s\n Kubernetes GPUs\n GPU REQUESTABLE_QTY_PER_NODE UTILIZATION\n L4 1, 2, 4 12 of 12 free\ndiff --git a/docs/source/reference/kubernetes/kubernetes-troubleshooting.rst b/docs/source/reference/kubernetes/kubernetes-troubleshooting.rst\nindex 3abb8e42076..41ac13be372 100644\n--- a/docs/source/reference/kubernetes/kubernetes-troubleshooting.rst\n+++ b/docs/source/reference/kubernetes/kubernetes-troubleshooting.rst\n@@ -87,7 +87,7 @@ Next, try running a simple hello world task to verify that SkyPilot can launch t\n \n .. code-block:: bash\n \n- $ sky launch -y -c mycluster --cloud k8s -- \"echo hello world\"\n+ $ sky launch -y -c mycluster --infra k8s -- \"echo hello world\"\n # Task should run and print \"hello world\" to the console\n \n # Once you have verified that the task runs, you can delete it\n@@ -174,7 +174,7 @@ Run :code:`sky check` to verify that SkyPilot can see your GPUs.\n # Should show `Kubernetes: Enabled` and should not print any warnings about GPU support.\n \n # List the available GPUs in your cluster\n- $ sky show-gpus --cloud k8s\n+ $ sky show-gpus --infra k8s\n \n Step B4 - Try launching a dummy GPU task\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n@@ -184,7 +184,7 @@ Next, try running a simple GPU task to verify that SkyPilot can launch GPU tasks\n .. 
code-block:: bash\n \n # Replace the GPU type from the sky show-gpus output in the task launch command\n- $ sky launch -y -c mygpucluster --cloud k8s --gpu <gpu-type>:1 -- \"nvidia-smi\"\n+ $ sky launch -y -c mygpucluster --infra k8s --gpu <gpu-type>:1 -- \"nvidia-smi\"\n \n # Task should run and print the nvidia-smi output to the console\n \n@@ -298,7 +298,7 @@ Next, try running a simple task with a service to verify that SkyPilot can launc\n \n .. code-block:: bash\n \n- $ sky launch -y -c myserver --cloud k8s --ports 8080 -- \"python -m http.server 8080\"\n+ $ sky launch -y -c myserver --infra k8s --ports 8080 -- \"python -m http.server 8080\"\n \n # Obtain the endpoint of the service\n $ sky status --endpoint 8080 myserver\ndiff --git a/docs/source/reference/kubernetes/multi-kubernetes.rst b/docs/source/reference/kubernetes/multi-kubernetes.rst\nindex 3d70cebb253..acbf2cde6f6 100644\n--- a/docs/source/reference/kubernetes/multi-kubernetes.rst\n+++ b/docs/source/reference/kubernetes/multi-kubernetes.rst\n@@ -96,11 +96,11 @@ To check the enabled Kubernetes clusters, you can run ``sky check k8s``.\n ├── my-h100-cluster\n └── my-tpu-cluster\n \n-To check GPUs available in a Kubernetes cluster, you can run ``sky show-gpus --cloud k8s``.\n+To check GPUs available in a Kubernetes cluster, you can run ``sky show-gpus --infra k8s``.\n \n .. code-block:: console\n \n- $ sky show-gpus --cloud k8s\n+ $ sky show-gpus --infra k8s\n Kubernetes GPUs\n GPU UTILIZATION\n H100 16 of 16 free \n@@ -128,31 +128,33 @@ through the Kubernetes clusters in the same order as they are specified in the f\n \n .. code-block:: console\n \n- $ sky launch --gpus H100 --cloud k8s echo 'Hello World'\n+ $ sky launch --gpus H100 --infra k8s echo 'Hello World'\n \n Considered resources (1 node):\n- ------------------------------------------------------------------------------------------------------------\n- CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN\n- ------------------------------------------------------------------------------------------------------------\n- Kubernetes 2CPU--8GB--1H100 2 8 H100:1 my-h100-cluster-gke 0.00 ✔\n- Kubernetes 2CPU--8GB--1H100 2 8 H100:1 my-h100-cluster-eks 0.00\n- ------------------------------------------------------------------------------------------------------------\n+ ---------------------------------------------------------------------------------------------------------\n+ INFRA INSTANCE vCPUs Mem(GB) GPUS COST ($) CHOSEN\n+ ---------------------------------------------------------------------------------------------------------\n+ Kubernetes (my-eks-cluster) 2CPU--2GB 2 2 - 0.00 ✔\n+ Kubernetes (gke-skypilot) 4CPU--8GB 4 8 - 0.00 \n+ AWS (us-east-1) m6i.large 2 8 - 0.10 \n+ GCP (us-central1-a) n2-standard-2 2 8 - 0.10 \n+ ---------------------------------------------------------------------------------------------------------\n \n \n Launching in a specific Kubernetes cluster\n ------------------------------------------\n \n-SkyPilot uses the ``region`` field to denote a Kubernetes context. You can point to a Kubernetes cluster\n-by specifying the ``--region`` with the context name for that cluster.\n+SkyPilot uses the ``infra`` field to denote a Kubernetes context. You can point to a Kubernetes cluster\n+by specifying the ``--infra`` with the context name for that cluster.\n \n .. 
code-block:: console\n \n \n $ # Launch in a specific Kubernetes cluster.\n- $ sky launch --cloud k8s --region my-tpu-cluster echo 'Hello World'\n+ $ sky launch --infra k8s/my-tpu-cluster echo 'Hello World'\n \n $ # Check the GPUs available in a Kubernetes cluster\n- $ sky show-gpus --cloud k8s --region my-h100-cluster ✭ ✱\n+ $ sky show-gpus --infra k8s/my-h100-cluster\n Kubernetes GPUs\n Context: my-h100-cluster\n GPU REQUESTABLE_QTY_PER_NODE UTILIZATION\n@@ -163,7 +165,7 @@ by specifying the ``--region`` with the context name for that cluster.\n my-h100-cluster gke-skypilotalpha-largecpu-05dae726-1usy H100 8 of 8 free \n my-h100-cluster gke-skypilotalpha-largecpu-05dae726-4rxa H100 8 of 8 free \n \n-When launching a SkyPilot cluster or task, you can also specify the context name with ``--region`` to launch the cluster or task in.\n+When launching a SkyPilot cluster or task, you can also specify the context name with ``--infra`` to launch the cluster or task in.\n \n \n Dynamically updating clusters to use\ndiff --git a/docs/source/reference/yaml-spec.rst b/docs/source/reference/yaml-spec.rst\nindex 9c49cec3915..b800227aca8 100644\n--- a/docs/source/reference/yaml-spec.rst\n+++ b/docs/source/reference/yaml-spec.rst\n@@ -26,10 +26,9 @@ Below is the configuration syntax and some example values. See details under ea\n :ref:`num_nodes <yaml-spec-num-nodes>`: 4\n \n :ref:`resources <yaml-spec-resources>`:\n- # Location.\n- :ref:`cloud <yaml-spec-resources-cloud>`: aws\n- :ref:`region <yaml-spec-resources-region>`: us-east-1\n- :ref:`zone <yaml-spec-resources-zone>`: us-east-1a\n+ # Infra to use. For example: ``aws``, ``aws/us-east-1``, ``kubernetes``,\n+ # or, ``kubernetes/my-h100-cluster-context``.\n+ :ref:`infra <yaml-spec-resources-infra>`: aws\n \n # Hardware.\n :ref:`accelerators <yaml-spec-resources-accelerators>`: H100:8\n@@ -49,17 +48,14 @@ Below is the configuration syntax and some example values. See details under ea\n my-label: my-value\n \n :ref:`any_of <yaml-spec-resources-any-of>`:\n- - cloud: aws\n- region: us-west-2\n+ - infra: aws/us-west-2\n accelerators: H100\n- - cloud: gcp\n+ - infra: gcp/us-central1\n accelerators: H100\n \n :ref:`ordered <yaml-spec-resources-ordered>`:\n- - cloud: aws\n- region: us-east-1\n- - cloud: aws\n- region: us-west-2\n+ - infra: aws/us-east-1\n+ - infra: aws/us-west-2\n \n :ref:`job_recovery <yaml-spec-resources-job-recovery>`: none\n \n@@ -151,58 +147,50 @@ Per-node resource requirements (optional).\n .. code-block:: yaml\n \n resources:\n- cloud: aws\n+ infra: aws\n instance_type: p3.8xlarge\n \n \n-.. _yaml-spec-resources-cloud:\n+.. _yaml-spec-resources-infra:\n \n-``resources.cloud``\n+``resources.infra``\n ~~~~~~~~~~~~~~~~~~~\n \n-The cloud to use (optional).\n \n-.. code-block:: yaml\n-\n- resources:\n- cloud: aws\n+Infrastructure to use (optional). Format: ``<cloud>``, ``<cloud>/<region>``, ``<cloud>/<region>/<zone>``, ``kubernetes/<context-name>``.\n \n-OR\n+Examples: ``aws``, ``aws/us-east-1``, ``aws/us-east-1/us-east-1a``, ``aws/*/us-east-1a``, ``kubernetes/my-cluster-context``.\n \n .. code-block:: yaml\n \n resources:\n- cloud: gcp\n+ infra: aws\n \n \n-.. _yaml-spec-resources-region:\n-\n-``resources.region``\n-~~~~~~~~~~~~~~~~~~~~\n+.. code-block:: yaml\n \n-The region to use (optional).\n+ resources:\n+ infra: kubernetes\n \n-Auto-failover will be disabled if this is specified.\n+You can also specify a specific region, zone or kubernetes context.\n \n .. 
code-block:: yaml\n \n resources:\n- region: us-east-1\n+ infra: aws/us-east-1\n \n \n-.. _yaml-spec-resources-zone:\n-\n-``resources.zone``\n-~~~~~~~~~~~~~~~~~~\n+.. code-block:: yaml\n \n-The zone to use (optional).\n+ resources:\n+ infra: aws/us-east-1/us-east-1a\n \n-Auto-failover will be disabled if this is specified.\n \n .. code-block:: yaml\n \n resources:\n- zone: us-east-1a\n+ infra: kubernetes/my-h100-cluster-context\n+\n \n \n .. _yaml-spec-resources-accelerators:\n@@ -658,12 +646,10 @@ Example:\n .. code-block:: yaml\n \n resources:\n+ accelerators: H100\n any_of:\n- - cloud: aws\n- region: us-west-2\n- accelerators: H100\n- - cloud: gcp\n- accelerators: H100\n+ - infra: aws/us-west-2\n+ - infra: gcp/us-central1\n \n .. _yaml-spec-resources-ordered:\n \n@@ -683,10 +669,8 @@ Example:\n \n resources:\n ordered:\n- - cloud: aws\n- region: us-east-1\n- - cloud: aws\n- region: us-west-2\n+ - infra: aws/us-east-1\n+ - infra: aws/us-west-2\n \n .. _yaml-spec-resources-job-recovery:\n \ndiff --git a/docs/source/reservations/existing-machines.rst b/docs/source/reservations/existing-machines.rst\nindex 1a6c14db730..e2a44b45310 100644\n--- a/docs/source/reservations/existing-machines.rst\n+++ b/docs/source/reservations/existing-machines.rst\n@@ -106,7 +106,7 @@ Deploying SkyPilot\n ✔ Remote k3s is running.\n ✔ Nvidia GPU Operator installed successfully.\n Cluster deployment done. You can now run tasks on this cluster.\n- E.g., run a task with: sky launch --cloud kubernetes -- echo hello world.\n+ E.g., run a task with: sky launch --infra kubernetes -- echo hello world.\n 🎉 Remote cluster deployed successfully.\n \n \n@@ -120,7 +120,7 @@ Deploying SkyPilot\n \n .. code-block:: console\n \n- $ sky show-gpus --cloud k8s\n+ $ sky show-gpus --infra k8s\n Kubernetes GPUs\n GPU REQUESTABLE_QTY_PER_NODE UTILIZATION\n L4 1, 2, 4 12 of 12\n@@ -135,7 +135,7 @@ Deploying SkyPilot\n my-cluster-4 H100 8 of 8\n my-cluster-5 H100 8 of 8\n \n- $ sky launch --cloud k8s --gpus H100:1 -- nvidia-smi\n+ $ sky launch --infra k8s --gpus H100:1 -- nvidia-smi\n \n .. tip::\n \n@@ -194,27 +194,27 @@ You can then configure SkyPilot to use :ref:`multiple Kubernetes clusters <multi\n \n # ~/.sky/config.yaml\n allowed_contexts:\n- - cluster1\n- - cluster2\n+ - cluster1-ctx\n+ - cluster2-ctx\n \n \n .. code-block:: bash\n \n # Run on cluster1\n- sky launch --cloud k8s --region cluster1 -- echo \"Running on cluster 1\"\n+ sky launch --infra k8s/cluster1-ctx -- echo \"Running on cluster 1\"\n \n # Run on cluster2\n- sky launch --cloud k8s --region cluster2 -- echo \"Running on cluster 2\"\n+ sky launch --infra k8s/cluster2-ctx -- echo \"Running on cluster 2\"\n \n # Let SkyPilot automatically select the cluster with available resources\n- sky launch --cloud k8s -- echo \"Running on SkyPilot selected cluster\"\n+ sky launch --infra k8s -- echo \"Running on SkyPilot selected cluster\"\n \n You can view the available clusters and GPUs using:\n \n .. 
code-block:: bash\n \n # List GPUs on cluster1\n- sky show-gpus --cloud k8s --region cluster1\n+ sky show-gpus --infra k8s/cluster1-ctx\n \n # List GPUs on cluster2\n- sky show-gpus --cloud k8s --region cluster2\n+ sky show-gpus --infra k8s/cluster2-ctx\ndiff --git a/docs/source/reservations/reservations.rst b/docs/source/reservations/reservations.rst\nindex a846f80e2f8..9dbf37a022d 100644\n--- a/docs/source/reservations/reservations.rst\n+++ b/docs/source/reservations/reservations.rst\n@@ -77,7 +77,7 @@ For example, if you are launching a cluster with the following SkyPilot YAML:\n .. code-block:: yaml\n \n resources:\n- cloud: aws\n+ infra: aws\n accelerators: A100:8\n \n num_nodes: 2\n@@ -95,7 +95,7 @@ SkyPilot will utilize the capacity reservation/block as follows:\n \n .. hint::\n \n- If you have a capacity block with a starting time in the future, you can run ``sky jobs launch --region us-east-1 --gpus H100:8 task.yaml`` to let SkyPilot automatically wait until the starting time is reached. Namely, you don't have to wake up at 4:30am PDT to launch your job on a newly available capacity block.\n+ If you have a capacity block with a starting time in the future, you can run ``sky jobs launch --infra aws/us-east-1 --gpus H100:8 task.yaml`` to let SkyPilot automatically wait until the starting time is reached. Namely, you don't have to wake up at 4:30am PDT to launch your job on a newly available capacity block.\n \n \n GCP reservations\n@@ -163,7 +163,7 @@ In case you want to specify the DWS configuration for each job/cluster, you can\n provision_timeout: 900\n \n resources:\n- cloud: gcp\n+ infra: gcp\n accelerators: A100:8\n \n num_nodes: 4\n@@ -188,7 +188,7 @@ To launch a SkyPilot cluster or job on GKE with DWS, you can specify the DWS con\n provision_timeout: 900\n \n resources:\n- cloud: kubernetes\n+ infra: kubernetes\n accelerators: A100:8\n labels:\n kueue.x-k8s.io/queue-name: dws-local-queue\ndiff --git a/docs/source/running-jobs/many-jobs.rst b/docs/source/running-jobs/many-jobs.rst\nindex 60e122b5f91..15e66ccd32c 100644\n--- a/docs/source/running-jobs/many-jobs.rst\n+++ b/docs/source/running-jobs/many-jobs.rst\n@@ -301,10 +301,10 @@ Job statuses can be checked via ``sky jobs queue``:\n Fetching managed jobs...\n Managed jobs\n In progress tasks: 10 RUNNING\n- ID TASK NAME RESOURCES SUBMITTED TOT. DURATION JOB DURATION #RECOVERIES STATUS\n- 10 - train-job10 1x[V100:4] 5 mins ago 5m 5s 1m 12s 0 RUNNING\n- 9 - train-job9 1x[V100:4] 6 mins ago 6m 11s 2m 23s 0 RUNNING\n- 8 - train-job8 1x[V100:4] 7 mins ago 7m 15s 3m 31s 0 RUNNING\n+ ID TASK NAME REQUESTED SUBMITTED TOT. 
DURATION JOB DURATION #RECOVERIES STATUS\n+ 10 - train-job10 1x[V100:4] 5 mins ago 5m 5s 1m 12s 0 RUNNING\n+ 9 - train-job9 1x[V100:4] 6 mins ago 6m 11s 2m 23s 0 RUNNING\n+ 8 - train-job8 1x[V100:4] 7 mins ago 7m 15s 3m 31s 0 RUNNING\n ...\n \n \ndiff --git a/docs/source/serving/sky-serve.rst b/docs/source/serving/sky-serve.rst\nindex 15bf4232b52..046031f6420 100644\n--- a/docs/source/serving/sky-serve.rst\n+++ b/docs/source/serving/sky-serve.rst\n@@ -527,8 +527,7 @@ To achieve the above, you can specify custom configs in :code:`~/.sky/config.yam\n resources:\n # All configs below are optional.\n # Specify the location of the SkyServe controller.\n- cloud: gcp\n- region: us-central1\n+ infra: gcp/us-central1\n # Specify the maximum number of services that can be run concurrently.\n cpus: 2+ # number of vCPUs, max concurrent services = min(4 * cpus, memory in GiB)\n # Specify the disk_size in GB of the SkyServe controller.\ndiff --git a/docs/source/serving/spot-policy.rst b/docs/source/serving/spot-policy.rst\nindex 02af9a79f26..955ad5ecf47 100644\n--- a/docs/source/serving/spot-policy.rst\n+++ b/docs/source/serving/spot-policy.rst\n@@ -88,11 +88,11 @@ When the service is up, we can check the status of the service and the replicas\n http-server 1 1m 17s NO_REPLICA 0/4 54.227.229.217:30001\n \n Service Replicas\n- SERVICE_NAME ID VERSION ENDPOINT LAUNCHED RESOURCES STATUS REGION\n- http-server 1 1 - 1 min ago 1x GCP([Spot]vCPU=2) PROVISIONING us-east1\n- http-server 2 1 - 1 min ago 1x GCP([Spot]vCPU=2) PROVISIONING us-central1\n- http-server 3 1 - 1 mins ago 1x GCP(vCPU=2) PROVISIONING us-east1\n- http-server 4 1 - 1 min ago 1x GCP(vCPU=2) PROVISIONING us-central1\n+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS \n+ http-server 1 1 - 1 min ago GCP (us-east1) 1x[spot](cpus=2, mem=8, type=n2-standard-2, ...) PROVISIONING \n+ http-server 2 1 - 1 min ago GCP (us-central1) 1x[spot](cpus=2, mem=8, type=n2-standard-2, ...) PROVISIONING \n+ http-server 3 1 - 1 mins ago GCP (us-east1) 1x(cpus=2, mem=8, type=n2-standard-2, ...) PROVISIONING \n+ http-server 4 1 - 1 min ago GCP (us-central1) 1x(cpus=2, mem=8, type=n2-standard-2, ...) PROVISIONING \n \n When the required number of spot replicas are not available, SkyServe will provision on-demand replicas to meet the target number of replicas. For example, when the target number is 2 and no spot replicas are ready, SkyServe will provision 2 on-demand replicas to meet the target number of replicas.\n \n@@ -105,11 +105,11 @@ When the required number of spot replicas are not available, SkyServe will provi\n http-server 1 1m 17s READY 2/4 54.227.229.217:30001\n \n Service Replicas\n- SERVICE_NAME ID VERSION ENDPOINT LAUNCHED RESOURCES STATUS REGION\n- http-server 1 1 http://34.23.22.160:8081 3 min ago 1x GCP([Spot]vCPU=2) READY us-east1\n- http-server 2 1 http://34.68.226.193:8081 3 min ago 1x GCP([Spot]vCPU=2) READY us-central1\n- http-server 3 1 - 3 mins ago 1x GCP(vCPU=2) SHUTTING_DOWN us-east1\n- http-server 4 1 - 3 min ago 1x GCP(vCPU=2) SHUTTING_DOWN us-central1\n+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS \n+ http-server 1 1 http://34.23.22.160:8081 3 min ago GCP (us-east1) 1x[spot](cpus=2, mem=8, type=n2-standard-2, ...) READY \n+ http-server 2 1 http://34.68.226.193:8081 3 min ago GCP (us-central1) 1x[spot](cpus=2, mem=8, type=n2-standard-2, ...) READY \n+ http-server 3 1 - 3 mins ago GCP (us-east1) 1x(cpus=2, mem=8, type=n2-standard-2, ...) 
SHUTTING_DOWN \n+ http-server 4 1 - 3 min ago GCP (us-central1) 1x(cpus=2, mem=8, type=n2-standard-2, ...) SHUTTING_DOWN \n \n When the spot replicas are ready, SkyServe will automatically scale down on-demand replicas to maximize cost savings.\n \n@@ -122,9 +122,9 @@ When the spot replicas are ready, SkyServe will automatically scale down on-dema\n http-server 1 3m 59s READY 2/2 54.227.229.217:30001\n \n Service Replicas\n- SERVICE_NAME ID VERSION ENDPOINT LAUNCHED RESOURCES STATUS REGION\n- http-server 1 1 http://34.23.22.160:8081 4 mins ago 1x GCP([Spot]vCPU=2) READY us-east1\n- http-server 2 1 http://34.68.226.193:8081 4 mins ago 1x GCP([Spot]vCPU=2) READY us-central1\n+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS \n+ http-server 1 1 http://34.23.22.160:8081 4 mins ago GCP (us-east1) 1x[spot](cpus=2, mem=8, type=n2-standard-2, ...) READY \n+ http-server 2 1 http://34.68.226.193:8081 4 mins ago GCP (us-central1) 1x[spot](cpus=2, mem=8, type=n2-standard-2, ...) READY \n \n In the event of spot instance interruptions (e.g. replica 1), SkyServe will automatically fallback to on-demand replicas (e.g. launch one on-demand replica) to meet the required capacity of replicas. SkyServe will continue trying to provision one spot replica in the event where spot availability is back. Note that SkyServe will try different regions and clouds to maximize the chance of successfully provisioning spot instances.\n \n@@ -137,10 +137,10 @@ In the event of spot instance interruptions (e.g. replica 1), SkyServe will auto\n http-server 1 7m 2s READY 1/3 54.227.229.217:30001\n \n Service Replicas\n- SERVICE_NAME ID VERSION ENDPOINT LAUNCHED RESOURCES STATUS REGION\n- http-server 2 1 http://34.68.226.193:8081 7 mins ago 1x GCP([Spot]vCPU=2) READY us-central1\n- http-server 5 1 - 13 secs ago 1x GCP([Spot]vCPU=2) PROVISIONING us-central1\n- http-server 6 1 - 13 secs ago 1x GCP(vCPU=2) PROVISIONING us-central1\n+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS \n+ http-server 2 1 http://34.68.226.193:8081 7 mins ago GCP (us-central1) 1x[spot](cpus=2, mem=8, type=n2-standard-2, ...) READY \n+ http-server 5 1 - 13 secs ago GCP (us-central1) 1x[spot](cpus=2, mem=8, type=n2-standard-2, ...) PROVISIONING \n+ http-server 6 1 - 13 secs ago GCP (us-central1) 1x(cpus=2, mem=8, type=n2-standard-2, ...) PROVISIONING \n \n Eventually, when the spot availability is back, SkyServe will automatically scale down on-demand replicas.\n \n@@ -153,6 +153,6 @@ Eventually, when the spot availability is back, SkyServe will automatically scal\n http-server 1 10m 5s READY 2/3 54.227.229.217:30001\n \n Service Replicas\n- SERVICE_NAME ID VERSION ENDPOINT LAUNCHED RESOURCES STATUS REGION\n- http-server 2 1 http://34.68.226.193:8081 10 mins ago 1x GCP([Spot]vCPU=2) READY us-central1\n- http-server 5 1 http://34.121.49.94:8081 1 min ago 1x GCP([Spot]vCPU=2) READY us-central1\n+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS \n+ http-server 2 1 http://34.68.226.193:8081 10 mins ago GCP (us-central1) 1x[spot](cpus=2, mem=8, type=n2-standard-2, ...) READY \n+ http-server 5 1 http://34.121.49.94:8081 1 min ago GCP (us-central1) 1x[spot](cpus=2, mem=8, type=n2-standard-2, ...) 
READY \ndiff --git a/docs/source/serving/update.rst b/docs/source/serving/update.rst\nindex ca4f5ddb0ba..1490f143963 100644\n--- a/docs/source/serving/update.rst\n+++ b/docs/source/serving/update.rst\n@@ -57,9 +57,9 @@ We can use :code:`sky serve status http-server` to check the status of the servi\n http-server 1 1m 41s READY 2/2 44.206.240.249:30002\n \n Service Replicas\n- SERVICE_NAME ID VERSION IP LAUNCHED RESOURCES STATUS REGION\n- http-server 1 1 54.173.203.169 2 mins ago 1x AWS(vCPU=2) READY us-east-1\n- http-server 2 1 52.87.241.103 2 mins ago 1x AWS(vCPU=2) READY us-east-1\n+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS \n+ http-server 1 1 http://54.173.203.169:8081 2 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) READY \n+ http-server 2 1 http://52.87.241.103:8081 2 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) READY \n \n Service ``http-server`` has an initial version of 1.\n \n@@ -102,12 +102,12 @@ SkyServe will trigger launching three new replicas.\n http-server 2 6m 15s READY 2/5 44.206.240.249:30002\n \n Service Replicas\n- SERVICE_NAME ID VERSION IP LAUNCHED RESOURCES STATUS REGION\n- http-server 1 1 54.173.203.169 6 mins ago 1x AWS(vCPU=2) READY us-east-1\n- http-server 2 1 52.87.241.103 6 mins ago 1x AWS(vCPU=2) READY us-east-1\n- http-server 3 2 - 21 secs ago 1x AWS(vCPU=2) PROVISIONING us-east-1\n- http-server 4 2 - 21 secs ago 1x AWS(vCPU=2) PROVISIONING us-east-1\n- http-server 5 2 - 21 secs ago 1x AWS(vCPU=2) PROVISIONING us-east-1\n+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS \n+ http-server 1 1 http://54.173.203.169:8081 6 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) READY \n+ http-server 2 1 http://52.87.241.103:8081 6 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) READY \n+ http-server 3 2 - 21 secs ago AWS (us-east-1b) 1x(cpus=2, mem=8, type=m5.large, ...) PROVISIONING \n+ http-server 4 2 - 21 secs ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) PROVISIONING \n+ http-server 5 2 - 21 secs ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) PROVISIONING \n \n \n Whenever a new replica is ready, the traffic will be redirected to both old and new replicas.\n@@ -121,12 +121,12 @@ Whenever a new replica is ready, the traffic will be redirected to both old and\n http-server 1,2 10m 4s READY 3/5 44.206.240.249:30002\n \n Service Replicas\n- SERVICE_NAME ID VERSION IP LAUNCHED RESOURCES STATUS REGION\n- http-server 1 1 54.173.203.169 10 mins ago 1x AWS(vCPU=2) READY us-east-1\n- http-server 2 1 52.87.241.103 10 mins ago 1x AWS(vCPU=2) READY us-east-1\n- http-server 3 2 3.93.241.163 1 min ago 1x AWS(vCPU=2) READY us-east-1\n- http-server 4 2 - 1 min ago 1x AWS(vCPU=2) PROVISIONING us-east-1\n- http-server 5 2 - 1 min ago 1x AWS(vCPU=2) PROVISIONING us-east-1\n+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS \n+ http-server 1 1 http://54.173.203.169:8081 10 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) READY \n+ http-server 2 1 http://52.87.241.103:8081 10 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) READY \n+ http-server 3 2 http://3.93.241.163:8081 1 min ago AWS (us-east-1b) 1x(cpus=2, mem=8, type=m5.large, ...) READY \n+ http-server 4 2 - 1 min ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) PROVISIONING \n+ http-server 5 2 - 1 min ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) 
PROVISIONING \n \n \n Once the total number of both old and new replicas exceeds the requested number, old replicas will be scaled down.\n@@ -140,12 +140,13 @@ Once the total number of both old and new replicas exceeds the requested number,\n http-server 1,2 10m 4s READY 3/5 44.206.240.249:30002\n \n Service Replicas\n- SERVICE_NAME ID VERSION IP LAUNCHED RESOURCES STATUS REGION\n- http-server 1 1 54.173.203.169 10 mins ago 1x AWS(vCPU=2) SHUTTING_DOWN us-east-1\n- http-server 2 1 52.87.241.103 10 mins ago 1x AWS(vCPU=2) READY us-east-1\n- http-server 3 2 3.93.241.163 1 min ago 1x AWS(vCPU=2) READY us-east-1\n- http-server 4 2 18.206.226.82 1 min ago 1x AWS(vCPU=2) READY us-east-1\n- http-server 5 2 - 1 min ago 1x AWS(vCPU=2) PROVISIONING us-east-1\n+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS \n+ http-server 1 1 http://54.173.203.169:8081 10 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) READY \n+ http-server 2 1 http://52.87.241.103:8081 10 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) READY \n+ http-server 3 2 http://3.93.241.163:8081 1 min ago AWS (us-east-1b) 1x(cpus=2, mem=8, type=m5.large, ...) READY \n+ http-server 4 2 - 1 min ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) PROVISIONING \n+ http-server 5 2 - 1 min ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) PROVISIONING \n+\n \n Eventually, we will only have new replicas ready to serve user requests.\n \n@@ -158,10 +159,10 @@ Eventually, we will only have new replicas ready to serve user requests.\n http-server 2 11m 42s READY 3/3 44.206.240.249:30002\n \n Service Replicas\n- SERVICE_NAME ID VERSION IP LAUNCHED RESOURCES STATUS REGION\n- http-server 3 2 3.93.241.163 3 mins ago 1x AWS(vCPU=2) READY us-east-1\n- http-server 4 2 18.206.226.82 3 mins ago 1x AWS(vCPU=2) READY us-east-1\n- http-server 5 2 3.26.232.31 1 min ago 1x AWS(vCPU=2) READY us-east-1\n+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS \n+ http-server 3 2 http://3.93.241.163:8081 3 mins ago AWS (us-east-1b) 1x(cpus=2, mem=8, type=m5.large, ...) READY \n+ http-server 4 2 http://18.206.226.82:8081 3 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) READY \n+ http-server 5 2 http://3.26.232.31:8081 1 min ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) READY\n \n \n \n@@ -210,12 +211,12 @@ SkyServe will trigger launching three new replicas.\n http-server 2 6m 15s READY 2/5 44.206.240.249:30002\n \n Service Replicas\n- SERVICE_NAME ID VERSION IP LAUNCHED RESOURCES STATUS REGION\n- http-server 1 1 54.173.203.169 6 mins ago 1x AWS(vCPU=2) READY us-east-1\n- http-server 2 1 52.87.241.103 6 mins ago 1x AWS(vCPU=2) READY us-east-1\n- http-server 3 2 - 21 secs ago 1x AWS(vCPU=2) PROVISIONING us-east-1\n- http-server 4 2 - 21 secs ago 1x AWS(vCPU=2) PROVISIONING us-east-1\n- http-server 5 2 - 21 secs ago 1x AWS(vCPU=2) PROVISIONING us-east-1\n+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS \n+ http-server 1 1 http://54.173.203.169:8081 6 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) READY \n+ http-server 2 1 http://52.87.241.103:8081 6 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) READY \n+ http-server 3 2 - 21 secs ago AWS (us-east-1b) 1x(cpus=2, mem=8, type=m5.large, ...) PROVISIONING \n+ http-server 4 2 - 21 secs ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) PROVISIONING \n+ http-server 5 2 - 21 secs ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) 
PROVISIONING \n \n \n When a new replica is ready, the traffic will still be redirected to old replicas.\n@@ -229,12 +230,12 @@ When a new replica is ready, the traffic will still be redirected to old replica\n http-server 1 10m 4s READY 3/5 44.206.240.249:30002\n \n Service Replicas\n- SERVICE_NAME ID VERSION IP LAUNCHED RESOURCES STATUS REGION\n- http-server 1 1 54.173.203.169 10 mins ago 1x AWS(vCPU=2) READY us-east-1\n- http-server 2 1 52.87.241.103 10 mins ago 1x AWS(vCPU=2) READY us-east-1\n- http-server 3 2 3.93.241.163 1 min ago 1x AWS(vCPU=4) READY us-east-1\n- http-server 4 2 - 1 min ago 1x AWS(vCPU=4) PROVISIONING us-east-1\n- http-server 5 2 - 1 min ago 1x AWS(vCPU=4) PROVISIONING us-east-1\n+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS \n+ http-server 1 1 http://54.173.203.169:8081 10 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) SHUTTING_DOWN \n+ http-server 2 1 http://52.87.241.103:8081 10 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) READY \n+ http-server 3 2 http://3.93.241.163:8081 1 min ago AWS (us-east-1b) 1x(cpus=4, mem=16, type=m5.xlarge, ...) READY \n+ http-server 4 2 http://18.206.226.82:8081 1 min ago AWS (us-east-1a) 1x(cpus=4, mem=16, type=m5.xlarge, ...) READY \n+ http-server 5 2 - 1 min ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) PROVISIONING \n \n \n Once the total number of new replicas satisfies the requirements, traffics will be redirected to new replicas and old replicas will be scaled down.\n@@ -248,12 +249,12 @@ Once the total number of new replicas satisfies the requirements, traffics will\n http-server 2 10m 4s READY 3/5 44.206.240.249:30002\n \n Service Replicas\n- SERVICE_NAME ID VERSION IP LAUNCHED RESOURCES STATUS REGION\n- http-server 1 1 54.173.203.169 10 mins ago 1x AWS(vCPU=2) SHUTTING_DOWN us-east-1\n- http-server 2 1 52.87.241.103 10 mins ago 1x AWS(vCPU=2) SHUTTING_DOWN us-east-1\n- http-server 3 2 3.93.241.163 1 min ago 1x AWS(vCPU=4) READY us-east-1\n- http-server 4 2 18.206.226.82 1 min ago 1x AWS(vCPU=4) READY us-east-1\n- http-server 5 2 3.26.232.31 1 min ago 1x AWS(vCPU=4) READY us-east-1\n+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS \n+ http-server 1 1 http://54.173.203.169:8081 10 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) SHUTTING_DOWN \n+ http-server 2 1 http://52.87.241.103:8081 10 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, type=m5.large, ...) SHUTTING_DOWN \n+ http-server 3 2 http://3.93.241.163:8081 1 min ago AWS (us-east-1b) 1x(cpus=4, mem=16, type=m5.xlarge, ...) READY \n+ http-server 4 2 http://18.206.226.82:8081 1 min ago AWS (us-east-1a) 1x(cpus=4, mem=16, type=m5.xlarge, ...) READY \n+ http-server 5 2 http://3.26.232.31:8081 1 min ago AWS (us-east-1a) 1x(cpus=4, mem=16, type=m5.xlarge, ...) 
READY\n \n Eventually, same as the rolling update, we will only have new replicas ready to serve user requests.\n \n@@ -266,7 +267,7 @@ Eventually, same as the rolling update, we will only have new replicas ready to\n http-server 2 11m 42s READY 3/3 44.206.240.249:30002\n \n Service Replicas\n- SERVICE_NAME ID VERSION IP LAUNCHED RESOURCES STATUS REGION\n- http-server 3 2 3.93.241.163 3 mins ago 1x AWS(vCPU=4) READY us-east-1\n- http-server 4 2 18.206.226.82 3 mins ago 1x AWS(vCPU=4) READY us-east-1\n- http-server 5 2 3.26.232.31 1 min ago 1x AWS(vCPU=4) READY us-east-1\n+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS \n+ http-server 3 2 http://3.93.241.163:8081 3 mins ago AWS (us-east-1b) 1x(cpus=4, mem=16, type=m5.xlarge, ...) READY \n+ http-server 4 2 http://18.206.226.82:8081 3 mins ago AWS (us-east-1a) 1x(cpus=4, mem=16, type=m5.xlarge, ...) READY \n+ http-server 5 2 http://3.26.232.31:8081 1 min ago AWS (us-east-1a) 1x(cpus=4, mem=16, type=m5.xlarge, ...) READY\ndiff --git a/examples/admin_policy/task.yaml b/examples/admin_policy/task.yaml\nindex 065b4cbfb11..d3d4789c7ee 100644\n--- a/examples/admin_policy/task.yaml\n+++ b/examples/admin_policy/task.yaml\n@@ -1,5 +1,5 @@\n resources:\n- cloud: aws\n+ infra: aws\n cpus: 2\n labels:\n other_labels: test\ndiff --git a/examples/autogluon.yaml b/examples/autogluon.yaml\nindex 00e5804f809..66093004b5e 100644\n--- a/examples/autogluon.yaml\n+++ b/examples/autogluon.yaml\n@@ -1,5 +1,5 @@\n resources:\n- cloud: gcp\n+ infra: gcp\n \n setup: |\n git clone https://github.com/autogluon/autogluon.git\ndiff --git a/examples/aws_efa/nccl_efa.yaml b/examples/aws_efa/nccl_efa.yaml\nindex de6212a1c52..c73f1c01beb 100644\n--- a/examples/aws_efa/nccl_efa.yaml\n+++ b/examples/aws_efa/nccl_efa.yaml\n@@ -1,7 +1,7 @@\n name: nccl-test-efa\n \n resources:\n- cloud: kubernetes\n+ infra: kubernetes\n accelerators: A100:8\n cpus: 90+\n image_id: docker:public.ecr.aws/hpc-cloud/nccl-tests:latest\ndiff --git a/examples/azure_start_stop.yaml b/examples/azure_start_stop.yaml\nindex f6337267c1c..33dcbdfd187 100644\n--- a/examples/azure_start_stop.yaml\n+++ b/examples/azure_start_stop.yaml\n@@ -2,7 +2,7 @@\n name: azure-start-stop\n \n resources:\n- cloud: azure\n+ infra: azure\n \n # Optimizing for smoke tests\n # 2 nodes: smoke tests ~37 mins\ndiff --git a/examples/containerized_app.py b/examples/containerized_app.py\nindex be58de3152b..2563aaaa9ba 100644\n--- a/examples/containerized_app.py\n+++ b/examples/containerized_app.py\n@@ -22,6 +22,6 @@\n \n with sky.Dag() as dag:\n t = sky.Task(run=run_command, setup=setup_cmd)\n- t.set_resources(sky.Resources(sky.AWS(), accelerators='V100'))\n+ t.set_resources(infra='aws', accelerators='V100')\n \n sky.launch(dag)\ndiff --git a/examples/custom_image.yaml b/examples/custom_image.yaml\nindex 535b91bfa4e..602985aa955 100644\n--- a/examples/custom_image.yaml\n+++ b/examples/custom_image.yaml\n@@ -1,6 +1,5 @@\n resources:\n- cloud: aws\n- region: us-east-2\n+ infra: aws/us-east-2\n # Nvidia image from\n # https://aws.amazon.com/marketplace/pp/prodview-rf7na2b2ttvdg\n image_id: ami-062ddd90fb6f8267a\ndiff --git a/examples/disk_size.yaml b/examples/disk_size.yaml\nindex 7384533b17c..eb97a978bc3 100644\n--- a/examples/disk_size.yaml\n+++ b/examples/disk_size.yaml\n@@ -9,7 +9,7 @@\n name: minimal\n \n resources:\n- cloud: azure\n+ infra: azure\n disk_size: 512\n \n setup: |\ndiff --git a/examples/dvc/dvc_pipeline.yaml b/examples/dvc/dvc_pipeline.yaml\nindex e3ff3bce8bb..1a377e55e7a 100644\n--- 
a/examples/dvc/dvc_pipeline.yaml\n+++ b/examples/dvc/dvc_pipeline.yaml\n@@ -2,8 +2,8 @@\n name: dvc-pipeline\n resources:\n accelerators: T4:1\n- cloud: aws\n- region: us-east-2\n+ infra: aws/us-east-2\n+\n workdir: .\n file_mounts: \n ~/.ssh/id_rsa: ~/.ssh/id_rsa\n@@ -18,4 +18,4 @@ run: |\n # run DVC pipeline as an experiment\n dvc exp run --pull --allow-missing\n # push experiment results to DVC remote\n- dvc exp push origin \n\\ No newline at end of file\n+ dvc exp push origin \ndiff --git a/examples/example_app.py b/examples/example_app.py\nindex 82162d11ac3..c86c123c13c 100644\n--- a/examples/example_app.py\n+++ b/examples/example_app.py\n@@ -40,10 +40,12 @@ def make_application():\n train_op.set_outputs('CLOUD://my-model', estimated_size_gigabytes=0.1)\n \n train_op.set_resources({\n- sky.Resources(sky.AWS(), 'p3.2xlarge'), # 1 V100, EC2.\n- sky.Resources(sky.AWS(), 'p3.8xlarge'), # 4 V100s, EC2.\n+ sky.Resources(infra='aws',\n+ instance_type='p3.2xlarge'), # 1 V100, EC2.\n+ sky.Resources(infra='aws',\n+ instance_type='p3.8xlarge'), # 4 V100s, EC2.\n # Tuples mean all resources are required.\n- sky.Resources(sky.GCP(), accelerators='tpu-v3-8'),\n+ sky.Resources(infra='gcp', accelerators='tpu-v3-8'),\n })\n \n train_op.set_time_estimator(time_estimators.resnet50_estimate_runtime)\n@@ -58,10 +60,14 @@ def make_application():\n estimated_size_gigabytes=0.1)\n \n infer_op.set_resources({\n- sky.Resources(sky.AWS(), 'inf1.2xlarge'),\n- sky.Resources(sky.AWS(), 'p3.2xlarge'),\n- sky.Resources(sky.GCP(), 'n1-standard-4', accelerators='T4'),\n- sky.Resources(sky.GCP(), 'n1-standard-8', accelerators='T4'),\n+ sky.Resources(infra='aws', instance_type='inf1.2xlarge'),\n+ sky.Resources(infra='aws', instance_type='p3.2xlarge'),\n+ sky.Resources(infra='gcp',\n+ instance_type='n1-standard-4',\n+ accelerators='T4'),\n+ sky.Resources(infra='gcp',\n+ instance_type='n1-standard-8',\n+ accelerators='T4'),\n })\n \n infer_op.set_time_estimator(\ndiff --git a/examples/gcp_gpu_direct_tcpx/gpu_direct_tcpx.yaml b/examples/gcp_gpu_direct_tcpx/gpu_direct_tcpx.yaml\nindex 23a205cf810..f74c5af7017 100644\n--- a/examples/gcp_gpu_direct_tcpx/gpu_direct_tcpx.yaml\n+++ b/examples/gcp_gpu_direct_tcpx/gpu_direct_tcpx.yaml\n@@ -1,7 +1,7 @@\n name: nccl-gpu-direct-tcpx\n \n resources:\n- cloud: gcp\n+ infra: gcp\n instance_type: a3-highgpu-8g\n image_id: docker:us-docker.pkg.dev/gce-ai-infra/gpudirect-tcpx/nccl-plugin-gpudirecttcpx\n \ndiff --git a/examples/gcp_start_stop.yaml b/examples/gcp_start_stop.yaml\nindex 507e75eb0ac..cdd833addbf 100644\n--- a/examples/gcp_start_stop.yaml\n+++ b/examples/gcp_start_stop.yaml\n@@ -2,7 +2,7 @@\n name: gcp-start-stop\n \n resources:\n- cloud: gcp\n+ infra: gcp\n \n num_nodes: 2\n \ndiff --git a/examples/horovod_distributed_tf_app.py b/examples/horovod_distributed_tf_app.py\nindex 273f653a710..9dd23485b3a 100644\n--- a/examples/horovod_distributed_tf_app.py\n+++ b/examples/horovod_distributed_tf_app.py\n@@ -55,7 +55,7 @@ def run_fn(ip_list: List[IPAddr]) -> Dict[IPAddr, str]:\n estimated_size_gigabytes=70)\n train.set_outputs('resnet-model-dir', estimated_size_gigabytes=0.1)\n train.set_resources({\n- sky.Resources(sky.AWS(), 'p3.2xlarge'),\n+ sky.Resources(infra='aws', instance_type='p3.2xlarge'),\n })\n \n dag = sky.Optimizer.optimize(dag)\ndiff --git a/examples/huggingface_glue_imdb_grid_search_app.py b/examples/huggingface_glue_imdb_grid_search_app.py\nindex 89965f62fa2..4fc7b04dd9e 100644\n--- a/examples/huggingface_glue_imdb_grid_search_app.py\n+++ 
b/examples/huggingface_glue_imdb_grid_search_app.py\n@@ -1,7 +1,7 @@\n \"\"\"Grid search version of huggingface_glue_imdb_app.py.\"\"\"\n import sky\n \n-resources_to_launch = sky.Resources(sky.AWS(), accelerators={'V100': 4})\n+resources_to_launch = sky.Resources(infra='aws', accelerators={'V100': 4})\n with sky.Dag() as dag:\n # Setup command, run once (pip, download dataset).\n common_setup = \"\"\"\\\ndiff --git a/examples/image_with_tag.yaml b/examples/image_with_tag.yaml\nindex 480cec69e99..7b18406bc38 100644\n--- a/examples/image_with_tag.yaml\n+++ b/examples/image_with_tag.yaml\n@@ -1,5 +1,5 @@\n resources:\n- cloud: aws\n+ infra: aws\n image_id: skypilot:gpu-ubuntu-1804\n \n \ndiff --git a/examples/k8s_cloud_deploy/README.md b/examples/k8s_cloud_deploy/README.md\nindex efe2586a6ee..15fbe6182c0 100644\n--- a/examples/k8s_cloud_deploy/README.md\n+++ b/examples/k8s_cloud_deploy/README.md\n@@ -21,7 +21,7 @@ pip install \"skypilot-nightly[lambda,kubernetes]\"\n 1. Edit `cloud_k8s.yaml` to set the desired number of workers and GPUs per node. If using GCP, AWS or Azure, uncomment the ports line to allow inbound connections to the Kubernetes API server. \n ```yaml\n resources:\n- cloud: lambda\n+ infra: lambda\n accelerators: A10:1\n # ports: 6443\n \ndiff --git a/examples/k8s_cloud_deploy/cloud_k8s.yaml b/examples/k8s_cloud_deploy/cloud_k8s.yaml\nindex 2db46fb502b..dd8aeffe2f9 100644\n--- a/examples/k8s_cloud_deploy/cloud_k8s.yaml\n+++ b/examples/k8s_cloud_deploy/cloud_k8s.yaml\n@@ -1,5 +1,5 @@\n resources:\n- cloud: lambda\n+ infra: lambda\n accelerators: A10:1\n # Uncomment the following line to expose ports on a different cloud\n # ports: 6443\ndiff --git a/examples/managed_job_with_storage.yaml b/examples/managed_job_with_storage.yaml\nindex 77b69485269..41d3648e074 100644\n--- a/examples/managed_job_with_storage.yaml\n+++ b/examples/managed_job_with_storage.yaml\n@@ -7,7 +7,7 @@\n # sky down spot-storage\n \n resources:\n- cloud: aws\n+ infra: aws\n use_spot: true\n job_recovery: failover\n \ndiff --git a/examples/many_gpu_vms.yaml b/examples/many_gpu_vms.yaml\nindex 453392cdeb1..6a2789242a9 100644\n--- a/examples/many_gpu_vms.yaml\n+++ b/examples/many_gpu_vms.yaml\n@@ -1,7 +1,7 @@\n name: many_gpu_vms\n \n resources:\n- cloud: aws\n+ infra: aws\n accelerators: V100:8\n # use_spot: true\n \ndiff --git a/examples/minimal.yaml b/examples/minimal.yaml\nindex e76182c114a..89357a86112 100644\n--- a/examples/minimal.yaml\n+++ b/examples/minimal.yaml\n@@ -9,7 +9,7 @@\n name: minimal\n \n resources:\n- cloud: aws\n+ infra: aws\n \n setup: |\n echo \"running setup\"\ndiff --git a/examples/mpirun.yaml b/examples/mpirun.yaml\nindex 4ec7ce0107c..d002b63e985 100644\n--- a/examples/mpirun.yaml\n+++ b/examples/mpirun.yaml\n@@ -1,7 +1,7 @@\n workdir: .\n \n resources:\n- cloud: aws\n+ infra: aws\n \n num_nodes: 2 # Total number of nodes (1 head + 1 worker)\n \ndiff --git a/examples/multi_echo.py b/examples/multi_echo.py\nindex 2512fc3a437..1bab8bce523 100644\n--- a/examples/multi_echo.py\n+++ b/examples/multi_echo.py\n@@ -9,7 +9,7 @@\n \n \n def run(cluster: Optional[str] = None,\n- cloud: Optional[str] = None,\n+ infra: Optional[str] = None,\n use_spot: bool = True):\n if cluster is None:\n # (username, last 4 chars of hash of hostname): for uniquefying users on\n@@ -19,14 +19,13 @@ def run(cluster: Optional[str] = None,\n _user_and_host = f'{getpass.getuser()}-{hostname_hash}'\n cluster = f'test-multi-echo-{_user_and_host}'\n \n- if cloud is None:\n- cloud = 'gcp'\n- cloud = 
sky.CLOUD_REGISTRY.from_str(cloud)\n+ if infra is None:\n+ infra = 'gcp'\n \n # Create the cluster.\n with sky.Dag() as dag:\n cluster_resources = sky.Resources(\n- cloud,\n+ infra=infra,\n # We need to set CPUs to 5+ so that the total number of RUNNING jobs\n # is not limited by the number of CPU cores (5 x 2 x 2 = 20).\n cpus='5+',\n@@ -56,13 +55,13 @@ def _exec(i):\n \n if __name__ == '__main__':\n cluster = None\n- cloud = None\n+ infra = None\n use_spot = True\n if len(sys.argv) > 1:\n # For smoke test passing in a cluster name.\n cluster = sys.argv[1]\n if len(sys.argv) > 2:\n- cloud = sys.argv[2]\n+ infra = sys.argv[2]\n if len(sys.argv) > 3:\n use_spot = sys.argv[3] == '1'\n- run(cluster, cloud, use_spot)\n+ run(cluster, infra, use_spot)\ndiff --git a/examples/multi_hostname.py b/examples/multi_hostname.py\nindex 2c03a46fa19..e44a60e6d29 100644\n--- a/examples/multi_hostname.py\n+++ b/examples/multi_hostname.py\n@@ -6,6 +6,6 @@\n # My hostname: <host1>\n # My hostname: <host2>\n sky.Task(run='echo My hostname: $(hostname)',\n- num_nodes=2).set_resources(sky.Resources(sky.AWS()))\n+ num_nodes=2).set_resources(sky.Resources(infra='aws'))\n \n sky.launch(dag)\ndiff --git a/examples/multi_resources.yaml b/examples/multi_resources.yaml\nindex 56656b7cd1b..11f7c3eb23c 100644\n--- a/examples/multi_resources.yaml\n+++ b/examples/multi_resources.yaml\n@@ -2,16 +2,16 @@ name: multi-resources\n \n resources:\n ordered:\n- - cloud: AWS\n+ - infra: aws\n accelerators: A10g\n- - cloud: GCP\n+ - infra: gcp\n accelerators: L4\n \n # resources:\n # any_of:\n- # - cloud: AWS\n+ # - infra: aws\n # accelerators: A10g\n- # - cloud: GCP\n+ # - infra: gcp\n # accelerators: L4\n \n run: |\ndiff --git a/examples/oci/dataset-mount.yaml b/examples/oci/dataset-mount.yaml\nindex 1f62360a5a3..96a34c72af5 100644\n--- a/examples/oci/dataset-mount.yaml\n+++ b/examples/oci/dataset-mount.yaml\n@@ -1,8 +1,7 @@\n name: cpu-task1\n \n resources:\n- cloud: oci\n- region: us-sanjose-1\n+ infra: oci/us-sanjose-1\n cpus: 2\n disk_size: 256\n disk_tier: medium\ndiff --git a/examples/oci/dataset-upload-and-mount.yaml b/examples/oci/dataset-upload-and-mount.yaml\nindex 13ddc4d2b35..b28e754c126 100644\n--- a/examples/oci/dataset-upload-and-mount.yaml\n+++ b/examples/oci/dataset-upload-and-mount.yaml\n@@ -1,8 +1,7 @@\n name: cpu-task1\n \n resources:\n- cloud: oci\n- region: us-sanjose-1\n+ infra: oci/us-sanjose-1\n cpus: 2\n disk_size: 256\n disk_tier: medium\ndiff --git a/examples/oci/gpu-oraclelinux9.yaml b/examples/oci/gpu-oraclelinux9.yaml\nindex cc7b05ea0fc..4d24d6c9526 100644\n--- a/examples/oci/gpu-oraclelinux9.yaml\n+++ b/examples/oci/gpu-oraclelinux9.yaml\n@@ -2,7 +2,7 @@ name: gpu-task\n \n resources:\n # Optional; if left out, automatically pick the cheapest cloud.\n- cloud: oci\n+ infra: oci\n \n accelerators: A10:1\n \ndiff --git a/examples/oci/gpu-ubuntu-2204.yaml b/examples/oci/gpu-ubuntu-2204.yaml\nindex e0012a31a1a..b9fb1b35986 100644\n--- a/examples/oci/gpu-ubuntu-2204.yaml\n+++ b/examples/oci/gpu-ubuntu-2204.yaml\n@@ -2,7 +2,7 @@ name: gpu-task\n \n resources:\n # Optional; if left out, automatically pick the cheapest cloud.\n- cloud: oci\n+ infra: oci\n \n accelerators: A10:1\n \ndiff --git a/examples/oci/oci-mounts.yaml b/examples/oci/oci-mounts.yaml\nindex 6fd2aaf16eb..0d675fb3fe2 100644\n--- a/examples/oci/oci-mounts.yaml\n+++ b/examples/oci/oci-mounts.yaml\n@@ -1,5 +1,5 @@\n resources:\n- cloud: oci\n+ infra: oci\n \n file_mounts:\n ~/tmpfile: ~/tmpfile\ndiff --git 
a/examples/oci/oci_cpu-sky-preemptible.yaml b/examples/oci/oci_cpu-sky-preemptible.yaml\nindex fb1c6e5f838..0d504a30ec4 100644\n--- a/examples/oci/oci_cpu-sky-preemptible.yaml\n+++ b/examples/oci/oci_cpu-sky-preemptible.yaml\n@@ -2,12 +2,8 @@ name: cpu-task2\n \n resources:\n # Optional; if left out, automatically pick the cheapest cloud.\n- cloud: oci\n+ infra: oci/ap-seoul-1\n \n- region: ap-seoul-1\n- \n- # zone: AP-SEOUL-1-AD-1\n- \n instance_type: VM.Standard.E4.Flex$_2_16\n \n cpus: 2\ndiff --git a/examples/oci/oci_cpu-sky.yaml b/examples/oci/oci_cpu-sky.yaml\nindex 41367a0700b..5a14f130ad8 100644\n--- a/examples/oci/oci_cpu-sky.yaml\n+++ b/examples/oci/oci_cpu-sky.yaml\n@@ -2,12 +2,8 @@ name: cpu-task1\n \n resources:\n # Optional; if left out, automatically pick the cheapest cloud.\n- cloud: oci\n+ infra: oci/ap-seoul-1\n \n- region: ap-seoul-1\n- \n- # zone: AP-SEOUL-1-AD-1\n- \n instance_type: VM.Standard.E4.Flex$_2_16\n \n cpus: 2\ndiff --git a/examples/oci/oci_gpu-sky.yaml b/examples/oci/oci_gpu-sky.yaml\nindex a3592145c89..ca05beb26a9 100644\n--- a/examples/oci/oci_gpu-sky.yaml\n+++ b/examples/oci/oci_gpu-sky.yaml\n@@ -2,14 +2,10 @@ name: gpu-task1\n \n resources:\n # Optional; if left out, automatically pick the cheapest cloud.\n- cloud: oci\n+ infra: oci/ap-seoul-1\n \n accelerators: A10:1 # 1x NVIDIA A10 GPU\n \n- region: ap-seoul-1\n- \n- # zone: AP-SEOUL-1-AD-1\n- \n # instance_type: VM.GPU.A10.1\n \n # image_id: skypilot:gpu-ubuntu-2004\ndiff --git a/examples/oci/serve-http-cpu.yaml b/examples/oci/serve-http-cpu.yaml\nindex 68e3d18c9e5..011b58ff10f 100644\n--- a/examples/oci/serve-http-cpu.yaml\n+++ b/examples/oci/serve-http-cpu.yaml\n@@ -3,8 +3,7 @@ service:\n replicas: 2\n \n resources:\n- cloud: oci\n- region: us-sanjose-1\n+ infra: oci/us-sanjose-1\n ports: 8080\n cpus: 2+\n \ndiff --git a/examples/oci/serve-qwen-7b.yaml b/examples/oci/serve-qwen-7b.yaml\nindex 004e912b088..d0a5d1f014d 100644\n--- a/examples/oci/serve-qwen-7b.yaml\n+++ b/examples/oci/serve-qwen-7b.yaml\n@@ -5,8 +5,7 @@ service:\n \n # Fields below describe each replica.\n resources:\n- cloud: oci\n- region: us-sanjose-1\n+ infra: oci/us-sanjose-1\n ports: 8080\n accelerators: {A10:1}\n \ndiff --git a/examples/per_region_images.yaml b/examples/per_region_images.yaml\nindex 99bc6e4f0c5..4e0e470969f 100644\n--- a/examples/per_region_images.yaml\n+++ b/examples/per_region_images.yaml\n@@ -1,5 +1,5 @@\n resources:\n- cloud: aws\n+ infra: aws\n instance_type: g4dn.xlarge\n image_id:\n us-west-2: skypilot:gpu-ubuntu-1804\ndiff --git a/examples/perf/storage_rawperf.yaml b/examples/perf/storage_rawperf.yaml\nindex 982a1e7c43a..cc6263c712d 100644\n--- a/examples/perf/storage_rawperf.yaml\n+++ b/examples/perf/storage_rawperf.yaml\n@@ -17,7 +17,7 @@\n name: storage-demo\n \n resources:\n- cloud: aws\n+ infra: aws\n instance_type: m5.8xlarge\n \n file_mounts:\ndiff --git a/examples/playground/min_fail.yaml b/examples/playground/min_fail.yaml\nindex 215f3268855..bd64ba3b0fc 100644\n--- a/examples/playground/min_fail.yaml\n+++ b/examples/playground/min_fail.yaml\n@@ -1,5 +1,5 @@\n resources:\n- cloud: aws\n+ infra: aws\n \n setup: |\n echo \"running setup\"\ndiff --git a/examples/playground/min_progress_bar.yaml b/examples/playground/min_progress_bar.yaml\nindex 06ba3b027e0..43499f0bf4c 100644\n--- a/examples/playground/min_progress_bar.yaml\n+++ b/examples/playground/min_progress_bar.yaml\n@@ -1,5 +1,5 @@\n resources:\n- cloud: aws\n+ infra: aws\n \n setup: |\n echo \"running setup\"\ndiff --git 
a/examples/playground/symlink_playground.yaml b/examples/playground/symlink_playground.yaml\nindex 398373af85c..d53753f3efa 100644\n--- a/examples/playground/symlink_playground.yaml\n+++ b/examples/playground/symlink_playground.yaml\n@@ -4,7 +4,7 @@\n name: symlink-playground\n \n resources:\n- cloud: aws\n+ infra: aws\n instance_type: m5.2xlarge\n \n # Symlink: ln -s [data_path] ~/Downloads/temp1\ndiff --git a/examples/ray_tune_app.py b/examples/ray_tune_app.py\nindex 1993eb7e7d4..b08756ede0b 100644\n--- a/examples/ray_tune_app.py\n+++ b/examples/ray_tune_app.py\n@@ -30,7 +30,7 @@ def run_fn(node_rank: int, ip_list: List[str]) -> Optional[str]:\n )\n \n train.set_resources({\n- sky.Resources(sky.AWS(), 'p3.2xlarge'),\n+ sky.Resources(infra='aws', instance_type='p3.2xlarge'),\n })\n \n sky.launch(dag)\ndiff --git a/examples/ray_tune_app.yaml b/examples/ray_tune_app.yaml\nindex 96146b1ee2d..9c7bae9b099 100644\n--- a/examples/ray_tune_app.yaml\n+++ b/examples/ray_tune_app.yaml\n@@ -1,5 +1,5 @@\n resources:\n- cloud: aws\n+ infra: aws\n accelerators: V100\n \n num_nodes: 2\ndiff --git a/examples/resnet_app.py b/examples/resnet_app.py\nindex 17ebf9fa5d6..c7f43744ca3 100644\n--- a/examples/resnet_app.py\n+++ b/examples/resnet_app.py\n@@ -68,10 +68,10 @@\n task.set_outputs('resnet-model-dir', estimated_size_gigabytes=0.1)\n task.set_resources({\n ##### Fully specified\n- # sky.Resources(sky.AWS(), 'p3.2xlarge'),\n- # sky.Resources(sky.GCP(), 'n1-standard-16'),\n+ # sky.Resources(infra='aws', instance_type='p3.2xlarge'),\n+ # sky.Resources(infra='gcp', instance_type='n1-standard-16'),\n # sky.Resources(\n- # sky.GCP(),\n+ # infra='gcp',\n # 'n1-standard-8',\n # # Options: 'V100', {'V100': <num>}.\n # 'V100',\n@@ -79,16 +79,16 @@\n ##### Partially specified\n # sky.Resources(accelerators='T4'),\n # sky.Resources(accelerators={'T4': 8}, use_spot=True),\n- # sky.Resources(sky.AWS(), accelerators={'T4': 8}, use_spot=True),\n- # sky.Resources(sky.AWS(), accelerators='K80'),\n- # sky.Resources(sky.AWS(), accelerators='K80', use_spot=True),\n+ # sky.Resources(infra='aws', accelerators={'T4': 8}, use_spot=True),\n+ # sky.Resources(infra='aws', accelerators='K80'),\n+ # sky.Resources(infra='aws', accelerators='K80', use_spot=True),\n # sky.Resources(accelerators='tpu-v3-8'),\n # sky.Resources(accelerators='V100', use_spot=True),\n # sky.Resources(accelerators={'T4': 4}),\n- sky.Resources(sky.AWS(), accelerators='V100'),\n- # sky.Resources(sky.GCP(), accelerators={'V100': 4}),\n- # sky.Resources(sky.AWS(), accelerators='V100', use_spot=True),\n- # sky.Resources(sky.AWS(), accelerators={'V100': 8}),\n+ sky.Resources(infra='aws', accelerators='V100'),\n+ # sky.Resources(infra='gcp', accelerators={'V100': 4}),\n+ # sky.Resources(infra='aws', accelerators='V100', use_spot=True),\n+ # sky.Resources(infra='aws', accelerators={'V100': 8}),\n })\n \n # Optionally, specify a time estimator: Resources -> time in seconds.\ndiff --git a/examples/resnet_app.yaml b/examples/resnet_app.yaml\nindex 4a37d332415..473dcea173c 100644\n--- a/examples/resnet_app.yaml\n+++ b/examples/resnet_app.yaml\n@@ -1,7 +1,7 @@\n name: resnet-app\n \n resources:\n- cloud: aws\n+ infra: aws\n accelerators:\n V100: 1\n \ndiff --git a/examples/resnet_app_storage.py b/examples/resnet_app_storage.py\nindex 9d8063ea6ab..707acecf11f 100644\n--- a/examples/resnet_app_storage.py\n+++ b/examples/resnet_app_storage.py\n@@ -71,7 +71,7 @@\n train.set_inputs('s3://imagenet-bucket', estimated_size_gigabytes=150)\n train.set_outputs('resnet-model-dir', 
estimated_size_gigabytes=0.1)\n train.set_resources({\n- sky.Resources(sky.AWS(), 'p3.2xlarge'),\n+ sky.Resources(infra='aws', instance_type='p3.2xlarge'),\n })\n \n sky.launch(dag)\ndiff --git a/examples/resnet_app_storage.yaml b/examples/resnet_app_storage.yaml\nindex 7a3ddd81b57..b6747ab5614 100644\n--- a/examples/resnet_app_storage.yaml\n+++ b/examples/resnet_app_storage.yaml\n@@ -2,7 +2,7 @@ name: resnet-app-storage\n workdir: ~/Downloads/tpu\n \n resources:\n- cloud: aws\n+ infra: aws\n instance_type: p3.2xlarge\n \n inputs: {\ndiff --git a/examples/resnet_app_storage_spot.yaml b/examples/resnet_app_storage_spot.yaml\nindex 0d4a3fec840..27ed558b4fc 100644\n--- a/examples/resnet_app_storage_spot.yaml\n+++ b/examples/resnet_app_storage_spot.yaml\n@@ -1,7 +1,7 @@\n name: resnet-app-storage\n \n resources:\n- cloud: aws\n+ infra: aws\n accelerators: V100\n use_spot: true\n spot_recovery: failover\ndiff --git a/examples/resnet_distributed_tf_app.py b/examples/resnet_distributed_tf_app.py\nindex 62befbbb313..2df1705e386 100644\n--- a/examples/resnet_distributed_tf_app.py\n+++ b/examples/resnet_distributed_tf_app.py\n@@ -7,7 +7,7 @@\n import sky\n \n \n-def run(cluster: Optional[str] = None, cloud: Optional[str] = None):\n+def run(cluster: Optional[str] = None, infra: Optional[str] = None):\n if cluster is None:\n # (username, last 4 chars of hash of hostname): for uniquefying users on\n # shared-account cloud providers.\n@@ -75,19 +75,17 @@ def run_fn(node_rank: int, ip_list: List[str]) -> Optional[str]:\n train.set_inputs('gs://cloud-tpu-test-datasets/fake_imagenet',\n estimated_size_gigabytes=70)\n train.set_outputs('resnet-model-dir', estimated_size_gigabytes=0.1)\n- train.set_resources(\n- sky.Resources(sky.CLOUD_REGISTRY.from_str(cloud),\n- accelerators='V100'))\n+ train.set_resources(sky.Resources(infra=infra, accelerators='V100'))\n \n sky.launch(dag, cluster_name=cluster, retry_until_up=True)\n \n \n if __name__ == '__main__':\n cluster = None\n- cloud = None\n+ infra = None\n if len(sys.argv) > 1:\n # For smoke test passing in a cluster name.\n cluster = sys.argv[1]\n if len(sys.argv) > 2:\n- cloud = sys.argv[2]\n- run(cluster, cloud)\n+ infra = sys.argv[2]\n+ run(cluster, infra)\ndiff --git a/examples/resnet_distributed_torch_app.py b/examples/resnet_distributed_torch_app.py\nindex 1bc38886536..9b31419cf85 100644\n--- a/examples/resnet_distributed_torch_app.py\n+++ b/examples/resnet_distributed_torch_app.py\n@@ -35,19 +35,19 @@ def run_fn(node_rank: int, ip_list: List[str]) -> Optional[str]:\n \n train.set_resources({\n ##### Fully specified\n- sky.Resources(sky.AWS(), 'p3.2xlarge'),\n- # sky.Resources(sky.GCP(), 'n1-standard-16'),\n+ sky.Resources(infra='aws', instance_type='p3.2xlarge'),\n+ # sky.Resources(infra='gcp', instance_type='n1-standard-16'),\n #sky.Resources(\n- # sky.GCP(),\n- # 'n1-standard-8',\n+ # infra='gcp',\n+ # instance_type='n1-standard-8',\n # Options: 'V100', {'V100': <num>}.\n- # 'V100',\n+ # accelerators='V100',\n #),\n ##### Partially specified\n #sky.Resources(accelerators='V100'),\n # sky.Resources(accelerators='tpu-v3-8'),\n- # sky.Resources(sky.AWS(), accelerators={'V100': 4}),\n- # sky.Resources(sky.AWS(), accelerators='V100'),\n+ # sky.Resources(infra='aws', accelerators={'V100': 4}),\n+ # sky.Resources(infra='aws', accelerators='V100'),\n })\n \n sky.launch(train, cluster_name='dth')\ndiff --git a/examples/resnet_distributed_torch_with_script.yaml b/examples/resnet_distributed_torch_with_script.yaml\nindex a492e4878b3..b3a07b227d9 100644\n--- 
a/examples/resnet_distributed_torch_with_script.yaml\n+++ b/examples/resnet_distributed_torch_with_script.yaml\n@@ -2,7 +2,7 @@ name: resnet-distributed-app\n \n \n resources:\n- cloud: aws\n+ infra: aws\n accelerators: V100\n \n num_nodes: 2\ndiff --git a/examples/serve/spot_policy/dynamic_on_demand_fallback.yaml b/examples/serve/spot_policy/dynamic_on_demand_fallback.yaml\nindex faa5e9f08ff..d5dd35a189a 100644\n--- a/examples/serve/spot_policy/dynamic_on_demand_fallback.yaml\n+++ b/examples/serve/spot_policy/dynamic_on_demand_fallback.yaml\n@@ -11,8 +11,8 @@ service:\n \n resources:\n any_of:\n- - zone: us-central1-a\n- - region: us-east1\n+ - infra: gcp/*/us-central1-a\n+ - infra: gcp/us-east1\n ports: 8081\n cpus: 2+\n # use_spot is needed for ondemand fallback\ndiff --git a/examples/spot/lightning_cifar10.yaml b/examples/spot/lightning_cifar10.yaml\nindex 2a9aa0c761c..b8fbb11bb7e 100644\n--- a/examples/spot/lightning_cifar10.yaml\n+++ b/examples/spot/lightning_cifar10.yaml\n@@ -2,7 +2,7 @@ name: lit\n \n resources:\n accelerators: V100:1\n- cloud: aws\n+ infra: aws\n use_spot: true\n spot_recovery: FAILOVER\n \ndiff --git a/examples/spot/resnet.yaml b/examples/spot/resnet.yaml\nindex 54c13489f1a..7b439fff848 100644\n--- a/examples/spot/resnet.yaml\n+++ b/examples/spot/resnet.yaml\n@@ -10,7 +10,7 @@ name: resnet\n \n resources:\n accelerators: V100\n- cloud: aws\n+ infra: aws\n use_spot: true\n spot_recovery: FAILOVER\n \ndiff --git a/examples/storage/checkpointed_training.yaml b/examples/storage/checkpointed_training.yaml\nindex 7d96e9634ca..c0fc8c6b2bd 100644\n--- a/examples/storage/checkpointed_training.yaml\n+++ b/examples/storage/checkpointed_training.yaml\n@@ -20,7 +20,7 @@ name: resnet-distributed-app\n \n resources:\n accelerators: V100\n- cloud: aws\n+ infra: aws\n \n num_nodes: 1\n \ndiff --git a/examples/storage/hostname_echo_demo.yaml b/examples/storage/hostname_echo_demo.yaml\nindex d90edbc9ebd..1769593f601 100644\n--- a/examples/storage/hostname_echo_demo.yaml\n+++ b/examples/storage/hostname_echo_demo.yaml\n@@ -10,7 +10,7 @@\n name: hostecho-demo\n \n resources:\n- cloud: aws\n+ infra: aws\n instance_type: m5.2xlarge\n \n num_nodes: 2\ndiff --git a/examples/storage/pingpong.yaml b/examples/storage/pingpong.yaml\nindex fae72ab7f6a..345ade16162 100644\n--- a/examples/storage/pingpong.yaml\n+++ b/examples/storage/pingpong.yaml\n@@ -14,7 +14,7 @@ name: pingpong\n num_nodes: 2\n \n resources:\n- cloud: gcp\n+ infra: gcp\n \n file_mounts:\n /sharedfs:\ndiff --git a/examples/tensorboard_app.py b/examples/tensorboard_app.py\nindex a5432ee7a6f..e181c8dac7f 100644\n--- a/examples/tensorboard_app.py\n+++ b/examples/tensorboard_app.py\n@@ -19,7 +19,7 @@\n cd models && pip install -e .)'\n \n task = sky.Task('setup', workdir=workdir, setup=setup)\n- task.set_resources(sky.Resources(sky.AWS(), accelerators={'V100': 1}))\n+ task.set_resources(sky.Resources(infra='aws', accelerators={'V100': 1}))\n sky.stream_and_get(sky.launch(dag, cluster_name='tb'))\n \n # Run the training task.\ndiff --git a/examples/tensorflow_distributed/tf_distributed.yaml b/examples/tensorflow_distributed/tf_distributed.yaml\nindex beb6ad4b96e..0d59a538c30 100644\n--- a/examples/tensorflow_distributed/tf_distributed.yaml\n+++ b/examples/tensorflow_distributed/tf_distributed.yaml\n@@ -7,7 +7,7 @@\n # sky down myclus\n \n resources:\n- cloud: gcp\n+ infra: gcp\n accelerators: V100:1 # Provision 1 V100 GPU per node\n \n # Provision 2 nodes, giving us a total of 2 GPUs in the cluster\ndiff --git 
a/examples/timm_app.py b/examples/timm_app.py\nindex 72ac53509c3..d3e9e4dd147 100644\n--- a/examples/timm_app.py\n+++ b/examples/timm_app.py\n@@ -49,6 +49,6 @@ def clone_project():\n # Download from GCS.\n '/tmp/fake_imagenet': 'gs://cloud-tpu-test-datasets/fake_imagenet',\n })\n- train.set_resources({sky.Resources(sky.AWS(), accelerators='V100')})\n+ train.set_resources({sky.Resources(infra='aws', accelerators='V100')})\n \n sky.launch(dag)\ndiff --git a/examples/torch_ddp_benchmark/torch_ddp_benchmark.yaml b/examples/torch_ddp_benchmark/torch_ddp_benchmark.yaml\nindex 36278961006..6c4c627aa4b 100644\n--- a/examples/torch_ddp_benchmark/torch_ddp_benchmark.yaml\n+++ b/examples/torch_ddp_benchmark/torch_ddp_benchmark.yaml\n@@ -30,7 +30,7 @@ num_nodes: 2\n resources:\n accelerators: A100:8 # Make sure you use 8 GPU instances\n use_spot: True\n- cloud: gcp\n+ infra: gcp\n \n file_mounts: \n ./torch_ddp_benchmark.py: ./examples/torch_ddp_benchmark/torch_ddp_benchmark.py\ndiff --git a/examples/using_file_mounts.yaml b/examples/using_file_mounts.yaml\nindex fb7110ac705..5b6783efc9f 100644\n--- a/examples/using_file_mounts.yaml\n+++ b/examples/using_file_mounts.yaml\n@@ -10,7 +10,7 @@\n # commands may require flags to follow symlinks (e.g., ls -H; du -L).\n \n resources:\n- cloud: aws\n+ infra: aws\n cpus: 2+\n \n workdir: .\ndiff --git a/examples/using_file_mounts_with_env_vars.yaml b/examples/using_file_mounts_with_env_vars.yaml\nindex 100aa3d15c9..11fcd99bb32 100644\n--- a/examples/using_file_mounts_with_env_vars.yaml\n+++ b/examples/using_file_mounts_with_env_vars.yaml\n@@ -8,7 +8,7 @@ envs:\n MODEL_SIZE: 13b\n \n resources:\n- cloud: gcp\n+ infra: gcp\n \n # You can use env vars in\n # - the destination: source paths\ndiff --git a/llm/axolotl/axolotl-docker.yaml b/llm/axolotl/axolotl-docker.yaml\nindex b883ebdde46..25caf8ae408 100644\n--- a/llm/axolotl/axolotl-docker.yaml\n+++ b/llm/axolotl/axolotl-docker.yaml\n@@ -5,7 +5,7 @@ name: axolotl\n \n resources:\n accelerators: L4:1\n- cloud: gcp # optional\n+ infra: gcp # optional\n \n workdir: mistral\n \ndiff --git a/llm/axolotl/axolotl-spot.yaml b/llm/axolotl/axolotl-spot.yaml\nindex 0e04ba11992..e6c04f1bca7 100644\n--- a/llm/axolotl/axolotl-spot.yaml\n+++ b/llm/axolotl/axolotl-spot.yaml\n@@ -10,7 +10,7 @@ name: axolotl\n \n resources:\n accelerators: A100:1\n- cloud: gcp # optional\n+ infra: gcp # optional\n use_spot: True\n image_id: docker:winglian/axolotl:main-py3.10-cu118-2.0.1\n \ndiff --git a/llm/batch_inference/compute_text_vectors.yaml b/llm/batch_inference/compute_text_vectors.yaml\nindex 259bd685294..df197fcba80 100644\n--- a/llm/batch_inference/compute_text_vectors.yaml\n+++ b/llm/batch_inference/compute_text_vectors.yaml\n@@ -6,7 +6,7 @@ resources:\n cpus: 4\n accelerators: \n L4: 1\n- cloud: aws\n+ infra: aws\n any_of:\n - use_spot: true\n - use_spot: false\n@@ -83,4 +83,4 @@ run: |\n \n # Clean up vLLM service\n pkill -f \"python -m vllm.entrypoints.openai.api_server\"\n- echo \"vLLM service has been stopped\" \n\\ No newline at end of file\n+ echo \"vLLM service has been stopped\" \ndiff --git a/llm/batch_inference/monitor_progress.yaml b/llm/batch_inference/monitor_progress.yaml\nindex 8f59b43325b..623d0df1dad 100644\n--- a/llm/batch_inference/monitor_progress.yaml\n+++ b/llm/batch_inference/monitor_progress.yaml\n@@ -5,7 +5,7 @@ workdir: .\n resources:\n cpus: 2\n memory: 8+\n- cloud: aws\n+ infra: aws\n ports:\n - 8000\n \n@@ -26,4 +26,4 @@ setup: |\n pip install pandas pyarrow plotly\n \n run: |\n- python 
scripts/monitor_progress.py --metrics-dir /output/metrics \n\\ No newline at end of file\n+ python scripts/monitor_progress.py --metrics-dir /output/metrics \ndiff --git a/llm/gpt-2/gpt2-pipeline.yaml b/llm/gpt-2/gpt2-pipeline.yaml\nindex e5ea05f7948..5d9b9d34164 100644\n--- a/llm/gpt-2/gpt2-pipeline.yaml\n+++ b/llm/gpt-2/gpt2-pipeline.yaml\n@@ -46,13 +46,13 @@ resources:\n any_of:\n # Avoid using docker image for lambda due to the docker is not supported on\n # Lambda yet, but the base image works.\n- - cloud: lambda\n+ - infra: lambda\n image_id: null\n- - cloud: aws\n- - cloud: gcp\n- - cloud: azure\n- - cloud: fluidstack\n- - cloud: kubernetes\n+ - infra: aws\n+ - infra: gcp\n+ - infra: azure\n+ - infra: fluidstack\n+ - infra: kubernetes\n \n file_mounts:\n ~/.cache/huggingface:\ndiff --git a/llm/gpt-2/gpt2-train.yaml b/llm/gpt-2/gpt2-train.yaml\nindex 3a4e8c28d14..b3d48a67bd0 100644\n--- a/llm/gpt-2/gpt2-train.yaml\n+++ b/llm/gpt-2/gpt2-train.yaml\n@@ -11,13 +11,13 @@ resources:\n any_of:\n # Avoid using docker image for lambda due to the docker is not supported on\n # Lambda yet, but the base image works.\n- - cloud: lambda\n+ - infra: lambda\n image_id: null\n- - cloud: aws\n- - cloud: gcp\n- - cloud: azure\n- - cloud: fluidstack\n- - cloud: kubernetes\n+ - infra: aws\n+ - infra: gcp\n+ - infra: azure\n+ - infra: fluidstack\n+ - infra: kubernetes\n \n file_mounts:\n ~/.cache/huggingface:\ndiff --git a/llm/gpt-2/gpt2.yaml b/llm/gpt-2/gpt2.yaml\nindex 8e203772128..6ede787d178 100644\n--- a/llm/gpt-2/gpt2.yaml\n+++ b/llm/gpt-2/gpt2.yaml\n@@ -7,13 +7,13 @@ resources:\n any_of:\n # Avoid using docker image for lambda due to the docker is not supported on\n # Lambda yet, but the base image works.\n- - cloud: lambda\n+ - infra: lambda\n image_id: null\n- - cloud: aws\n- - cloud: gcp\n- - cloud: azure\n- - cloud: fluidstack\n- - cloud: kubernetes\n+ - infra: aws\n+ - infra: gcp\n+ - infra: azure\n+ - infra: fluidstack\n+ - infra: kubernetes\n \n \n setup: |\ndiff --git a/llm/rag/build_rag.yaml b/llm/rag/build_rag.yaml\nindex 9323f661903..ffd64911de0 100644\n--- a/llm/rag/build_rag.yaml\n+++ b/llm/rag/build_rag.yaml\n@@ -4,7 +4,7 @@ workdir: .\n \n resources:\n memory: 32+ # Need more memory for merging embeddings\n- cloud: aws\n+ infra: aws\n \n envs:\n EMBEDDINGS_BUCKET_NAME: sky-rag-embeddings\ndiff --git a/sky/backends/backend_utils.py b/sky/backends/backend_utils.py\nindex 3a1333e30ab..c33ffa80a88 100644\n--- a/sky/backends/backend_utils.py\n+++ b/sky/backends/backend_utils.py\n@@ -2570,7 +2570,10 @@ def _update_record_with_credentials_and_resources_str(\n if handle is None:\n return\n record['resources_str'] = resources_utils.get_readable_resources_repr(\n- handle)\n+ handle, simplify=True)\n+ record[\n+ 'resources_str_full'] = resources_utils.get_readable_resources_repr(\n+ handle, simplify=False)\n credentials = ssh_credential_from_yaml(handle.cluster_yaml,\n handle.docker_user,\n handle.ssh_user)\ndiff --git a/sky/backends/cloud_vm_ray_backend.py b/sky/backends/cloud_vm_ray_backend.py\nindex 48eb2ac7e0d..33dd56029c9 100644\n--- a/sky/backends/cloud_vm_ray_backend.py\n+++ b/sky/backends/cloud_vm_ray_backend.py\n@@ -8,7 +8,6 @@\n import pathlib\n import re\n import shlex\n-import shutil\n import signal\n import subprocess\n import sys\n@@ -2157,11 +2156,18 @@ def provision_with_retries(\n # possible resources or the requested resources is too\n # restrictive. 
If we reach here, our failover logic finally\n # ends here.\n- table = log_utils.create_table(['Resource', 'Reason'])\n+ table = log_utils.create_table(['INFRA', 'RESOURCES', 'REASON'])\n for (resource, exception) in resource_exceptions.items():\n- table.add_row(\n- [resources_utils.format_resource(resource), exception])\n- table.max_table_width = shutil.get_terminal_size().columns\n+ table.add_row([\n+ resource.infra.formatted_str(),\n+ resources_utils.format_resource(resource,\n+ simplify=True),\n+ exception\n+ ])\n+ # Set the max width of REASON column to 80 to avoid the table\n+ # being wrapped in a unreadable way.\n+ # pylint: disable=protected-access\n+ table._max_width = {'REASON': 80}\n raise exceptions.ResourcesUnavailableError(\n _RESOURCES_UNAVAILABLE_LOG + '\\n' + table.get_string(),\n failover_history=failover_history)\ndiff --git a/sky/check.py b/sky/check.py\nindex 65a3f92366e..6663e508748 100644\n--- a/sky/check.py\n+++ b/sky/check.py\n@@ -34,7 +34,7 @@ def check_capabilities(\n echo = (lambda *_args, **_kwargs: None\n ) if quiet else lambda *args, **kwargs: click.echo(\n *args, **kwargs, color=True)\n- echo('Checking credentials to enable clouds for SkyPilot.')\n+ echo('Checking credentials to enable infra for SkyPilot.')\n if capabilities is None:\n capabilities = sky_cloud.ALL_CAPABILITIES\n assert capabilities is not None\n@@ -189,7 +189,7 @@ def get_all_clouds():\n key=lambda item: item[0])\n ])\n echo(f'\\n{colorama.Fore.GREEN}{PARTY_POPPER_EMOJI} '\n- f'Enabled clouds {PARTY_POPPER_EMOJI}'\n+ f'Enabled infra {PARTY_POPPER_EMOJI}'\n f'{colorama.Style.RESET_ALL}{enabled_clouds_str}')\n return enabled_clouds\n \ndiff --git a/sky/cli.py b/sky/cli.py\nindex bd067d95461..88e39fd398b 100644\n--- a/sky/cli.py\n+++ b/sky/cli.py\n@@ -78,6 +78,7 @@\n from sky.utils import controller_utils\n from sky.utils import dag_utils\n from sky.utils import env_options\n+from sky.utils import infra_utils\n from sky.utils import log_utils\n from sky.utils import registry\n from sky.utils import resources_utils\n@@ -345,24 +346,39 @@ def return_option_decorator(func):\n 'where the task will be invoked. '\n 'Overrides the \"workdir\" config in the YAML if both are supplied.'\n )),\n+ click.option(\n+ '--infra',\n+ required=False,\n+ type=str,\n+ help='Infrastructure to use. '\n+ 'Format: cloud, cloud/region, cloud/region/zone, '\n+ 'or kubernetes/context-name. '\n+ 'Examples: aws, aws/us-east-1, aws/us-east-1/us-east-1a, '\n+ # TODO(zhwu): we have to use `\\*` to make sure the docs build\n+ # not complaining about the `*`, but this will cause `--help`\n+ # to show `\\*` instead of `*`.\n+ 'aws/\\\\*/us-east-1a, kubernetes/my-cluster-context.'),\n click.option(\n '--cloud',\n required=False,\n type=str,\n help=('The cloud to use. If specified, overrides the \"resources.cloud\" '\n- 'config. Passing \"none\" resets the config.')),\n+ 'config. Passing \"none\" resets the config.'),\n+ hidden=True),\n click.option(\n '--region',\n required=False,\n type=str,\n help=('The region to use. If specified, overrides the '\n- '\"resources.region\" config. Passing \"none\" resets the config.')),\n+ '\"resources.region\" config. Passing \"none\" resets the config.'),\n+ hidden=True),\n click.option(\n '--zone',\n required=False,\n type=str,\n help=('The zone to use. If specified, overrides the '\n- '\"resources.zone\" config. Passing \"none\" resets the config.')),\n+ '\"resources.zone\" config. 
Passing \"none\" resets the config.'),\n+ hidden=True),\n click.option(\n '--num-nodes',\n required=False,\n@@ -1063,6 +1079,33 @@ def cli():\n pass\n \n \n+def _handle_infra_cloud_region_zone_options(infra: Optional[str],\n+ cloud: Optional[str],\n+ region: Optional[str],\n+ zone: Optional[str]):\n+ \"\"\"Handle the backward compatibility for --infra and --cloud/region/zone.\n+\n+ Returns:\n+ cloud, region, zone\n+ \"\"\"\n+ if cloud is not None or region is not None or zone is not None:\n+ click.secho(\n+ 'The --cloud, --region, and --zone options are deprecated. '\n+ 'Use --infra instead.',\n+ fg='yellow')\n+ if infra is not None:\n+ with ux_utils.print_exception_no_traceback():\n+ raise ValueError('Cannot specify both --infra and '\n+ '--cloud, --region, or --zone.')\n+\n+ if infra is not None:\n+ infra_info = infra_utils.InfraInfo.from_str(infra)\n+ cloud = infra_info.cloud\n+ region = infra_info.region\n+ zone = infra_info.zone\n+ return cloud, region, zone\n+\n+\n @cli.command(cls=_DocumentedCodeCommand)\n @config_option(expose_value=True)\n @click.argument('entrypoint',\n@@ -1172,6 +1215,7 @@ def launch(\n backend_name: Optional[str],\n name: Optional[str],\n workdir: Optional[str],\n+ infra: Optional[str],\n cloud: Optional[str],\n region: Optional[str],\n zone: Optional[str],\n@@ -1219,6 +1263,9 @@ def launch(\n if backend_name is None:\n backend_name = backends.CloudVmRayBackend.NAME\n \n+ cloud, region, zone = _handle_infra_cloud_region_zone_options(\n+ infra, cloud, region, zone)\n+\n task_or_dag = _make_task_or_dag_from_entrypoint_with_overrides(\n entrypoint=entrypoint,\n name=name,\n@@ -1336,6 +1383,7 @@ def exec(cluster: Optional[str],\n entrypoint: Tuple[str, ...],\n detach_run: bool,\n name: Optional[str],\n+ infra: Optional[str],\n cloud: Optional[str],\n region: Optional[str],\n zone: Optional[str],\n@@ -1427,6 +1475,9 @@ def exec(cluster: Optional[str],\n controller_utils.check_cluster_name_not_controller(\n cluster, operation_str='Executing task on it')\n \n+ cloud, region, zone = _handle_infra_cloud_region_zone_options(\n+ infra, cloud, region, zone)\n+\n task_or_dag = _make_task_or_dag_from_entrypoint_with_overrides(\n entrypoint=entrypoint,\n name=name,\n@@ -3265,7 +3316,7 @@ def _down_or_stop(name: str):\n \n @cli.command(cls=_DocumentedCodeCommand)\n @config_option(expose_value=False)\[email protected]('clouds', required=False, type=str, nargs=-1)\[email protected]('infra_list', required=False, type=str, nargs=-1)\n @click.option('--verbose',\n '-v',\n is_flag=True,\n@@ -3273,7 +3324,7 @@ def _down_or_stop(name: str):\n help='Show the activated account for each cloud.')\n @usage_lib.entrypoint\n # pylint: disable=redefined-outer-name\n-def check(clouds: Tuple[str], verbose: bool):\n+def check(infra_list: Tuple[str], verbose: bool):\n \"\"\"Check which clouds are available to use.\n \n This checks access credentials for all clouds supported by SkyPilot. 
If a\n@@ -3295,8 +3346,8 @@ def check(clouds: Tuple[str], verbose: bool):\n # Check only specific clouds - AWS and GCP.\n sky check aws gcp\n \"\"\"\n- clouds_arg = clouds if len(clouds) > 0 else None\n- request_id = sdk.check(clouds=clouds_arg, verbose=verbose)\n+ infra_arg = infra_list if len(infra_list) > 0 else None\n+ request_id = sdk.check(infra_list=infra_arg, verbose=verbose)\n sdk.stream_and_get(request_id)\n api_server_url = server_common.get_server_url()\n click.echo()\n@@ -3312,10 +3363,15 @@ def check(clouds: Tuple[str], verbose: bool):\n is_flag=True,\n default=False,\n help='Show details of all GPU/TPU/accelerator offerings.')\[email protected]('--infra',\n+ default=None,\n+ type=str,\n+ help='Infrastructure to query. Examples: \"aws\", \"aws/us-east-1\"')\n @click.option('--cloud',\n default=None,\n type=str,\n- help='Cloud provider to query.')\n+ help='Cloud provider to query.',\n+ hidden=True)\n @click.option(\n '--region',\n required=False,\n@@ -3323,6 +3379,7 @@ def check(clouds: Tuple[str], verbose: bool):\n help=\n ('The region to use. If not specified, shows accelerators from all regions.'\n ),\n+ hidden=True,\n )\n @click.option(\n '--all-regions',\n@@ -3335,6 +3392,7 @@ def check(clouds: Tuple[str], verbose: bool):\n def show_gpus(\n accelerator_str: Optional[str],\n all: bool, # pylint: disable=redefined-builtin\n+ infra: Optional[str],\n cloud: Optional[str],\n region: Optional[str],\n all_regions: Optional[bool]):\n@@ -3376,6 +3434,11 @@ def show_gpus(\n * ``UTILIZATION`` (Kubernetes only): Total number of GPUs free / available\n in the Kubernetes cluster.\n \"\"\"\n+ cloud, region, _ = _handle_infra_cloud_region_zone_options(infra,\n+ cloud,\n+ region,\n+ zone=None)\n+\n # validation for the --region flag\n if region is not None and cloud is None:\n raise click.UsageError(\n@@ -3991,6 +4054,7 @@ def jobs_launch(\n name: Optional[str],\n cluster: Optional[str],\n workdir: Optional[str],\n+ infra: Optional[str],\n cloud: Optional[str],\n region: Optional[str],\n zone: Optional[str],\n@@ -4032,6 +4096,8 @@ def jobs_launch(\n 'Use one of the flags as they are alias.')\n name = cluster\n env = _merge_env_vars(env_file, env)\n+ cloud, region, zone = _handle_infra_cloud_region_zone_options(\n+ infra, cloud, region, zone)\n task_or_dag = _make_task_or_dag_from_entrypoint_with_overrides(\n entrypoint,\n name=name,\n@@ -4509,6 +4575,7 @@ def serve_up(\n service_yaml: Tuple[str, ...],\n service_name: Optional[str],\n workdir: Optional[str],\n+ infra: Optional[str],\n cloud: Optional[str],\n region: Optional[str],\n zone: Optional[str],\n@@ -4555,6 +4622,8 @@ def serve_up(\n \n sky serve up service.yaml\n \"\"\"\n+ cloud, region, zone = _handle_infra_cloud_region_zone_options(\n+ infra, cloud, region, zone)\n if service_name is None:\n service_name = serve_lib.generate_service_name()\n \n@@ -4621,13 +4690,13 @@ def serve_up(\n @timeline.event\n @usage_lib.entrypoint\n def serve_update(service_name: str, service_yaml: Tuple[str, ...],\n- workdir: Optional[str], cloud: Optional[str],\n- region: Optional[str], zone: Optional[str],\n- num_nodes: Optional[int], use_spot: Optional[bool],\n- image_id: Optional[str], env_file: Optional[Dict[str, str]],\n- env: List[Tuple[str, str]], gpus: Optional[str],\n- instance_type: Optional[str], ports: Tuple[str],\n- cpus: Optional[str], memory: Optional[str],\n+ workdir: Optional[str], infra: Optional[str],\n+ cloud: Optional[str], region: Optional[str],\n+ zone: Optional[str], num_nodes: Optional[int],\n+ use_spot: Optional[bool], 
image_id: Optional[str],\n+ env_file: Optional[Dict[str, str]], env: List[Tuple[str, str]],\n+ gpus: Optional[str], instance_type: Optional[str],\n+ ports: Tuple[str], cpus: Optional[str], memory: Optional[str],\n disk_size: Optional[int], disk_tier: Optional[str], mode: str,\n yes: bool, async_call: bool):\n \"\"\"Update a SkyServe service.\n@@ -4659,6 +4728,8 @@ def serve_update(service_name: str, service_yaml: Tuple[str, ...],\n sky serve update --mode blue_green sky-service-16aa new_service.yaml\n \n \"\"\"\n+ cloud, region, zone = _handle_infra_cloud_region_zone_options(\n+ infra, cloud, region, zone)\n task = _generate_task_with_service(\n service_name=service_name,\n service_yaml_args=service_yaml,\n@@ -5173,6 +5244,7 @@ def benchmark_launch(\n benchmark: str,\n name: Optional[str],\n workdir: Optional[str],\n+ infra: Optional[str],\n cloud: Optional[str],\n region: Optional[str],\n zone: Optional[str],\n@@ -5206,7 +5278,6 @@ def benchmark_launch(\n raise click.BadParameter(f'Benchmark {benchmark} already exists. '\n 'To delete the previous benchmark result, '\n f'run `sky bench delete {benchmark}`.')\n-\n entrypoint = ' '.join(entrypoint)\n if not entrypoint:\n raise click.BadParameter('Please specify a task yaml to benchmark.')\n@@ -5217,6 +5288,8 @@ def benchmark_launch(\n 'Sky Benchmark does not support command line tasks. '\n 'Please provide a YAML file.')\n assert config is not None, (is_yaml, config)\n+ cloud, region, zone = _handle_infra_cloud_region_zone_options(\n+ infra, cloud, region, zone)\n \n click.secho('Benchmarking a task from YAML: ', fg='cyan', nl=False)\n click.secho(entrypoint, bold=True)\ndiff --git a/sky/client/sdk.py b/sky/client/sdk.py\nindex e5c38550aa8..a21cf70e811 100644\n--- a/sky/client/sdk.py\n+++ b/sky/client/sdk.py\n@@ -42,6 +42,7 @@\n from sky.utils import common_utils\n from sky.utils import dag_utils\n from sky.utils import env_options\n+from sky.utils import infra_utils\n from sky.utils import rich_utils\n from sky.utils import status_lib\n from sky.utils import subprocess_utils\n@@ -87,12 +88,12 @@ def stream_response(request_id: Optional[str],\n @usage_lib.entrypoint\n @server_common.check_server_healthy_or_start\n @annotations.client_api\n-def check(clouds: Optional[Tuple[str]],\n+def check(infra_list: Optional[Tuple[str, ...]],\n verbose: bool) -> server_common.RequestId:\n \"\"\"Checks the credentials to enable clouds.\n \n Args:\n- clouds: The clouds to check.\n+ infra: The infra to check.\n verbose: Whether to show verbose output.\n \n Returns:\n@@ -101,6 +102,22 @@ def check(clouds: Optional[Tuple[str]],\n Request Returns:\n None\n \"\"\"\n+ if infra_list is None:\n+ clouds = None\n+ else:\n+ specified_clouds = []\n+ for infra_str in infra_list:\n+ infra = infra_utils.InfraInfo.from_str(infra_str)\n+ if infra.cloud is None:\n+ with ux_utils.print_exception_no_traceback():\n+ raise ValueError(f'Invalid infra to check: {infra_str}')\n+ if infra.region is not None or infra.zone is not None:\n+ region_zone = infra_str.partition('/')[-1]\n+ logger.warning(f'Infra {infra_str} is specified, but `check` '\n+ f'only supports checking {infra.cloud}, '\n+ f'ignoring {region_zone}')\n+ specified_clouds.append(infra.cloud)\n+ clouds = tuple(specified_clouds)\n body = payloads.CheckBody(clouds=clouds, verbose=verbose)\n response = requests.post(f'{server_common.get_server_url()}/check',\n json=json.loads(body.model_dump_json()),\n@@ -344,7 +361,7 @@ def launch(\n import sky\n task = sky.Task(run='echo hello SkyPilot')\n task.set_resources(\n- 
sky.Resources(cloud=sky.AWS(), accelerators='V100:4'))\n+ sky.Resources(infra='aws', accelerators='V100:4'))\n sky.launch(task, cluster_name='my-cluster')\n \n \ndiff --git a/sky/dashboard/src/components/clusters.jsx b/sky/dashboard/src/components/clusters.jsx\nindex e3b88e2cf46..804cdcba57e 100755\n--- a/sky/dashboard/src/components/clusters.jsx\n+++ b/sky/dashboard/src/components/clusters.jsx\n@@ -7,7 +7,10 @@\n \n import React, { useState, useEffect } from 'react';\n import { CircularProgress } from '@mui/material';\n-import { CustomTooltip as Tooltip } from '@/components/utils';\n+import {\n+ CustomTooltip as Tooltip,\n+ NonCapitalizedTooltip,\n+} from '@/components/utils';\n import Link from 'next/link';\n import { Button } from '@/components/ui/button';\n import { Card } from '@/components/ui/card';\n@@ -228,15 +231,15 @@ export function ClusterTable({\n </TableHead>\n <TableHead\n className=\"sortable whitespace-nowrap\"\n- onClick={() => requestSort('resources_str')}\n+ onClick={() => requestSort('infra')}\n >\n- Resources{getSortDirection('resources_str')}\n+ Infra{getSortDirection('infra')}\n </TableHead>\n <TableHead\n className=\"sortable whitespace-nowrap\"\n- onClick={() => requestSort('region')}\n+ onClick={() => requestSort('resources_str')}\n >\n- Region{getSortDirection('region')}\n+ Resources{getSortDirection('resources_str')}\n </TableHead>\n <TableHead\n className=\"sortable whitespace-nowrap\"\n@@ -277,8 +280,22 @@ export function ClusterTable({\n </Link>\n </TableCell>\n <TableCell>{item.user}</TableCell>\n- <TableCell>{item.resources_str}</TableCell>\n- <TableCell>{item.region}</TableCell>\n+ <TableCell>\n+ <NonCapitalizedTooltip\n+ content={item.full_infra || item.infra}\n+ className=\"text-sm text-muted-foreground\"\n+ >\n+ <span>{item.infra}</span>\n+ </NonCapitalizedTooltip>\n+ </TableCell>\n+ <TableCell>\n+ <NonCapitalizedTooltip\n+ content={item.resources_str_full || item.resources_str}\n+ className=\"text-sm text-muted-foreground\"\n+ >\n+ <span>{item.resources_str}</span>\n+ </NonCapitalizedTooltip>\n+ </TableCell>\n <TableCell>{relativeTime(item.time)}</TableCell>\n <TableCell className=\"text-left\">\n <Status2Actions\ndiff --git a/sky/dashboard/src/components/jobs.jsx b/sky/dashboard/src/components/jobs.jsx\nindex 98b44e277dd..4ae9854f743 100755\n--- a/sky/dashboard/src/components/jobs.jsx\n+++ b/sky/dashboard/src/components/jobs.jsx\n@@ -21,7 +21,11 @@ import { formatDuration } from '@/components/utils';\n import { getManagedJobs } from '@/data/connectors/jobs';\n import { getClusters } from '@/data/connectors/clusters';\n import { Layout } from '@/components/elements/layout';\n-import { CustomTooltip as Tooltip, relativeTime } from '@/components/utils';\n+import {\n+ CustomTooltip as Tooltip,\n+ NonCapitalizedTooltip,\n+ relativeTime,\n+} from '@/components/utils';\n import {\n FileSearchIcon,\n RotateCwIcon,\n@@ -490,22 +494,23 @@ export function ManagedJobsTable({\n </TableHead>\n <TableHead\n className=\"sortable whitespace-nowrap\"\n- onClick={() => requestSort('resources')}\n+ onClick={() => requestSort('resources_str')}\n >\n- Resources{getSortDirection('resources')}\n+ Requested{getSortDirection('resources_str')}\n </TableHead>\n <TableHead\n className=\"sortable whitespace-nowrap\"\n- onClick={() => requestSort('cluster')}\n+ onClick={() => requestSort('infra')}\n >\n- Cluster{getSortDirection('cluster')}\n+ Infra{getSortDirection('infra')}\n </TableHead>\n <TableHead\n className=\"sortable whitespace-nowrap\"\n- onClick={() => 
requestSort('region')}\n+ onClick={() => requestSort('cluster')}\n >\n- Region{getSortDirection('region')}\n+ Resources{getSortDirection('cluster')}\n </TableHead>\n+\n <TableHead\n className=\"sortable whitespace-nowrap\"\n onClick={() => requestSort('recoveries')}\n@@ -520,7 +525,7 @@ export function ManagedJobsTable({\n {loading && isInitialLoad ? (\n <TableRow>\n <TableCell\n- colSpan={12}\n+ colSpan={11}\n className=\"text-center py-6 text-gray-500\"\n >\n <div className=\"flex justify-center items-center\">\n@@ -556,9 +561,25 @@ export function ManagedJobsTable({\n <TableCell>\n <StatusBadge status={item.status} />\n </TableCell>\n- <TableCell>{item.resources}</TableCell>\n- <TableCell>{item.cluster}</TableCell>\n- <TableCell>{item.region}</TableCell>\n+ <TableCell>{item.requested_resources}</TableCell>\n+ <TableCell>\n+ <NonCapitalizedTooltip\n+ content={item.full_infra || item.infra}\n+ className=\"text-sm text-muted-foreground\"\n+ >\n+ <span>{item.infra}</span>\n+ </NonCapitalizedTooltip>\n+ </TableCell>\n+ <TableCell>\n+ <NonCapitalizedTooltip\n+ content={\n+ item.resources_str_full || item.resources_str\n+ }\n+ className=\"text-sm text-muted-foreground\"\n+ >\n+ <span>{item.resources_str}</span>\n+ </NonCapitalizedTooltip>\n+ </TableCell>\n <TableCell>{item.recoveries}</TableCell>\n <TableCell>\n {item.details ? (\n@@ -583,7 +604,7 @@ export function ManagedJobsTable({\n {expandedRowId === item.id && (\n <ExpandedDetailsRow\n text={item.details}\n- colSpan={12}\n+ colSpan={11}\n innerRef={expandedRowRef}\n />\n )}\n@@ -592,7 +613,7 @@ export function ManagedJobsTable({\n </>\n ) : (\n <TableRow>\n- <TableCell colSpan={12} className=\"text-center py-6\">\n+ <TableCell colSpan={11} className=\"text-center py-6\">\n <div className=\"flex flex-col items-center space-y-4\">\n {controllerLaunching && (\n <div className=\"flex flex-col items-center space-y-2\">\ndiff --git a/sky/dashboard/src/components/utils.jsx b/sky/dashboard/src/components/utils.jsx\nindex 8044abac9df..e4fe8a2b2e2 100644\n--- a/sky/dashboard/src/components/utils.jsx\n+++ b/sky/dashboard/src/components/utils.jsx\n@@ -87,6 +87,24 @@ export const CustomTooltip = ({ children, ...props }) => {\n );\n };\n \n+export const NonCapitalizedTooltip = ({ children, ...props }) => {\n+ const content = props.content;\n+ props.content = undefined;\n+ return (\n+ <Tooltip\n+ {...DEFAULT_TOOLTIP_PROPS}\n+ {...props}\n+ content={\n+ <span className=\"left-full w-max px-2 py-1 text-sm text-gray-100 bg-gray-500 text-sm rounded\">\n+ {content}\n+ </span>\n+ }\n+ >\n+ {children}\n+ </Tooltip>\n+ );\n+};\n+\n // Format duration from seconds to a readable format\n export function formatDuration(durationInSeconds) {\n if (!durationInSeconds && durationInSeconds !== 0) return '-';\ndiff --git a/sky/dashboard/src/data/connectors/clusters.jsx b/sky/dashboard/src/data/connectors/clusters.jsx\nindex 71944182602..8fb5052be52 100644\n--- a/sky/dashboard/src/data/connectors/clusters.jsx\n+++ b/sky/dashboard/src/data/connectors/clusters.jsx\n@@ -4,6 +4,38 @@ import { useState, useEffect, useCallback } from 'react';\n import { showToast } from '@/data/connectors/toast';\n import { ENDPOINT } from '@/data/connectors/constants';\n \n+/**\n+ * Truncates a string in the middle, preserving parts from beginning and end.\n+ * @param {string} str - The string to truncate\n+ * @param {number} maxLength - Maximum length of the truncated string\n+ * @return {string} - Truncated string\n+ */\n+function truncateMiddle(str, maxLength = 15) {\n+ if (!str || 
str.length <= maxLength) return str;\n+\n+ // Reserve 3 characters for '...'\n+ if (maxLength <= 3) return '...';\n+\n+ // Calculate how many characters to keep from beginning and end\n+ const halfLength = Math.floor((maxLength - 3) / 2);\n+ const remainder = (maxLength - 3) % 2;\n+\n+ // Keep one more character at the beginning if maxLength - 3 is odd\n+ const startLength = halfLength + remainder;\n+ const endLength = halfLength;\n+\n+ // When endLength is 0, just show the start part and '...'\n+ if (endLength === 0) {\n+ return str.substring(0, startLength) + '...';\n+ }\n+\n+ return (\n+ str.substring(0, startLength) +\n+ '...' +\n+ str.substring(str.length - endLength)\n+ );\n+}\n+\n const clusterStatusMap = {\n UP: 'RUNNING',\n STOPPED: 'STOPPED',\n@@ -31,16 +63,33 @@ export async function getClusters({ clusterNames = null } = {}) {\n const data = await fetchedData.json();\n const clusters = data.return_value ? JSON.parse(data.return_value) : [];\n const clusterData = clusters.map((cluster) => {\n+ let region_or_zone = '';\n+ if (cluster.zone) {\n+ region_or_zone = cluster.zone;\n+ } else {\n+ region_or_zone = cluster.region;\n+ }\n+ // Store the full value before truncation\n+ const full_region_or_zone = region_or_zone;\n+ // Truncate region_or_zone in the middle if it's too long\n+ if (region_or_zone && region_or_zone.length > 25) {\n+ region_or_zone = truncateMiddle(region_or_zone, 25);\n+ }\n return {\n status: clusterStatusMap[cluster.status],\n cluster: cluster.name,\n user: cluster.user_name,\n- infra: cluster.cloud,\n- region: cluster.region,\n+ infra: region_or_zone\n+ ? cluster.cloud + ' (' + region_or_zone + ')'\n+ : cluster.cloud,\n+ full_infra: full_region_or_zone\n+ ? `${cluster.cloud} (${full_region_or_zone})`\n+ : cluster.cloud,\n cpus: cluster.cpus,\n mem: cluster.memory,\n gpus: cluster.accelerators,\n resources_str: cluster.resources_str,\n+ resources_str_full: cluster.resources_str_full,\n time: new Date(cluster.launched_at * 1000),\n num_nodes: cluster.nodes,\n jobs: [],\n@@ -169,7 +218,7 @@ export function useClusterDetails({ cluster, job = null }) {\n if (cluster) {\n try {\n setLoadingClusterJobData(true);\n- const data = await getClusterJobs({ clusterName: cluster, job: job });\n+ const data = await getClusterJobs({ clusterName: cluster });\n setClusterJobData(data);\n } catch (error) {\n console.error('Error fetching cluster job data:', error);\n@@ -177,7 +226,7 @@ export function useClusterDetails({ cluster, job = null }) {\n setLoadingClusterJobData(false);\n }\n }\n- }, [cluster, job]);\n+ }, [cluster]);\n \n const refreshData = useCallback(async () => {\n await Promise.all([fetchClusterData(), fetchClusterJobData()]);\ndiff --git a/sky/dashboard/src/data/connectors/jobs.jsx b/sky/dashboard/src/data/connectors/jobs.jsx\nindex 55cf10cf0f9..3f3728496fb 100644\n--- a/sky/dashboard/src/data/connectors/jobs.jsx\n+++ b/sky/dashboard/src/data/connectors/jobs.jsx\n@@ -82,6 +82,49 @@ export async function getManagedJobs({ allUsers = true } = {}) {\n let endTime = job.end_at ? 
job.end_at : Date.now() / 1000;\n const total_duration = endTime - job.submitted_at;\n \n+ // Extract cloud name if not available (backward compatibility)\n+ // TODO(zhwu): remove this after 0.12.0\n+ let cloud = job.cloud;\n+ let cluster_resources = job.cluster_resources;\n+ if (!cloud) {\n+ // Backward compatibility for old jobs controller without cloud info\n+ // Similar to the logic in sky/jobs/utils.py\n+ if (job.cluster_resources && job.cluster_resources !== '-') {\n+ try {\n+ cloud = job.cluster_resources.split('(')[0].split('x').pop().trim();\n+ cluster_resources = job.cluster_resources\n+ .replace(`${cloud}(`, '(')\n+ .replace('x ', 'x');\n+ } catch (error) {\n+ // If parsing fails, set a default value\n+ cloud = 'Unknown';\n+ }\n+ } else {\n+ cloud = 'Unknown';\n+ }\n+ }\n+\n+ let region_or_zone = '';\n+ if (job.zone) {\n+ region_or_zone = job.zone;\n+ } else {\n+ region_or_zone = job.region;\n+ }\n+\n+ const full_region_or_zone = region_or_zone;\n+ if (region_or_zone && region_or_zone.length > 15) {\n+ region_or_zone = region_or_zone.substring(0, 15) + '...';\n+ }\n+\n+ let infra = cloud + ' (' + region_or_zone + ')';\n+ if (region_or_zone === '-') {\n+ infra = cloud;\n+ }\n+ let full_infra = cloud + ' (' + full_region_or_zone + ')';\n+ if (full_region_or_zone === '-') {\n+ full_infra = cloud;\n+ }\n+\n return {\n id: job.job_id,\n task: job.task_name,\n@@ -89,9 +132,11 @@ export async function getManagedJobs({ allUsers = true } = {}) {\n job_duration: job.job_duration,\n total_duration: total_duration,\n status: job.status,\n- resources: job.resources,\n- cluster: job.cluster_resources,\n- region: job.region,\n+ requested_resources: job.resources,\n+ resources_str: cluster_resources,\n+ resources_str_full: job.cluster_resources_full || cluster_resources,\n+ infra: infra,\n+ full_infra: full_infra,\n recoveries: job.recovery_count,\n details: job.failure_reason,\n user: job.user_name,\ndiff --git a/sky/dashboard/src/pages/clusters/[cluster].js b/sky/dashboard/src/pages/clusters/[cluster].js\nindex 60549cbe30c..4d15ca6f71e 100644\n--- a/sky/dashboard/src/pages/clusters/[cluster].js\n+++ b/sky/dashboard/src/pages/clusters/[cluster].js\n@@ -147,6 +147,14 @@ function ActiveTab({ clusterData, clusterJobData, loading }) {\n </div>\n <div className=\"p-4\">\n <div className=\"grid grid-cols-2 gap-6\">\n+ <div>\n+ <div className=\"text-gray-600 font-medium text-base\">\n+ Status\n+ </div>\n+ <div className=\"text-base mt-1\">\n+ <StatusBadge status={clusterData.status} />\n+ </div>\n+ </div>\n <div>\n <div className=\"text-gray-600 font-medium text-base\">\n Cluster\n@@ -158,11 +166,9 @@ function ActiveTab({ clusterData, clusterJobData, loading }) {\n <div className=\"text-base mt-1\">{clusterData.user}</div>\n </div>\n <div>\n- <div className=\"text-gray-600 font-medium text-base\">\n- Status\n- </div>\n+ <div className=\"text-gray-600 font-medium text-base\">Infra</div>\n <div className=\"text-base mt-1\">\n- <StatusBadge status={clusterData.status} />\n+ {clusterData.full_infra || clusterData.infra || 'N/A'}\n </div>\n </div>\n <div>\n@@ -170,15 +176,19 @@ function ActiveTab({ clusterData, clusterJobData, loading }) {\n Resources\n </div>\n <div className=\"text-base mt-1\">\n- {clusterData.resources_str || 'N/A'}\n+ {clusterData.resources_str_full ||\n+ clusterData.resources_str ||\n+ 'N/A'}\n </div>\n </div>\n <div>\n <div className=\"text-gray-600 font-medium text-base\">\n- Region\n+ Started\n </div>\n <div className=\"text-base mt-1\">\n- {clusterData.region || 'N/A'}\n+ 
{clusterData.time\n+ ? new Date(clusterData.time).toLocaleString()\n+ : 'N/A'}\n </div>\n </div>\n </div>\ndiff --git a/sky/dashboard/src/pages/clusters/[cluster]/[job].js b/sky/dashboard/src/pages/clusters/[cluster]/[job].js\nindex 54ea6f31b66..1d67b8c344a 100755\n--- a/sky/dashboard/src/pages/clusters/[cluster]/[job].js\n+++ b/sky/dashboard/src/pages/clusters/[cluster]/[job].js\n@@ -228,7 +228,7 @@ export function JobDetailPage() {\n {jobData.resources && (\n <div>\n <div className=\"text-gray-600 font-medium text-base\">\n- Resources\n+ Requested Resources\n </div>\n <div className=\"text-base mt-1\">\n {jobData.resources || 'N/A'}\ndiff --git a/sky/dashboard/src/pages/jobs/[job].js b/sky/dashboard/src/pages/jobs/[job].js\nindex 4b9b0c44cb2..5a079444a4c 100755\n--- a/sky/dashboard/src/pages/jobs/[job].js\n+++ b/sky/dashboard/src/pages/jobs/[job].js\n@@ -450,12 +450,10 @@ function JobDetailsContent({\n return (\n <div className=\"grid grid-cols-2 gap-6\">\n <div>\n- <div className=\"text-gray-600 font-medium text-base\">Job ID</div>\n- <div className=\"text-base mt-1\">{jobData.id}</div>\n- </div>\n- <div>\n- <div className=\"text-gray-600 font-medium text-base\">Job Name</div>\n- <div className=\"text-base mt-1\">{jobData.name}</div>\n+ <div className=\"text-gray-600 font-medium text-base\">Job ID (Name)</div>\n+ <div className=\"text-base mt-1\">\n+ {jobData.id} {jobData.name ? `(${jobData.name})` : ''}\n+ </div>\n </div>\n <div>\n <div className=\"text-gray-600 font-medium text-base\">Status</div>\n@@ -468,12 +466,22 @@ function JobDetailsContent({\n <div className=\"text-base mt-1\">{jobData.user}</div>\n </div>\n <div>\n- <div className=\"text-gray-600 font-medium text-base\">Resources</div>\n- <div className=\"text-base mt-1\">{jobData.resources || 'N/A'}</div>\n+ <div className=\"text-gray-600 font-medium text-base\">\n+ Requested Resources\n+ </div>\n+ <div className=\"text-base mt-1\">\n+ {jobData.requested_resources || 'N/A'}\n+ </div>\n+ </div>\n+ <div>\n+ <div className=\"text-gray-600 font-medium text-base\">Infra</div>\n+ <div className=\"text-base mt-1\">{jobData.infra || '-'}</div>\n </div>\n <div>\n- <div className=\"text-gray-600 font-medium text-base\">Cluster</div>\n- <div className=\"text-base mt-1\">{jobData.cluster || '-'}</div>\n+ <div className=\"text-gray-600 font-medium text-base\">Resources</div>\n+ <div className=\"text-base mt-1\">\n+ {jobData.resources_str_full || jobData.resources_str || '-'}\n+ </div>\n </div>\n </div>\n );\ndiff --git a/sky/execution.py b/sky/execution.py\nindex 9d42ac11689..b173cc8b407 100644\n--- a/sky/execution.py\n+++ b/sky/execution.py\n@@ -465,7 +465,7 @@ def launch(\n import sky\n task = sky.Task(run='echo hello SkyPilot')\n task.set_resources(\n- sky.Resources(cloud=sky.AWS(), accelerators='V100:4'))\n+ sky.Resources(infra='aws', accelerators='V100:4'))\n sky.launch(task, cluster_name='my-cluster')\n \n \ndiff --git a/sky/jobs/server/core.py b/sky/jobs/server/core.py\nindex 09080c8a012..e64befc7488 100644\n--- a/sky/jobs/server/core.py\n+++ b/sky/jobs/server/core.py\n@@ -395,7 +395,7 @@ def queue(refresh: bool,\n if returncode != 0:\n logger.error(job_table_payload + stderr)\n raise RuntimeError('Failed to fetch managed jobs with returncode: '\n- f'{returncode}')\n+ f'{returncode}.\\n{job_table_payload + stderr}')\n \n jobs = managed_job_utils.load_managed_job_queue(job_table_payload)\n \ndiff --git a/sky/jobs/utils.py b/sky/jobs/utils.py\nindex 73d96185c9e..c0eee370881 100644\n--- a/sky/jobs/utils.py\n+++ b/sky/jobs/utils.py\n@@ 
-33,8 +33,10 @@\n from sky.skylet import log_lib\n from sky.usage import usage_lib\n from sky.utils import common_utils\n+from sky.utils import infra_utils\n from sky.utils import log_utils\n from sky.utils import message_utils\n+from sky.utils import resources_utils\n from sky.utils import rich_utils\n from sky.utils import subprocess_utils\n from sky.utils import ux_utils\n@@ -911,15 +913,23 @@ def dump_managed_job_queue() -> str:\n cluster_name = generate_managed_job_cluster_name(\n job['task_name'], job['job_id'])\n handle = global_user_state.get_handle_from_cluster_name(cluster_name)\n- if handle is not None:\n- assert isinstance(handle, backends.CloudVmRayResourceHandle)\n- job['cluster_resources'] = (\n- f'{handle.launched_nodes}x {handle.launched_resources}')\n+ if isinstance(handle, backends.CloudVmRayResourceHandle):\n+ resources_str = resources_utils.get_readable_resources_repr(\n+ handle, simplify=True)\n+ resources_str_full = resources_utils.get_readable_resources_repr(\n+ handle, simplify=False)\n+ job['cluster_resources'] = resources_str\n+ job['cluster_resources_full'] = resources_str_full\n+ job['cloud'] = str(handle.launched_resources.cloud)\n job['region'] = handle.launched_resources.region\n+ job['zone'] = handle.launched_resources.zone\n else:\n # FIXME(zongheng): display the last cached values for these.\n job['cluster_resources'] = '-'\n+ job['cluster_resources_full'] = '-'\n+ job['cloud'] = '-'\n job['region'] = '-'\n+ job['zone'] = '-'\n \n return message_utils.encode_payload(jobs)\n \n@@ -1026,7 +1036,7 @@ def get_hash(task):\n 'TASK',\n 'NAME',\n *user_cols,\n- 'RESOURCES',\n+ 'REQUESTED',\n 'SUBMITTED',\n 'TOT. DURATION',\n 'JOB DURATION',\n@@ -1035,7 +1045,7 @@ def get_hash(task):\n ]\n if show_all:\n # TODO: move SCHED. STATE to a separate flag (e.g. --debug)\n- columns += ['STARTED', 'CLUSTER', 'REGION', 'SCHED. STATE', 'DETAILS']\n+ columns += ['STARTED', 'INFRA', 'RESOURCES', 'SCHED. 
STATE', 'DETAILS']\n if tasks_have_k8s_user:\n columns.insert(0, 'USER')\n job_table = log_utils.create_table(columns)\n@@ -1174,11 +1184,32 @@ def get_user_column_values(task: Dict[str, Any]) -> List[str]:\n # more than one task, only display on the aggregated row.\n schedule_state = (task['schedule_state']\n if len(job_tasks) == 1 else '-')\n+ cloud = task.get('cloud')\n+ if cloud is None:\n+ # Backward compatibility for old jobs controller without\n+ # cloud info returned, we parse it from the cluster\n+ # resources\n+ # TODO(zhwu): remove this after 0.12.0\n+ cloud = task['cluster_resources'].split('(')[0].split(\n+ 'x')[-1]\n+ task['cluster_resources'] = task[\n+ 'cluster_resources'].replace(f'{cloud}(',\n+ '(').replace('x ', 'x')\n+ region = task['region']\n+ zone = task.get('zone')\n+ if cloud == '-':\n+ cloud = None\n+ if region == '-':\n+ region = None\n+ if zone == '-':\n+ zone = None\n+\n+ infra = infra_utils.InfraInfo(cloud, region, zone)\n values.extend([\n # STARTED\n log_utils.readable_time_duration(task['start_at']),\n+ infra.formatted_str(),\n task['cluster_resources'],\n- task['region'],\n schedule_state,\n generate_details(task['failure_reason']),\n ])\ndiff --git a/sky/optimizer.py b/sky/optimizer.py\nindex f4a9fa03553..453afb0b633 100644\n--- a/sky/optimizer.py\n+++ b/sky/optimizer.py\n@@ -167,7 +167,7 @@ def _add_dummy_source_sink_nodes(dag: 'dag_lib.Dag'):\n \n def make_dummy(name):\n dummy = task_lib.Task(name)\n- dummy.set_resources({DummyResources(DummyCloud(), None)})\n+ dummy.set_resources({DummyResources(cloud=DummyCloud())})\n dummy.set_time_estimator(lambda _: 0)\n return dummy\n \n@@ -321,10 +321,10 @@ def get_reservations_available_resources(\n estimated_runtime = 1 * 3600\n else:\n # We assume the time estimator takes in a partial resource\n- # Resources('V100')\n+ # Resources(accelerators='V100')\n # and treats their launchable versions\n- # Resources(AWS, 'p3.2xlarge'),\n- # Resources(GCP, '...', 'V100'),\n+ # Resources(infra='aws', instance_type='p3.2xlarge'),\n+ # Resources(infra='gcp', accelerators='V100'),\n # ...\n # as having the same run time.\n # FIXME(zongheng): take 'num_nodes' as an arg/into\n@@ -772,6 +772,15 @@ def print_optimized_plan(\n f'{colorama.Style.BRIGHT}Estimated total cost: '\n f'{colorama.Style.RESET_ALL}${total_cost:.1f}\\n')\n \n+ def _instance_type_str(resources: 'resources_lib.Resources') -> str:\n+ instance_type = resources.instance_type\n+ assert instance_type is not None, 'Instance type must be specified'\n+ if isinstance(resources.cloud, clouds.Kubernetes):\n+ instance_type = '-'\n+ if resources.use_spot:\n+ instance_type = ''\n+ return instance_type\n+\n def _get_resources_element_list(\n resources: 'resources_lib.Resources') -> List[str]:\n accelerators = resources.get_accelerators_str()\n@@ -794,22 +803,20 @@ def format_number(x: Optional[float]) -> str:\n vcpus = format_number(vcpus_)\n mem = format_number(mem_)\n \n- if resources.zone is None:\n- region_or_zone = resources.region\n- else:\n- region_or_zone = resources.zone\n+ # Format infra as CLOUD (REGION/ZONE)\n+ infra = resources.infra.formatted_str()\n+\n return [\n- str(cloud),\n- resources.instance_type + spot,\n+ infra,\n+ _instance_type_str(resources) + spot,\n vcpus,\n mem,\n str(accelerators),\n- str(region_or_zone),\n ]\n \n Row = collections.namedtuple('Row', [\n- 'cloud', 'instance', 'vcpus', 'mem', 'accelerators',\n- 'region_or_zone', 'cost_str', 'chosen_str'\n+ 'infra', 'instance', 'vcpus', 'mem', 'accelerators', 'cost_str',\n+ 'chosen_str'\n ])\n \n 
def _get_resources_named_tuple(resources: 'resources_lib.Resources',\n@@ -833,18 +840,15 @@ def format_number(x: Optional[float]) -> str:\n vcpus = format_number(vcpus_)\n mem = format_number(mem_)\n \n- if resources.zone is None:\n- region_or_zone = resources.region\n- else:\n- region_or_zone = resources.zone\n+ infra = resources.infra.formatted_str()\n \n chosen_str = ''\n if chosen:\n chosen_str = (colorama.Fore.GREEN + ' ' + '\\u2714' +\n colorama.Style.RESET_ALL)\n- row = Row(cloud, resources.instance_type + spot, vcpus, mem,\n- str(accelerators), str(region_or_zone), cost_str,\n- chosen_str)\n+ row = Row(infra,\n+ _instance_type_str(resources) + spot, vcpus, mem,\n+ str(accelerators), cost_str, chosen_str)\n \n return row\n \n@@ -862,10 +866,7 @@ def _get_resource_group_hash(resources: 'resources_lib.Resources'):\n return json.dumps(resource_key_dict, sort_keys=True)\n \n # Print the list of resouces that the optimizer considered.\n- resource_fields = [\n- 'CLOUD', 'INSTANCE', 'vCPUs', 'Mem(GB)', 'ACCELERATORS',\n- 'REGION/ZONE'\n- ]\n+ resource_fields = ['INFRA', 'INSTANCE', 'vCPUs', 'Mem(GB)', 'GPUS']\n if len(ordered_best_plan) > 1:\n best_plan_rows = []\n for t, r in ordered_best_plan.items():\n@@ -993,13 +994,19 @@ def _print_candidates(node_to_candidate_map: _TaskToPerCloudCandidates):\n if len(candidate_list) > 1:\n is_multi_instances = True\n instance_list = [\n- res.instance_type for res in candidate_list\n+ res.instance_type\n+ for res in candidate_list\n+ if res.instance_type is not None\n ]\n+ candidate_str = resources_utils.format_resource(\n+ candidate_list[0], simplify=True)\n+\n logger.info(\n- f'Multiple {cloud} instances satisfy '\n- f'{acc_name}:{int(acc_count)}. '\n- f'The cheapest {candidate_list[0]!r} is considered '\n- f'among:\\n{instance_list}.')\n+ f'{colorama.Style.DIM}🔍 Multiple {cloud} instances '\n+ f'satisfy {acc_name}:{int(acc_count)}. '\n+ f'The cheapest {candidate_str} is considered '\n+ f'among: {\", \".join(instance_list)}.'\n+ f'{colorama.Style.RESET_ALL}')\n if is_multi_instances:\n logger.info(\n f'To list more details, run: sky show-gpus {acc_name}\\n')\ndiff --git a/sky/resources.py b/sky/resources.py\nindex f988c000548..cfbea41199f 100644\n--- a/sky/resources.py\n+++ b/sky/resources.py\n@@ -6,6 +6,7 @@\n \n import colorama\n \n+import sky\n from sky import check as sky_check\n from sky import clouds\n from sky import exceptions\n@@ -20,6 +21,7 @@\n from sky.utils import annotations\n from sky.utils import common_utils\n from sky.utils import config_utils\n+from sky.utils import infra_utils\n from sky.utils import log_utils\n from sky.utils import registry\n from sky.utils import resources_utils\n@@ -106,6 +108,7 @@ def __init__(\n memory: Union[None, int, float, str] = None,\n accelerators: Union[None, str, Dict[str, Union[int, float]]] = None,\n accelerator_args: Optional[Dict[str, str]] = None,\n+ infra: Optional[str] = None,\n use_spot: Optional[bool] = None,\n job_recovery: Optional[Union[Dict[str, Optional[Union[str, int]]],\n str]] = None,\n@@ -134,9 +137,9 @@ def __init__(\n .. 
code-block:: python\n \n # Fully specified cloud and instance type (is_launchable() is True).\n- sky.Resources(clouds.AWS(), 'p3.2xlarge')\n- sky.Resources(clouds.GCP(), 'n1-standard-16')\n- sky.Resources(clouds.GCP(), 'n1-standard-8', 'V100')\n+ sky.Resources(infra='aws', instance_type='p3.2xlarge')\n+ sky.Resources(infra='k8s/my-cluster-ctx', accelerators='V100')\n+ sky.Resources(infra='gcp/us-central1', accelerators='V100')\n \n # Specifying required resources; the system decides the\n # cloud/instance type. The below are equivalent:\n@@ -145,8 +148,9 @@ def __init__(\n sky.Resources(accelerators={'V100': 1})\n sky.Resources(cpus='2+', memory='16+', accelerators='V100')\n \n+\n Args:\n- cloud: the cloud to use.\n+ cloud: the cloud to use. Deprecated. Use `infra` instead.\n instance_type: the instance type to use.\n cpus: the number of CPUs required for the task.\n If a str, must be a string of the form ``'2'`` or ``'2+'``, where\n@@ -160,6 +164,11 @@ def __init__(\n dict of the form ``{'V100': 2}`` or ``{'tpu-v2-8': 1}``.\n accelerator_args: accelerator-specific arguments. For example,\n ``{'tpu_vm': True, 'runtime_version': 'tpu-vm-base'}`` for TPUs.\n+ infra: a string specifying the infrastructure to use, in the format\n+ of \"cloud/region\" or \"cloud/region/zone\". For example,\n+ `aws/us-east-1` or `k8s/my-cluster-ctx`. This is an alternative to\n+ specifying cloud, region, and zone separately. If provided, it\n+ takes precedence over cloud, region, and zone parameters.\n use_spot: whether to use spot instances. If None, defaults to\n False.\n job_recovery: the job recovery strategy to use for the managed\n@@ -172,8 +181,8 @@ def __init__(\n - max_restarts_on_errors: the max number of restarts on user code\n errors.\n \n- region: the region to use.\n- zone: the zone to use.\n+ region: the region to use. Deprecated. Use `infra` instead.\n+ zone: the zone to use. Deprecated. Use `infra` instead.\n image_id: the image ID to use. If a str, must be a string\n of the image id from the cloud, such as AWS:\n ``'ami-1234567890abcdef0'``, GCP:\n@@ -218,6 +227,25 @@ def __init__(\n exceptions.NoCloudAccessError: if no public cloud is enabled.\n \"\"\"\n self._version = self._VERSION\n+\n+ if infra is not None and (cloud is not None or region is not None or\n+ zone is not None):\n+ with ux_utils.print_exception_no_traceback():\n+ raise ValueError('Cannot specify both `infra` and `cloud`, '\n+ '`region`, or `zone` parameters. '\n+ f'Got: infra={infra}, cloud={cloud}, '\n+ f'region={region}, zone={zone}')\n+\n+ # Infra is user facing, and cloud, region, zone in parameters are for\n+ # backward compatibility. 
Internally, we keep using cloud, region, zone\n+ # for simplicity.\n+ if infra is not None:\n+ infra_info = infra_utils.InfraInfo.from_str(infra)\n+ # Infra takes precedence over individually specified parameters\n+ cloud = sky.CLOUD_REGISTRY.from_str(infra_info.cloud)\n+ region = infra_info.region\n+ zone = infra_info.zone\n+\n self._cloud = cloud\n self._region: Optional[str] = region\n self._zone: Optional[str] = zone\n@@ -431,6 +459,11 @@ def repr_with_region_zone(self) -> str:\n repr_str += f'{region_str}{zone_str}'\n return repr_str\n \n+ @property\n+ def infra(self) -> infra_utils.InfraInfo:\n+ cloud = str(self.cloud) if self.cloud is not None else None\n+ return infra_utils.InfraInfo(cloud, self.region, self.zone)\n+\n @property\n def cloud(self) -> Optional[clouds.Cloud]:\n return self._cloud\n@@ -486,9 +519,9 @@ def memory(self) -> Optional[str]:\n def accelerators(self) -> Optional[Dict[str, Union[int, float]]]:\n \"\"\"Returns the accelerators field directly or by inferring.\n \n- For example, Resources(AWS, 'p3.2xlarge') has its accelerators field\n- set to None, but this function will infer {'V100': 1} from the instance\n- type.\n+ For example, Resources(infra='aws', instance_type='p3.2xlarge') has its\n+ accelerators field set to None, but this function will infer {'V100': 1}\n+ from the instance type.\n \"\"\"\n if self._accelerators is not None:\n return self._accelerators\n@@ -1450,6 +1483,7 @@ def copy(self, **override) -> 'Resources':\n ports=override.pop('ports', self.ports),\n labels=override.pop('labels', self.labels),\n autostop=override.pop('autostop', current_autostop_config),\n+ infra=override.pop('infra', None),\n _docker_login_config=override.pop('_docker_login_config',\n self._docker_login_config),\n _docker_username_for_runpod=override.pop(\n@@ -1621,9 +1655,18 @@ def _override_resources(\n @classmethod\n def _from_yaml_config_single(cls, config: Dict[str, str]) -> 'Resources':\n \n- resources_fields = {}\n+ resources_fields: Dict[str, Any] = {}\n+\n+ # Extract infra field if present\n+ infra = config.pop('infra', None)\n+ resources_fields['infra'] = infra\n+\n+ # Keep backward compatibility with cloud, region, zone\n resources_fields['cloud'] = registry.CLOUD_REGISTRY.from_str(\n config.pop('cloud', None))\n+ resources_fields['region'] = config.pop('region', None)\n+ resources_fields['zone'] = config.pop('zone', None)\n+\n resources_fields['instance_type'] = config.pop('instance_type', None)\n resources_fields['cpus'] = config.pop('cpus', None)\n resources_fields['memory'] = config.pop('memory', None)\n@@ -1641,8 +1684,6 @@ def _from_yaml_config_single(cls, config: Dict[str, str]) -> 'Resources':\n # exclusive by the schema validation.\n resources_fields['job_recovery'] = config.pop('job_recovery', None)\n resources_fields['disk_size'] = config.pop('disk_size', None)\n- resources_fields['region'] = config.pop('region', None)\n- resources_fields['zone'] = config.pop('zone', None)\n resources_fields['image_id'] = config.pop('image_id', None)\n resources_fields['disk_tier'] = config.pop('disk_tier', None)\n resources_fields['ports'] = config.pop('ports', None)\n@@ -1679,7 +1720,10 @@ def add_if_not_none(key, value):\n if value is not None and value != 'None':\n config[key] = value\n \n- add_if_not_none('cloud', str(self.cloud))\n+ # Construct infra field if cloud is set\n+ infra = self.infra.to_str()\n+ add_if_not_none('infra', infra)\n+\n add_if_not_none('instance_type', self.instance_type)\n add_if_not_none('cpus', self._cpus)\n add_if_not_none('memory', 
self.memory)\n@@ -1690,8 +1734,6 @@ def add_if_not_none(key, value):\n add_if_not_none('use_spot', self.use_spot)\n add_if_not_none('job_recovery', self.job_recovery)\n add_if_not_none('disk_size', self.disk_size)\n- add_if_not_none('region', self.region)\n- add_if_not_none('zone', self.zone)\n add_if_not_none('image_id', self.image_id)\n if self.disk_tier is not None:\n config['disk_tier'] = self.disk_tier.value\ndiff --git a/sky/serve/serve_utils.py b/sky/serve/serve_utils.py\nindex a1b2b4a2b37..d1d510ff0d7 100644\n--- a/sky/serve/serve_utils.py\n+++ b/sky/serve/serve_utils.py\n@@ -1027,11 +1027,9 @@ def _format_replica_table(replica_records: List[Dict[str, Any]],\n return 'No existing replicas.'\n \n replica_columns = [\n- 'SERVICE_NAME', 'ID', 'VERSION', 'ENDPOINT', 'LAUNCHED', 'RESOURCES',\n- 'STATUS', 'REGION'\n+ 'SERVICE_NAME', 'ID', 'VERSION', 'ENDPOINT', 'LAUNCHED', 'INFRA',\n+ 'RESOURCES', 'STATUS'\n ]\n- if show_all:\n- replica_columns.append('ZONE')\n replica_table = log_utils.create_table(replica_columns)\n \n truncate_hint = ''\n@@ -1047,21 +1045,17 @@ def _format_replica_table(replica_records: List[Dict[str, Any]],\n version = (record['version'] if 'version' in record else '-')\n replica_endpoint = endpoint if endpoint else '-'\n launched_at = log_utils.readable_time_duration(record['launched_at'])\n+ infra = '-'\n resources_str = '-'\n replica_status = record['status']\n status_str = replica_status.colored_str()\n- region = '-'\n- zone = '-'\n \n replica_handle: Optional['backends.CloudVmRayResourceHandle'] = record[\n 'handle']\n if replica_handle is not None:\n+ infra = replica_handle.launched_resources.infra.formatted_str()\n resources_str = resources_utils.get_readable_resources_repr(\n replica_handle, simplify=not show_all)\n- if replica_handle.launched_resources.region is not None:\n- region = replica_handle.launched_resources.region\n- if replica_handle.launched_resources.zone is not None:\n- zone = replica_handle.launched_resources.zone\n \n replica_values = [\n service_name,\n@@ -1069,12 +1063,10 @@ def _format_replica_table(replica_records: List[Dict[str, Any]],\n version,\n replica_endpoint,\n launched_at,\n+ infra,\n resources_str,\n status_str,\n- region,\n ]\n- if show_all:\n- replica_values.append(zone)\n replica_table.add_row(replica_values)\n \n return f'{replica_table}{truncate_hint}'\ndiff --git a/sky/utils/cli_utils/status_utils.py b/sky/utils/cli_utils/status_utils.py\nindex 6e267770b87..246d3e19785 100644\n--- a/sky/utils/cli_utils/status_utils.py\n+++ b/sky/utils/cli_utils/status_utils.py\n@@ -33,17 +33,15 @@ class StatusColumn:\n def __init__(self,\n name: str,\n calc_func: Callable,\n- trunc_length: int = 0,\n+ truncate: bool = True,\n show_by_default: bool = True):\n self.name = name\n self.calc_func = calc_func\n- self.trunc_length = trunc_length\n+ self.truncate: bool = truncate\n self.show_by_default = show_by_default\n \n def calc(self, record):\n- val = self.calc_func(record)\n- if self.trunc_length != 0:\n- val = common_utils.truncate_long_string(str(val), self.trunc_length)\n+ val = self.calc_func(record, self.truncate)\n return val\n \n \n@@ -68,19 +66,20 @@ def show_status_table(cluster_records: List[_ClusterRecord],\n StatusColumn('USER_ID', _get_user_hash, show_by_default=False))\n \n status_columns += [\n- StatusColumn('LAUNCHED', _get_launched),\n- StatusColumn('RESOURCES',\n- _get_resources,\n- trunc_length=70 if not show_all else 0),\n- StatusColumn('REGION', _get_region, show_by_default=False),\n- StatusColumn('ZONE', _get_zone, 
show_by_default=False),\n+ StatusColumn('INFRA', _get_infra, truncate=not show_all),\n+ StatusColumn('RESOURCES', _get_resources, truncate=not show_all),\n StatusColumn('STATUS', _get_status_colored),\n StatusColumn('AUTOSTOP', _get_autostop),\n- StatusColumn('HEAD_IP', _get_head_ip, show_by_default=False),\n- StatusColumn('COMMAND',\n- _get_command,\n- trunc_length=COMMAND_TRUNC_LENGTH if not show_all else 0),\n+ StatusColumn('LAUNCHED', _get_launched),\n ]\n+ if show_all:\n+ status_columns += [\n+ StatusColumn('HEAD_IP', _get_head_ip, show_by_default=False),\n+ StatusColumn('COMMAND',\n+ _get_command,\n+ truncate=not show_all,\n+ show_by_default=False),\n+ ]\n \n columns = []\n for status_column in status_columns:\n@@ -160,10 +159,10 @@ def show_cost_report_table(cluster_records: List[_ClusterCostReportRecord],\n status_columns = [\n StatusColumn('NAME', _get_name),\n StatusColumn('LAUNCHED', _get_launched),\n- StatusColumn('DURATION', _get_duration, trunc_length=20),\n+ StatusColumn('DURATION', _get_duration, truncate=False),\n StatusColumn('RESOURCES',\n _get_resources_for_cost_report,\n- trunc_length=70 if not show_all else 0),\n+ truncate=False),\n StatusColumn('STATUS',\n _get_status_for_cost_report,\n show_by_default=True),\n@@ -221,47 +220,68 @@ def show_cost_report_table(cluster_records: List[_ClusterCostReportRecord],\n # Some of these lambdas are invoked on both _ClusterRecord and\n # _ClusterCostReportRecord, which is okay as we guarantee the queried fields\n # exist in those cases.\n-_get_name = (lambda cluster_record: cluster_record['name'])\n-_get_user_hash = (lambda cluster_record: cluster_record['user_hash'])\n-_get_user_name = (lambda cluster_record: cluster_record.get('user_name', '-'))\n-_get_launched = (lambda cluster_record: log_utils.readable_time_duration(\n+_get_name = (lambda cluster_record, _: cluster_record['name'])\n+_get_user_hash = (lambda cluster_record, _: cluster_record['user_hash'])\n+_get_user_name = (\n+ lambda cluster_record, _: cluster_record.get('user_name', '-'))\n+_get_launched = (lambda cluster_record, _: log_utils.readable_time_duration(\n cluster_record['launched_at']))\n-_get_region = (\n- lambda clusters_status: clusters_status['handle'].launched_resources.region)\n-_get_command = (lambda cluster_record: cluster_record['last_use'])\n-_get_duration = (lambda cluster_record: log_utils.readable_time_duration(\n+_get_duration = (lambda cluster_record, _: log_utils.readable_time_duration(\n 0, cluster_record['duration'], absolute=True))\n \n \n-def _get_status(cluster_record: _ClusterRecord) -> status_lib.ClusterStatus:\n- return cluster_record['status']\n-\n+def _get_command(cluster_record: _ClusterRecord, truncate: bool = True) -> str:\n+ command = cluster_record.get('last_use', '-')\n+ if truncate:\n+ return common_utils.truncate_long_string(command, COMMAND_TRUNC_LENGTH)\n+ return command\n \n-def _get_status_colored(cluster_record: _ClusterRecord) -> str:\n- return _get_status(cluster_record).colored_str()\n \n+def _get_status(cluster_record: _ClusterRecord,\n+ truncate: bool = True) -> status_lib.ClusterStatus:\n+ del truncate\n+ return cluster_record['status']\n \n-def _get_resources(cluster_record: _ClusterRecord) -> str:\n- if 'resources_str' in cluster_record:\n- return cluster_record['resources_str']\n- handle = cluster_record['handle']\n- if isinstance(handle, backends.LocalDockerResourceHandle):\n- resources_str = 'docker'\n- elif isinstance(handle, backends.CloudVmRayResourceHandle):\n- resources_str = 
resources_utils.get_readable_resources_repr(handle)\n- else:\n- raise ValueError(f'Unknown handle type {type(handle)} encountered.')\n- return resources_str\n \n+def _get_status_colored(cluster_record: _ClusterRecord,\n+ truncate: bool = True) -> str:\n+ del truncate\n+ return _get_status(cluster_record).colored_str()\n \n-def _get_zone(cluster_record: _ClusterRecord) -> str:\n- zone_str = cluster_record['handle'].launched_resources.zone\n- if zone_str is None:\n- zone_str = '-'\n- return zone_str\n \n+def _get_resources(cluster_record: _ClusterRecord,\n+ truncate: bool = True) -> str:\n+ \"\"\"Get the resources information for a cluster.\n \n-def _get_autostop(cluster_record: _ClusterRecord) -> str:\n+ Returns:\n+ A string in one of the following formats:\n+ - For cloud VMs: \"Nx instance_type\" (e.g., \"1x m6i.2xlarge\")\n+ - For K8S/SSH: \"Nx (...)\"\n+ - \"-\" if no resource information is available\n+ \"\"\"\n+ handle = cluster_record['handle']\n+ if isinstance(handle, backends.CloudVmRayResourceHandle):\n+ launched_resources = handle.launched_resources\n+ if launched_resources is None:\n+ return '-'\n+\n+ # For cloud VMs, show instance type directly\n+ # For K8S/SSH, show (...) as the resource type\n+ resources_str = cluster_record.get('resources_str', None)\n+ if not truncate:\n+ resources_str_full = cluster_record.get('resources_str_full', None)\n+ if resources_str_full is not None:\n+ resources_str = resources_str_full\n+ if resources_str is None:\n+ resources_str = resources_utils.get_readable_resources_repr(\n+ handle, simplify=truncate)\n+\n+ return resources_str\n+ return '-'\n+\n+\n+def _get_autostop(cluster_record: _ClusterRecord, truncate: bool = True) -> str:\n+ del truncate\n autostop_str = ''\n separation = ''\n if cluster_record['autostop'] >= 0:\n@@ -276,7 +296,8 @@ def _get_autostop(cluster_record: _ClusterRecord) -> str:\n return autostop_str\n \n \n-def _get_head_ip(cluster_record: _ClusterRecord) -> str:\n+def _get_head_ip(cluster_record: _ClusterRecord, truncate: bool = True) -> str:\n+ del truncate # Unused\n handle = cluster_record['handle']\n if not isinstance(handle, backends.CloudVmRayResourceHandle):\n return '-'\n@@ -291,6 +312,25 @@ def _is_pending_autostop(cluster_record: _ClusterRecord) -> bool:\n cluster_record) != status_lib.ClusterStatus.STOPPED\n \n \n+def _get_infra(cluster_record: _ClusterRecord, truncate: bool = True) -> str:\n+ \"\"\"Get the infrastructure information for a cluster.\n+\n+ Returns:\n+ A string in one of the following formats:\n+ - AWS/region (e.g., \"AWS/us-east-1\")\n+ - K8S/context (e.g., \"K8S/my-ctx\")\n+ - SSH/hostname (e.g., \"SSH/my-tobi-box\")\n+ - \"-\" if no infrastructure information is available\n+ \"\"\"\n+ handle = cluster_record['handle']\n+ if isinstance(handle, backends.CloudVmRayResourceHandle):\n+ if handle.launched_resources is None:\n+ # If launched_resources is None, try to get infra from the record\n+ return cluster_record.get('infra', '-')\n+ return handle.launched_resources.infra.formatted_str(truncate)\n+ return '-'\n+\n+\n # ---- 'sky cost-report' helper functions below ----\n \n \n@@ -347,14 +387,13 @@ def show_kubernetes_cluster_status_table(\n show_all: bool) -> None:\n \"\"\"Compute cluster table values and display for Kubernetes clusters.\"\"\"\n status_columns = [\n- StatusColumn('USER', lambda c: c.user),\n- StatusColumn('NAME', lambda c: c.cluster_name),\n- StatusColumn('LAUNCHED',\n- lambda c: log_utils.readable_time_duration(c.launched_at)),\n- StatusColumn('RESOURCES',\n- lambda c: 
c.resources_str,\n- trunc_length=70 if not show_all else 0),\n- StatusColumn('STATUS', lambda c: c.status.colored_str()),\n+ StatusColumn('USER', lambda c, _: c.user),\n+ StatusColumn('NAME', lambda c, _: c.cluster_name),\n+ StatusColumn('RESOURCES', lambda c, _: c.resources_str, truncate=False),\n+ StatusColumn('STATUS', lambda c, _: c.status.colored_str()),\n+ StatusColumn(\n+ 'LAUNCHED',\n+ lambda c, _: log_utils.readable_time_duration(c.launched_at)),\n # TODO(romilb): We should consider adding POD_NAME field here when --all\n # is passed to help users fetch pod name programmatically.\n ]\ndiff --git a/sky/utils/common_utils.py b/sky/utils/common_utils.py\nindex 99f205e8e7a..00d9db4c756 100644\n--- a/sky/utils/common_utils.py\n+++ b/sky/utils/common_utils.py\n@@ -723,10 +723,43 @@ def new_func(*args, **kwargs):\n return new_func\n \n \n-def truncate_long_string(s: str, max_length: int = 35) -> str:\n- \"\"\"Truncate a string to a maximum length, preserving whole words.\"\"\"\n+def truncate_long_string(s: str,\n+ max_length: int = 35,\n+ truncate_middle: bool = False) -> str:\n+ \"\"\"Truncate a string to a maximum length.\n+\n+ Args:\n+ s: String to truncate.\n+ max_length: Maximum length of the truncated string.\n+ truncate_middle: Whether to truncate in the middle of the string.\n+ If True, the middle part of the string is replaced with '...'.\n+ If False, truncation happens at the end preserving whole words.\n+\n+ Returns:\n+ Truncated string.\n+ \"\"\"\n if len(s) <= max_length:\n return s\n+\n+ if truncate_middle:\n+ # Reserve 3 characters for '...'\n+ if max_length <= 3:\n+ return '...'\n+\n+ # Calculate how many characters to keep from beginning and end\n+ half_length = (max_length - 3) // 2\n+ remainder = (max_length - 3) % 2\n+\n+ # Keep one more character at the beginning if max_length - 3 is odd\n+ start_length = half_length + remainder\n+ end_length = half_length\n+\n+ # When end_length is 0, just show the start part and '...'\n+ if end_length == 0:\n+ return s[:start_length] + '...'\n+ return s[:start_length] + '...' + s[-end_length:]\n+\n+ # Original end-truncation logic\n splits = s.split(' ')\n if len(splits[0]) > max_length:\n return splits[0][:max_length] + '...' 
# Use '…'?\ndiff --git a/sky/utils/infra_utils.py b/sky/utils/infra_utils.py\nnew file mode 100644\nindex 00000000000..278475da51f\n--- /dev/null\n+++ b/sky/utils/infra_utils.py\n@@ -0,0 +1,175 @@\n+\"\"\"Utility functions for handling infrastructure specifications.\"\"\"\n+import dataclasses\n+from typing import Optional\n+\n+from sky.utils import common_utils\n+from sky.utils import ux_utils\n+\n+_REGION_OR_ZONE_TRUNCATION_LENGTH = 25\n+\n+\[email protected]\n+class InfraInfo:\n+ \"\"\"Infrastructure information parsed from infra string.\n+\n+ When a field is None, it means the field is not specified.\n+ \"\"\"\n+ cloud: Optional[str] = None\n+ region: Optional[str] = None\n+ zone: Optional[str] = None\n+\n+ def __init__(self,\n+ cloud: Optional[str] = None,\n+ region: Optional[str] = None,\n+ zone: Optional[str] = None):\n+ assert cloud not in ['none', 'None', 'NONE'], 'cloud must be specified'\n+ if not cloud or cloud == '*':\n+ cloud = None\n+ if not region or region == '*':\n+ region = None\n+ if not zone or zone == '*':\n+ zone = None\n+\n+ self.cloud = cloud\n+ self.region = region\n+ self.zone = zone\n+\n+ @staticmethod\n+ def from_str(infra: Optional[str]) -> 'InfraInfo':\n+ \"\"\"Parse the infra string into cloud, region, and zone components.\n+\n+ The format of the infra string is `cloud`, `cloud/region`, or\n+ `cloud/region/zone`. Examples: `aws`, `aws/us-east-1`,\n+ `aws/us-east-1/us-east-1a`. For any field, you can use `*` to indicate\n+ that any value is acceptable.\n+\n+ If `*` is used for any field, the InfraInfo will have None for that\n+ field.\n+\n+ Args:\n+ infra: A string in the format of `cloud`, `cloud/region`, or\n+ `cloud/region/zone`. Examples: `aws`, `aws/us-east-1`,\n+ `aws/us-east-1/us-east-1a`.\n+\n+ Returns:\n+ An InfraInfo object containing cloud, region, and zone information.\n+\n+ Raises:\n+ ValueError: If the infra string is malformed.\n+ \"\"\"\n+ if infra is None or not infra.strip():\n+ return InfraInfo()\n+\n+ infra = infra.strip().strip('/')\n+\n+ # Split on / to get cloud, region, zone\n+ parts = [p.strip() for p in infra.strip().split('/')]\n+\n+ if '' in parts:\n+ with ux_utils.print_exception_no_traceback():\n+ raise ValueError(\n+ f'Invalid infra format: {infra}. Format should not contain '\n+ 'empty parts (e.g., double slashes \"//\").')\n+\n+ if not parts or not parts[0]:\n+ with ux_utils.print_exception_no_traceback():\n+ raise ValueError(\n+ f'Invalid infra format: {infra}. Expected format is '\n+ '\"cloud\", \"cloud/region\", or \"cloud/region/zone\".')\n+\n+ cloud_name: Optional[str] = parts[0].lower()\n+\n+ # Handle Kubernetes contexts specially, as they can contain slashes\n+ if cloud_name in ['k8s', 'kubernetes']:\n+ # For Kubernetes, the entire string after \"k8s/\" is the\n+ # context name (region)\n+ cloud_name = 'kubernetes' # Normalize k8s to kubernetes\n+ region = '/'.join(parts[1:]) if len(parts) >= 2 else None\n+ zone = None\n+ else:\n+ # For non-Kubernetes clouds, continue with regular parsing\n+ # but be careful to only split into max 3 parts\n+ region_zone_parts = parts[1:]\n+ region = None\n+ zone = None\n+ if region_zone_parts:\n+ region = region_zone_parts[0]\n+ if len(region_zone_parts) > 1:\n+ zone = region_zone_parts[1]\n+ if len(region_zone_parts) > 2:\n+ with ux_utils.print_exception_no_traceback():\n+ raise ValueError(\n+ f'Invalid infra format: {infra}. 
Expected format '\n+ 'is \"cloud\", \"cloud/region\", or '\n+ '\"cloud/region/zone\".')\n+\n+ if cloud_name == '*':\n+ cloud_name = None\n+ if region == '*':\n+ region = None\n+ if zone == '*':\n+ zone = None\n+ return InfraInfo(cloud=cloud_name, region=region, zone=zone)\n+\n+ def to_str(self) -> Optional[str]:\n+ \"\"\"Formats cloud, region, and zone into an infra string.\n+\n+ Args:\n+ cloud: The cloud object\n+ region: The region name\n+ zone: The zone name\n+\n+ Returns:\n+ A formatted infra string, or None if cloud is None or '*'\n+ \"\"\"\n+ cloud = self.cloud\n+ region = self.region\n+ zone = self.zone\n+\n+ if cloud is None:\n+ cloud = '*'\n+ if region is None:\n+ region = '*'\n+ if zone is None:\n+ zone = '*'\n+\n+ # Build the parts list and filter out trailing wildcards\n+ parts = [cloud.lower(), region, zone]\n+ while parts and parts[-1] == '*':\n+ parts.pop()\n+\n+ if not parts:\n+ return None\n+\n+ # Join the parts with '/'\n+ return '/'.join(parts)\n+\n+ def formatted_str(self, truncate: bool = True) -> str:\n+ \"\"\"Formats cloud, region, and zone into an infra string.\n+\n+ Args:\n+ truncate: Whether to truncate the region or zone\n+\n+ Returns:\n+ A formatted infra string, or None if cloud is None or '*'\n+ \"\"\"\n+ if self.cloud is None or self.cloud == '*':\n+ return '-'\n+\n+ region_or_zone = None\n+ if self.zone is not None and self.zone != '*':\n+ region_or_zone = self.zone\n+ elif self.region is not None and self.region != '*':\n+ region_or_zone = self.region\n+\n+ if region_or_zone is not None and truncate:\n+ region_or_zone = common_utils.truncate_long_string(\n+ region_or_zone,\n+ _REGION_OR_ZONE_TRUNCATION_LENGTH,\n+ truncate_middle=True)\n+\n+ formatted_str = f'{self.cloud}'\n+ if region_or_zone is not None:\n+ formatted_str += f' ({region_or_zone})'\n+\n+ return formatted_str\ndiff --git a/sky/utils/resources_utils.py b/sky/utils/resources_utils.py\nindex 60556e95b68..6c6fb38a374 100644\n--- a/sky/utils/resources_utils.py\n+++ b/sky/utils/resources_utils.py\n@@ -4,11 +4,11 @@\n import itertools\n import json\n import math\n-import re\n import typing\n from typing import Dict, List, Optional, Set, Union\n \n from sky import skypilot_config\n+from sky.utils import common_utils\n from sky.utils import registry\n from sky.utils import ux_utils\n \n@@ -139,34 +139,54 @@ def simplify_ports(ports: List[str]) -> List[str]:\n \n def format_resource(resource: 'resources_lib.Resources',\n simplify: bool = False) -> str:\n+ resource = resource.assert_launchable()\n+ vcpu, mem = resource.cloud.get_vcpus_mem_from_instance_type(\n+ resource.instance_type)\n+\n+ components = []\n+\n+ if resource.accelerators is not None:\n+ acc, count = list(resource.accelerators.items())[0]\n+ components.append(f'gpus={acc}:{count}')\n+\n+ is_k8s = str(resource.cloud).lower() == 'kubernetes'\n+ if (resource.accelerators is None or is_k8s or not simplify):\n+ if vcpu is not None:\n+ components.append(f'cpus={int(vcpu)}')\n+ if mem is not None:\n+ components.append(f'mem={int(mem)}')\n+\n+ instance_type = resource.instance_type\n if simplify:\n- resource = resource.assert_launchable()\n- cloud = resource.cloud\n- if resource.accelerators is None:\n- vcpu, _ = cloud.get_vcpus_mem_from_instance_type(\n- resource.instance_type)\n- assert vcpu is not None, 'vCPU must be specified'\n- hardware = f'vCPU={int(vcpu)}'\n- else:\n- hardware = f'{resource.accelerators}'\n- spot = '[Spot]' if resource.use_spot else ''\n- return f'{cloud}({spot}{hardware})'\n+ instance_type = 
common_utils.truncate_long_string(instance_type, 15)\n+ if not is_k8s:\n+ components.append(f'type={instance_type}')\n+ if simplify:\n+ components.append('...')\n else:\n- # accelerator_args is way too long.\n- # Convert from:\n- # GCP(n1-highmem-8, {'tpu-v2-8': 1}, accelerator_args={'runtime_version': '2.12.0'} # pylint: disable=line-too-long\n- # to:\n- # GCP(n1-highmem-8, {'tpu-v2-8': 1}...)\n- pattern = ', accelerator_args={.*}'\n- launched_resource_str = re.sub(pattern, '...', str(resource))\n- return launched_resource_str\n+ image_id = resource.image_id\n+ if image_id is not None:\n+ if None in image_id:\n+ components.append(f'image_id={image_id[None]}')\n+ else:\n+ components.append(f'image_id={image_id}')\n+ components.append(f'disk={resource.disk_size}')\n+ disk_tier = resource.disk_tier\n+ if disk_tier is not None:\n+ components.append(f'disk_tier={disk_tier.value}')\n+ ports = resource.ports\n+ if ports is not None:\n+ components.append(f'ports={ports}')\n+\n+ spot = '[spot]' if resource.use_spot else ''\n+ return f'{spot}({\"\" if not components else \", \".join(components)})'\n \n \n def get_readable_resources_repr(handle: 'backends.CloudVmRayResourceHandle',\n simplify: bool = False) -> str:\n if (handle.launched_nodes is not None and\n handle.launched_resources is not None):\n- return (f'{handle.launched_nodes}x '\n+ return (f'{handle.launched_nodes}x'\n f'{format_resource(handle.launched_resources, simplify)}')\n return _DEFAULT_MESSAGE_HANDLE_INITIALIZING\n \ndiff --git a/sky/utils/schemas.py b/sky/utils/schemas.py\nindex 479c23cb7e1..7b263460eeb 100644\n--- a/sky/utils/schemas.py\n+++ b/sky/utils/schemas.py\n@@ -85,6 +85,19 @@ def _get_single_resources_schema():\n 'zone': {\n 'type': 'string',\n },\n+ 'infra': {\n+ 'type': 'string',\n+ 'description':\n+ ('Infrastructure specification in format: '\n+ 'cloud[/region[/zone]]. Use \"*\" as a wildcard.'),\n+ # Create a pattern validator that uses a big regex to match all\n+ # valid formats. 
This allows us to maintain JSON Schema\n+ # validation while supporting all formats\n+ 'pattern':\n+ ('^(?:(?i:(' + '|'.join(list(service_catalog.ALL_CLOUDS)) +\n+ '))(?:/[^/]+(?:/[^/]+)?)?|\\\\*(?:/[^/]+(?:/[^/]+)?|/\\\\*'\n+ '(?:/[^/]+)?)?|(?i:k8s|kubernetes)/.+)$')\n+ },\n 'cpus': {\n 'anyOf': [{\n 'type': 'string',\ndiff --git a/tests/common_test_fixtures.py b/tests/common_test_fixtures.py\nindex 10de8e89766..6ad96fb2531 100644\n--- a/tests/common_test_fixtures.py\n+++ b/tests/common_test_fixtures.py\n@@ -133,6 +133,59 @@ def mock_get(url, *args, **kwargs):\n monkeypatch.setattr(requests, \"get\", mock_get)\n \n \n+# Define helper functions at module level for pickleability\n+def get_cached_enabled_clouds_mock(enabled_clouds, *_, **__):\n+ return enabled_clouds\n+\n+\n+def dummy_function(*_, **__):\n+ return None\n+\n+\n+def get_az_mappings(*_, **__):\n+ return pd.read_csv('tests/default_aws_az_mappings.csv')\n+\n+\n+def list_empty_reservations(*_, **__):\n+ return []\n+\n+\n+def get_kubernetes_label_formatter(*_, **__):\n+ return [kubernetes_utils.SkyPilotLabelFormatter, {}]\n+\n+\n+def detect_accelerator_resource_mock(*_, **__):\n+ return [True, []]\n+\n+\n+def check_instance_fits_mock(*_, **__):\n+ return [True, '']\n+\n+\n+def get_spot_label_mock(*_, **__):\n+ return [None, None]\n+\n+\n+def is_kubeconfig_exec_auth_mock(*_, **__):\n+ return [False, None]\n+\n+\n+def regions_with_offering_mock(*_, **__):\n+ return [sky.clouds.Region('my-k8s-cluster-context')]\n+\n+\n+def check_quota_available_mock(*_, **__):\n+ return True\n+\n+\n+def mock_redirect_output(*_, **__):\n+ return (None, None)\n+\n+\n+def mock_restore_output(*_, **__):\n+ return None\n+\n+\n @pytest.fixture\n def enable_all_clouds(monkeypatch, request, mock_client_requests):\n \"\"\"Create mock context managers for cloud configurations.\"\"\"\n@@ -143,40 +196,43 @@ def enable_all_clouds(monkeypatch, request, mock_client_requests):\n config_file = tempfile.NamedTemporaryFile(prefix='tmp_config_default',\n delete=False).name\n \n+ # Use a function that takes enabled_clouds as an argument\n+ def get_clouds_factory(*args, **kwargs):\n+ return get_cached_enabled_clouds_mock(enabled_clouds, *args, **kwargs)\n+\n # Mock all the functions\n monkeypatch.setattr('sky.check.get_cached_enabled_clouds_or_refresh',\n- lambda *_, **__: enabled_clouds)\n- monkeypatch.setattr('sky.check.check_capability', lambda *_, **__: None)\n+ get_clouds_factory)\n+ monkeypatch.setattr('sky.check.check_capability', dummy_function)\n monkeypatch.setattr(\n 'sky.clouds.service_catalog.aws_catalog._get_az_mappings',\n- lambda *_, **__: pd.read_csv('tests/default_aws_az_mappings.csv'))\n+ get_az_mappings)\n monkeypatch.setattr('sky.backends.backend_utils.check_owner_identity',\n- lambda *_, **__: None)\n+ dummy_function)\n monkeypatch.setattr(\n 'sky.clouds.utils.gcp_utils.list_reservations_for_instance_type_in_zone',\n- lambda *_, **__: [])\n+ list_empty_reservations)\n \n # Kubernetes mocks\n- monkeypatch.setattr('sky.adaptors.kubernetes._load_config',\n- lambda *_, **__: None)\n+ monkeypatch.setattr('sky.adaptors.kubernetes._load_config', dummy_function)\n monkeypatch.setattr(\n 'sky.provision.kubernetes.utils.detect_gpu_label_formatter',\n- lambda *_, **__: [kubernetes_utils.SkyPilotLabelFormatter, {}])\n+ get_kubernetes_label_formatter)\n monkeypatch.setattr(\n 'sky.provision.kubernetes.utils.detect_accelerator_resource',\n- lambda *_, **__: [True, []])\n+ detect_accelerator_resource_mock)\n 
monkeypatch.setattr('sky.provision.kubernetes.utils.check_instance_fits',\n- lambda *_, **__: [True, ''])\n+ check_instance_fits_mock)\n monkeypatch.setattr('sky.provision.kubernetes.utils.get_spot_label',\n- lambda *_, **__: [None, None])\n+ get_spot_label_mock)\n monkeypatch.setattr('sky.clouds.kubernetes.kubernetes_utils.get_spot_label',\n- lambda *_, **__: [None, None])\n+ get_spot_label_mock)\n monkeypatch.setattr(\n 'sky.provision.kubernetes.utils.is_kubeconfig_exec_auth',\n- lambda *_, **__: [False, None])\n+ is_kubeconfig_exec_auth_mock)\n monkeypatch.setattr(\n 'sky.clouds.kubernetes.Kubernetes.regions_with_offering',\n- lambda *_, **__: [sky.clouds.Region('my-k8s-cluster-context')])\n+ regions_with_offering_mock)\n \n # VSphere catalog mock\n monkeypatch.setattr(vsphere_catalog, '_LOCAL_CATALOG',\n@@ -186,7 +242,7 @@ def enable_all_clouds(monkeypatch, request, mock_client_requests):\n for cloud in enabled_clouds:\n if hasattr(cloud, 'check_quota_available'):\n monkeypatch.setattr(cloud, 'check_quota_available',\n- lambda *_, **__: True)\n+ check_quota_available_mock)\n \n # Environment variables\n monkeypatch.setattr(\n@@ -326,9 +382,9 @@ def mock_get_queue(schedule_type):\n @pytest.fixture\n def mock_redirect_log_file(monkeypatch):\n monkeypatch.setattr('sky.server.requests.executor._redirect_output',\n- lambda *_, **__: (None, None))\n+ mock_redirect_output)\n monkeypatch.setattr('sky.server.requests.executor._restore_output',\n- lambda *_, **__: None)\n+ mock_restore_output)\n \n \n @pytest.fixture\ndiff --git a/tests/load_tests/test_distribute_load_on_server.py b/tests/load_tests/test_distribute_load_on_server.py\nindex 60c2dc26ed5..e2511b4e67c 100644\n--- a/tests/load_tests/test_distribute_load_on_server.py\n+++ b/tests/load_tests/test_distribute_load_on_server.py\n@@ -79,9 +79,7 @@ def stream_log(req_id):\n task = sky.Task(setup=setup, run=run)\n task.set_file_mounts(file_mounts)\n task.set_resources(\n- sky.Resources(clouds.Kubernetes(),\n- cpus=args.cpus,\n- memory=args.memory))\n+ sky.Resources(infra='k8s', cpus=args.cpus, memory=args.memory))\n # Use launch instead of jobs launch for predictable client parallelism\n resps.append(sky.launch(task, f'benchmark-{i}'))\n try:\ndiff --git a/tests/skyserve/auto_restart.yaml b/tests/skyserve/auto_restart.yaml\nindex 5fd26ea8acd..6369034f08e 100644\n--- a/tests/skyserve/auto_restart.yaml\n+++ b/tests/skyserve/auto_restart.yaml\n@@ -7,7 +7,7 @@ service:\n \n resources:\n ports: 8080\n- cloud: gcp\n+ infra: gcp\n \n workdir: examples/serve/http_server\n \ndiff --git a/tests/skyserve/cancel/cancel.yaml b/tests/skyserve/cancel/cancel.yaml\nindex 2683e7dbb7b..883814fb03d 100644\n--- a/tests/skyserve/cancel/cancel.yaml\n+++ b/tests/skyserve/cancel/cancel.yaml\n@@ -8,7 +8,7 @@ service:\n \n resources:\n ports: 9000\n- cloud: gcp\n+ infra: gcp\n \n workdir: examples/serve/misc/cancel\n \ndiff --git a/tests/skyserve/high_availability/config.yaml b/tests/skyserve/high_availability/config.yaml\nindex 836894e2067..49100f4eff4 100644\n--- a/tests/skyserve/high_availability/config.yaml\n+++ b/tests/skyserve/high_availability/config.yaml\n@@ -1,6 +1,6 @@\n serve:\n controller:\n resources:\n- cloud: kubernetes\n+ infra: kubernetes\n cpus: 2\n high_availability: true\ndiff --git a/tests/skyserve/high_availability/service.yaml b/tests/skyserve/high_availability/service.yaml\nindex a33761535b6..b9b875fd830 100644\n--- a/tests/skyserve/high_availability/service.yaml\n+++ b/tests/skyserve/high_availability/service.yaml\n@@ -7,7 +7,7 @@ service:\n 
\n resources:\n ports: 8080\n- cloud: gcp\n+ infra: gcp\n cpus: 2+\n \n workdir: examples/serve/http_server\ndiff --git a/tests/skyserve/http/aws.yaml b/tests/skyserve/http/aws.yaml\nindex edb562a5273..73b1da6bba9 100644\n--- a/tests/skyserve/http/aws.yaml\n+++ b/tests/skyserve/http/aws.yaml\n@@ -6,7 +6,7 @@ service:\n \n resources:\n ports: 8080\n- cloud: aws\n+ infra: aws\n \n workdir: examples/serve/http_server\n \ndiff --git a/tests/skyserve/http/azure.yaml b/tests/skyserve/http/azure.yaml\nindex 2f111a7d610..b0e869e9b13 100644\n--- a/tests/skyserve/http/azure.yaml\n+++ b/tests/skyserve/http/azure.yaml\n@@ -6,7 +6,7 @@ service:\n \n resources:\n ports: 8081\n- cloud: azure\n+ infra: azure\n \n workdir: examples/serve/http_server\n \ndiff --git a/tests/skyserve/http/gcp.yaml b/tests/skyserve/http/gcp.yaml\nindex b61f0c29fe3..81c2e24eaf4 100644\n--- a/tests/skyserve/http/gcp.yaml\n+++ b/tests/skyserve/http/gcp.yaml\n@@ -6,7 +6,7 @@ service:\n \n resources:\n ports: 8080\n- cloud: gcp\n+ infra: gcp\n \n workdir: examples/serve/http_server\n \ndiff --git a/tests/skyserve/http/kubernetes.yaml b/tests/skyserve/http/kubernetes.yaml\nindex 987304bb2d7..64a9033ead2 100644\n--- a/tests/skyserve/http/kubernetes.yaml\n+++ b/tests/skyserve/http/kubernetes.yaml\n@@ -6,7 +6,7 @@ service:\n \n resources:\n ports: 8080\n- cloud: kubernetes\n+ infra: kubernetes\n \n workdir: examples/serve/http_server\n \ndiff --git a/tests/skyserve/http/oci.yaml b/tests/skyserve/http/oci.yaml\nindex d7d98c18ab4..c9451634438 100644\n--- a/tests/skyserve/http/oci.yaml\n+++ b/tests/skyserve/http/oci.yaml\n@@ -3,8 +3,8 @@ service:\n replicas: 2\n \n resources:\n- cloud: oci\n+ infra: oci\n ports: 8080\n cpus: 2+\n \n-run: python -m http.server 8080\n\\ No newline at end of file\n+run: python -m http.server 8080\ndiff --git a/tests/skyserve/llm/service.yaml b/tests/skyserve/llm/service.yaml\nindex dde5c9313b0..a848889ea9d 100644\n--- a/tests/skyserve/llm/service.yaml\n+++ b/tests/skyserve/llm/service.yaml\n@@ -15,7 +15,7 @@ envs:\n \n resources:\n ports: 8087\n- cloud: gcp\n+ infra: gcp\n accelerators: T4\n cpus: 7+\n memory: 20+\ndiff --git a/tests/skyserve/spot/dynamic_ondemand_fallback.yaml b/tests/skyserve/spot/dynamic_ondemand_fallback.yaml\nindex 2e8d692ecbd..00cb905eaa6 100644\n--- a/tests/skyserve/spot/dynamic_ondemand_fallback.yaml\n+++ b/tests/skyserve/spot/dynamic_ondemand_fallback.yaml\n@@ -11,9 +11,8 @@ service:\n \n resources:\n ports: 8080\n- cloud: gcp\n+ infra: gcp/*/us-central1-a\n cpus: 2+\n- zone: us-central1-a\n use_spot: true\n \n workdir: examples/serve/http_server\ndiff --git a/tests/skyserve/spot/recovery.yaml b/tests/skyserve/spot/recovery.yaml\nindex 81cae7e1fc7..5efc467c6d6 100644\n--- a/tests/skyserve/spot/recovery.yaml\n+++ b/tests/skyserve/spot/recovery.yaml\n@@ -7,8 +7,7 @@ service:\n \n resources:\n ports: 8080\n- cloud: gcp\n- zone: us-central1-a\n+ infra: gcp/*/us-central1-a\n use_spot: true\n \n workdir: examples/serve/http_server\ndiff --git a/tests/skyserve/spot/spot_hedge.yaml b/tests/skyserve/spot/spot_hedge.yaml\nindex 88bbfeda052..f9dcb5e16c7 100644\n--- a/tests/skyserve/spot/spot_hedge.yaml\n+++ b/tests/skyserve/spot/spot_hedge.yaml\n@@ -21,14 +21,12 @@ envs:\n HF_TOKEN: # TODO: Fill with your own huggingface token, or use --env to pass.\n \n resources:\n- cloud: aws\n+ infra: aws\n any_of:\n # Enable all region in AWS.\n- - cloud: aws\n+ - infra: aws\n # Enable one in GCP.\n- - cloud: gcp\n- region: asia-northeast3\n- zone: asia-northeast3-a\n+ - infra: gcp/*/asia-northeast3-a\n 
use_spot: true\n accelerators: L4\n ports: 9000 # Expose to internet traffic.\ndiff --git a/tests/skyserve/spot/spot_hedge_T4.yaml b/tests/skyserve/spot/spot_hedge_T4.yaml\nindex af949ee8904..641fd245428 100644\n--- a/tests/skyserve/spot/spot_hedge_T4.yaml\n+++ b/tests/skyserve/spot/spot_hedge_T4.yaml\n@@ -23,13 +23,11 @@ envs:\n resources:\n any_of:\n # Enable all region in AWS.\n- - cloud: aws\n+ - infra: aws\n # region: us-east-1\n # zone: us-east-1f\n # Enable one zone in GCP.\n- - cloud: gcp\n- region: europe-west2\n- zone: europe-west2-a\n+ - infra: gcp/*/europe-west2-a\n use_spot: true\n accelerators: T4\n ports: 9000 # Expose to internet traffic.\ndiff --git a/tests/skyserve/update/new.yaml b/tests/skyserve/update/new.yaml\nindex 4317af1b146..15982656d0e 100644\n--- a/tests/skyserve/update/new.yaml\n+++ b/tests/skyserve/update/new.yaml\n@@ -7,7 +7,7 @@ service:\n \n resources:\n ports: 8081\n- cloud: gcp\n+ infra: gcp\n \n workdir: tests/skyserve/update\n \ndiff --git a/tests/skyserve/update/num_min_one.yaml b/tests/skyserve/update/num_min_one.yaml\nindex e168af69af3..9dd84f38091 100644\n--- a/tests/skyserve/update/num_min_one.yaml\n+++ b/tests/skyserve/update/num_min_one.yaml\n@@ -7,7 +7,7 @@ service:\n \n resources:\n ports: 8080\n- cloud: gcp\n+ infra: gcp\n \n workdir: examples/serve/http_server\n \ndiff --git a/tests/skyserve/update/num_min_two.yaml b/tests/skyserve/update/num_min_two.yaml\nindex d4f26fdee8c..457ddd5849e 100644\n--- a/tests/skyserve/update/num_min_two.yaml\n+++ b/tests/skyserve/update/num_min_two.yaml\n@@ -7,7 +7,7 @@ service:\n \n resources:\n ports: 8080\n- cloud: gcp\n+ infra: gcp\n \n workdir: examples/serve/http_server\n \ndiff --git a/tests/skyserve/update/old.yaml b/tests/skyserve/update/old.yaml\nindex 38ef1cdcb60..666f8ff231a 100644\n--- a/tests/skyserve/update/old.yaml\n+++ b/tests/skyserve/update/old.yaml\n@@ -7,7 +7,7 @@ service:\n \n resources:\n ports: 8080\n- cloud: gcp\n+ infra: gcp\n \n workdir: tests/skyserve/update\n \ndiff --git a/tests/smoke_tests/smoke_tests_utils.py b/tests/smoke_tests/smoke_tests_utils.py\nindex 4396a73b4a9..d79026b6d14 100644\n--- a/tests/smoke_tests/smoke_tests_utils.py\n+++ b/tests/smoke_tests/smoke_tests_utils.py\n@@ -37,10 +37,10 @@\n # different job id.\n test_id = str(uuid.uuid4())[-2:]\n \n-LAMBDA_TYPE = '--cloud lambda --gpus A10'\n-FLUIDSTACK_TYPE = '--cloud fluidstack --gpus RTXA4000'\n+LAMBDA_TYPE = '--infra lambda --gpus A10'\n+FLUIDSTACK_TYPE = '--infra fluidstack --gpus RTXA4000'\n \n-SCP_TYPE = '--cloud scp'\n+SCP_TYPE = '--infra scp'\n SCP_GPU_V100 = '--gpus V100-32GB'\n \n STORAGE_SETUP_COMMANDS = [\n@@ -490,7 +490,7 @@ def get_aws_region_for_quota_failover() -> Optional[str]:\n use_spot=True,\n region=None,\n zone=None)\n- original_resources = sky.Resources(cloud=sky.AWS(),\n+ original_resources = sky.Resources(infra='aws',\n instance_type='p3.16xlarge',\n use_spot=True)\n \n@@ -517,7 +517,7 @@ def get_gcp_region_for_quota_failover() -> Optional[str]:\n region=None,\n zone=None)\n \n- original_resources = sky.Resources(cloud=sky.GCP(),\n+ original_resources = sky.Resources(infra='gcp',\n instance_type='a2-ultragpu-1g',\n accelerators={'A100-80GB': 1},\n use_spot=True)\n@@ -611,7 +611,7 @@ def launch_cluster_for_cloud_cmd(cloud: str, test_cluster_name: str) -> str:\n return 'true'\n else:\n return (\n- f'sky launch -y -c {cluster_name} --cloud {cloud} {LOW_RESOURCE_ARG} --async'\n+ f'sky launch -y -c {cluster_name} --infra {cloud} {LOW_RESOURCE_ARG} --async'\n )\n \n \ndiff --git 
a/tests/smoke_tests/test_basic.py b/tests/smoke_tests/test_basic.py\nindex 21ed1d37940..7cb87c5582a 100644\n--- a/tests/smoke_tests/test_basic.py\n+++ b/tests/smoke_tests/test_basic.py\n@@ -55,12 +55,12 @@ def test_minimal(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'minimal',\n [\n- f's=$(SKYPILOT_DEBUG=0 sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml) && {smoke_tests_utils.VALIDATE_LAUNCH_OUTPUT}',\n+ f's=$(SKYPILOT_DEBUG=0 sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml) && {smoke_tests_utils.VALIDATE_LAUNCH_OUTPUT}',\n # Output validation done.\n f'sky logs {name} 1 --status',\n f'sky logs {name} --status | grep \"Job 1: SUCCEEDED\"', # Equivalent.\n # Test launch output again on existing cluster\n- f's=$(SKYPILOT_DEBUG=0 sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml) && {smoke_tests_utils.VALIDATE_LAUNCH_OUTPUT}',\n+ f's=$(SKYPILOT_DEBUG=0 sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml) && {smoke_tests_utils.VALIDATE_LAUNCH_OUTPUT}',\n f'sky logs {name} 2 --status',\n f'sky logs {name} --status | grep \"Job 2: SUCCEEDED\"', # Equivalent.\n # Check the logs downloading\n@@ -103,7 +103,7 @@ def test_launch_fast(generic_cloud: str):\n 'test_launch_fast',\n [\n # First launch to create the cluster\n- f's=$(SKYPILOT_DEBUG=0 sky launch -y -c {name} --cloud {generic_cloud} --fast {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml) && {smoke_tests_utils.VALIDATE_LAUNCH_OUTPUT}',\n+ f's=$(SKYPILOT_DEBUG=0 sky launch -y -c {name} --infra {generic_cloud} --fast {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml) && {smoke_tests_utils.VALIDATE_LAUNCH_OUTPUT}',\n f'sky logs {name} 1 --status',\n \n # Second launch to test fast launch - should not reprovision\n@@ -138,7 +138,7 @@ def test_launch_fast_with_autostop(generic_cloud: str):\n 'test_launch_fast_with_autostop',\n [\n # First launch to create the cluster with a short autostop\n- f's=$(SKYPILOT_DEBUG=0 sky launch -y -c {name} --cloud {generic_cloud} --fast -i 1 {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml) && {smoke_tests_utils.VALIDATE_LAUNCH_OUTPUT}',\n+ f's=$(SKYPILOT_DEBUG=0 sky launch -y -c {name} --infra {generic_cloud} --fast -i 1 {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml) && {smoke_tests_utils.VALIDATE_LAUNCH_OUTPUT}',\n f'sky logs {name} 1 --status',\n f'sky status -r {name} | grep UP',\n \n@@ -172,7 +172,7 @@ def test_launch_fast_with_cluster_changes(generic_cloud: str, tmp_path):\n 'test_launch_fast_with_cluster_changes',\n [\n # Initial launch\n- f's=$(SKYPILOT_DEBUG=0 sky launch -y -c {name} --cloud {generic_cloud} --fast {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml) && {smoke_tests_utils.VALIDATE_LAUNCH_OUTPUT}',\n+ f's=$(SKYPILOT_DEBUG=0 sky launch -y -c {name} --infra {generic_cloud} --fast {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml) && {smoke_tests_utils.VALIDATE_LAUNCH_OUTPUT}',\n f'sky logs {name} 1 --status',\n \n # Launch again - setup and provisioning should be skipped\n@@ -209,14 +209,14 @@ def test_stale_job(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'stale_job',\n [\n- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} \"echo hi\"',\n+ f'sky launch -y -c {name} 
--infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} \"echo hi\"',\n f'sky exec {name} -d \"echo start; sleep 10000\"',\n- f'sky stop {name} -y',\n+ f'sky stop -y {name}',\n smoke_tests_utils.get_cmd_wait_until_cluster_status_contains(\n cluster_name=name,\n cluster_status=[sky.ClusterStatus.STOPPED],\n timeout=100),\n- f'sky start {name} -y',\n+ f'sky start -y {name}',\n f'sky logs {name} 1 --status',\n f's=$(sky queue {name}); echo \"$s\"; echo; echo; echo \"$s\" | grep FAILED_DRIVER',\n ],\n@@ -236,7 +236,7 @@ def test_aws_stale_job_manual_restart():\n 'aws_stale_job_manual_restart',\n [\n smoke_tests_utils.launch_cluster_for_cloud_cmd('aws', name),\n- f'sky launch -y -c {name} --cloud aws --region {region} {smoke_tests_utils.LOW_RESOURCE_ARG} \"echo hi\"',\n+ f'sky launch -y -c {name} --infra aws/us-east-2 {smoke_tests_utils.LOW_RESOURCE_ARG} \"echo hi\"',\n f'sky exec {name} -d \"echo start; sleep 10000\"',\n # Stop the cluster manually.\n smoke_tests_utils.run_cloud_cmd_on_cluster(\n@@ -283,7 +283,7 @@ def test_gcp_stale_job_manual_restart():\n 'gcp_stale_job_manual_restart',\n [\n smoke_tests_utils.launch_cluster_for_cloud_cmd('gcp', name),\n- f'sky launch -y -c {name} --cloud gcp --zone {zone} {smoke_tests_utils.LOW_RESOURCE_ARG} \"echo hi\"',\n+ f'sky launch -y -c {name} --infra gcp/*/us-central1-a {smoke_tests_utils.LOW_RESOURCE_ARG} \"echo hi\"',\n f'sky exec {name} -d \"echo start; sleep 10000\"',\n # Stop the cluster manually.\n smoke_tests_utils.run_cloud_cmd_on_cluster(name, cmd=stop_cmd),\n@@ -313,7 +313,7 @@ def test_env_check(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'env_check',\n [\n- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} examples/env_check.yaml',\n+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} examples/env_check.yaml',\n f'sky logs {name} 1 --status', # Ensure the job succeeded.\n # Test with only setup.\n f'sky launch -y -c {name} tests/test_yamls/test_only_setup.yaml',\n@@ -337,7 +337,7 @@ def test_cli_logs(generic_cloud: str):\n num_nodes = 1\n timestamp = time.time()\n test = smoke_tests_utils.Test('cli_logs', [\n- f'sky launch -y -c {name} --cloud {generic_cloud} --num-nodes {num_nodes} {smoke_tests_utils.LOW_RESOURCE_ARG} \"echo {timestamp} 1\"',\n+ f'sky launch -y -c {name} --infra {generic_cloud} --num-nodes {num_nodes} {smoke_tests_utils.LOW_RESOURCE_ARG} \"echo {timestamp} 1\"',\n f'sky exec {name} \"echo {timestamp} 2\"',\n f'sky exec {name} \"echo {timestamp} 3\"',\n f'sky exec {name} \"echo {timestamp} 4\"',\n@@ -377,21 +377,23 @@ def test_scp_logs():\n # These tests are for testing the return value of the APIs not fully used in CLI.\n def test_core_api_sky_launch_exec(generic_cloud: str):\n name = smoke_tests_utils.get_cluster_name()\n- cloud = sky.CLOUD_REGISTRY.from_str(generic_cloud)\n task = sky.Task(run=\"whoami\")\n task.set_resources(\n- sky.Resources(cloud=cloud, **smoke_tests_utils.LOW_RESOURCE_PARAM))\n+ sky.Resources(infra=generic_cloud,\n+ **smoke_tests_utils.LOW_RESOURCE_PARAM))\n try:\n job_id, handle = sky.get(sky.launch(task, cluster_name=name))\n assert job_id == 1\n assert handle is not None\n assert handle.cluster_name == name\n- assert handle.launched_resources.cloud.is_same_cloud(cloud)\n+ assert str(\n+ handle.launched_resources.cloud).lower() == generic_cloud.lower()\n job_id_exec, handle_exec = sky.get(sky.exec(task, cluster_name=name))\n assert job_id_exec == 2\n assert handle_exec is not None\n assert 
handle_exec.cluster_name == name\n- assert handle_exec.launched_resources.cloud.is_same_cloud(cloud)\n+ assert str(handle_exec.launched_resources.cloud).lower(\n+ ) == generic_cloud.lower()\n # For dummy task (i.e. task.run is None), the job won't be submitted.\n dummy_task = sky.Task()\n job_id_dummy, _ = sky.get(sky.exec(dummy_task, cluster_name=name))\n@@ -416,10 +418,10 @@ def test_core_api_sky_launch_exec(generic_cloud: str):\n @pytest.mark.no_kubernetes\n def test_core_api_sky_launch_fast(generic_cloud: str):\n name = smoke_tests_utils.get_cluster_name()\n- cloud = sky.CLOUD_REGISTRY.from_str(generic_cloud)\n try:\n task = sky.Task(run=\"whoami\").set_resources(\n- sky.Resources(cloud=cloud, **smoke_tests_utils.LOW_RESOURCE_PARAM))\n+ sky.Resources(infra=generic_cloud,\n+ **smoke_tests_utils.LOW_RESOURCE_PARAM))\n sky.launch(task,\n cluster_name=name,\n idle_minutes_to_autostop=1,\n@@ -444,9 +446,9 @@ def test_jobs_launch_and_logs(generic_cloud: str):\n smoke_tests_utils.LOW_CONTROLLER_RESOURCE_OVERRIDE_CONFIG):\n name = smoke_tests_utils.get_cluster_name()\n task = sky.Task(run=\"echo start job; sleep 30; echo end job\")\n- cloud = sky.CLOUD_REGISTRY.from_str(generic_cloud)\n task.set_resources(\n- sky.Resources(cloud=cloud, **smoke_tests_utils.LOW_RESOURCE_PARAM))\n+ sky.Resources(infra=generic_cloud,\n+ **smoke_tests_utils.LOW_RESOURCE_PARAM))\n job_id, handle = sky.stream_and_get(sky.jobs.launch(task, name=name))\n assert handle is not None\n # Check the job status from the dashboard\n@@ -558,7 +560,7 @@ def test_multiple_accelerators_ordered_with_default():\n [\n f'sky launch -y -c {name} tests/test_yamls/test_multiple_accelerators_ordered_with_default.yaml | grep \"Using user-specified accelerators list\"',\n f'sky logs {name} 1 --status', # Ensure the job succeeded.\n- f'sky status {name} | grep Spot',\n+ f'sky status {name} | grep spot',\n ],\n f'sky down -y {name}',\n )\n@@ -593,7 +595,7 @@ def test_multiple_accelerators_unordered_with_default():\n [\n f'sky launch -y -c {name} tests/test_yamls/test_multiple_accelerators_unordered_with_default.yaml',\n f'sky logs {name} 1 --status', # Ensure the job succeeded.\n- f'sky status {name} | grep Spot',\n+ f'sky status {name} | grep spot',\n ],\n f'sky down -y {name}',\n )\n@@ -627,7 +629,7 @@ def test_sky_bench(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'sky-bench',\n [\n- f'sky bench launch -y -b {name} --cloud {generic_cloud} -i0 tests/test_yamls/minimal.yaml',\n+ f'sky bench launch -y -b {name} --infra {generic_cloud} -i0 tests/test_yamls/minimal.yaml',\n 'sleep 120',\n f'sky bench show {name} | grep sky-bench-{name} | grep FINISHED',\n ],\n@@ -738,40 +740,34 @@ def test_kubernetes_context_failover(unreachable_context):\n 'kubectl get namespaces --context kind-skypilot | grep test-namespace || '\n '{ echo \"Should set the namespace to test-namespace for kind-skypilot. Check the instructions in '\n 'tests/test_smoke.py::test_kubernetes_context_failover.\" && exit 1; }',\n- 'sky show-gpus --cloud kubernetes --region kind-skypilot | grep H100 | grep \"1, 2, 4, 8\"',\n+ 'sky show-gpus --infra kubernetes/kind-skypilot | grep H100 | grep \"1, 2, 4, 8\"',\n # Get contexts and set current context to the other cluster that is not kind-skypilot\n f'kubectl config use-context {context}',\n # H100 should not be in the current context\n- f'! sky show-gpus --cloud kubernetes --region {context} | grep H100',\n+ f'! 
sky show-gpus --infra kubernetes/{context} | grep H100',\n # H100 should be displayed as long as it is available in one of the contexts\n- 'sky show-gpus --cloud kubernetes | grep H100',\n+ 'sky show-gpus --infra kubernetes | grep H100',\n f'sky launch -y -c {name}-1 --cpus 1 echo hi',\n f'sky logs {name}-1 --status',\n # It should be launched not on kind-skypilot\n f'sky status -v {name}-1 | grep \"{context}\"',\n # Test failure for launching H100 on other cluster\n- f'sky launch -y -c {name}-2 --gpus H100 --cpus 1 --cloud kubernetes --region {context} echo hi && exit 1 || true',\n+ f'sky launch -y -c {name}-2 --gpus H100 --cpus 1 --infra kubernetes/{context} echo hi && exit 1 || true',\n # Test failover\n- f'sky launch -y -c {name}-3 --gpus H100 --cpus 1 --cloud kubernetes echo hi',\n+ f'sky launch -y -c {name}-3 --gpus H100 --cpus 1 --infra kubernetes echo hi',\n f'sky logs {name}-3 --status',\n # Test pods\n f'kubectl get pods --context kind-skypilot | grep \"{name}-3\"',\n # It should be launched on kind-skypilot\n f'sky status -v {name}-3 | grep \"kind-skypilot\"',\n # Should be 7 free GPUs\n- f'sky show-gpus --cloud kubernetes --region kind-skypilot | grep H100 | grep \" 7\"',\n+ f'sky show-gpus --infra kubernetes/kind-skypilot | grep H100 | grep \" 7\"',\n # Remove the line with \"kind-skypilot\"\n f'sed -i \"/kind-skypilot/d\" {f.name}',\n- # Should still be able to exec and launch on existing cluster\n- f'sky exec {name}-3 \"echo hi\"',\n- f'sky logs {name}-3 --status',\n- f'sky status -r {name}-3 | grep UP',\n- f'sky launch -c {name}-3 --gpus h100 echo hi',\n- f'sky logs {name}-3 --status',\n- f'sky status -r {name}-3 | grep UP',\n+ f'export KUBECONFIG={f.name}',\n # Test failure for launching on unreachable context\n f'kubectl config use-context {unreachable_context}',\n- f'sky launch -y -c {name}-4 --gpus H100 --cpus 1 --cloud kubernetes --region {unreachable_context} echo hi && exit 1 || true',\n+ f'sky launch -y -c {name}-4 --gpus H100 --cpus 1 --infra kubernetes/{unreachable_context} echo hi && exit 1 || true',\n # Test failover from unreachable context\n f'sky launch -y -c {name}-5 --cpus 1 echo hi',\n ],\n@@ -836,7 +832,7 @@ def test_cancel_launch_and_exec_async(generic_cloud: str):\n wait_cmd = wait_cmd.replace('sleep 10', 'sleep 1')\n test = smoke_tests_utils.Test(\n 'cancel_launch_and_exec_async', [\n- f'sky launch -c {name} -y --cloud {generic_cloud} --async',\n+ f'sky launch -c {name} -y --infra {generic_cloud} --async',\n (f's=$(sky exec {name} echo --async) && '\n 'echo \"$s\" && '\n 'logs_cmd=$(echo \"$s\" | grep \"Check logs with\" | '\n@@ -861,7 +857,7 @@ def test_cli_exit_codes(generic_cloud: str):\n 'cli_exit_codes',\n [\n # Test successful job exit code (0)\n- f'sky launch -y -c {name} --cloud {generic_cloud} \"echo success\" && echo \"Exit code: $?\"',\n+ f'sky launch -y -c {name} --infra {generic_cloud} \"echo success\" && echo \"Exit code: $?\"',\n f'sky logs {name} 1 --status | grep SUCCEEDED',\n \n # Test that sky logs with successful job returns 0\ndiff --git a/tests/smoke_tests/test_cluster_job.py b/tests/smoke_tests/test_cluster_job.py\nindex 0437d35bd2b..8bf3f635bf9 100644\n--- a/tests/smoke_tests/test_cluster_job.py\n+++ b/tests/smoke_tests/test_cluster_job.py\n@@ -54,7 +54,7 @@ def test_job_queue(generic_cloud: str, accelerator: Dict[str, str]):\n test = smoke_tests_utils.Test(\n 'job_queue',\n [\n- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus {accelerator} examples/job_queue/cluster.yaml',\n+ f'sky 
launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus {accelerator} examples/job_queue/cluster.yaml',\n f'sky exec {name} -n {name}-1 -d --gpus {accelerator}:0.5 examples/job_queue/job.yaml',\n f'sky exec {name} -n {name}-2 -d --gpus {accelerator}:0.5 examples/job_queue/job.yaml',\n f'sky exec {name} -n {name}-3 -d --gpus {accelerator}:0.5 examples/job_queue/job.yaml',\n@@ -115,7 +115,7 @@ def test_job_queue_with_docker(generic_cloud: str, image_id: str,\n test = smoke_tests_utils.Test(\n 'job_queue_with_docker',\n [\n- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus {accelerator} --image-id {image_id} examples/job_queue/cluster_docker.yaml',\n+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus {accelerator} --image-id {image_id} examples/job_queue/cluster_docker.yaml',\n f'sky exec {name} -n {name}-1 -d --gpus {accelerator}:0.5 --image-id {image_id} --env TIME_TO_SLEEP={time_to_sleep*2} examples/job_queue/job_docker.yaml',\n f'sky exec {name} -n {name}-2 -d --gpus {accelerator}:0.5 --image-id {image_id} --env TIME_TO_SLEEP={time_to_sleep} examples/job_queue/job_docker.yaml',\n f'sky exec {name} -n {name}-3 -d --gpus {accelerator}:0.5 --image-id {image_id} --env TIME_TO_SLEEP={time_to_sleep} examples/job_queue/job_docker.yaml',\n@@ -180,10 +180,10 @@ def test_ibm_job_queue():\n test = smoke_tests_utils.Test(\n 'ibm_job_queue',\n [\n- f'sky launch -y -c {name} --cloud ibm --gpus v100',\n- f'sky exec {name} -n {name}-1 --cloud ibm -d examples/job_queue/job_ibm.yaml',\n- f'sky exec {name} -n {name}-2 --cloud ibm -d examples/job_queue/job_ibm.yaml',\n- f'sky exec {name} -n {name}-3 --cloud ibm -d examples/job_queue/job_ibm.yaml',\n+ f'sky launch -y -c {name} --infra ibm --gpus v100',\n+ f'sky exec {name} -n {name}-1 --infra ibm -d examples/job_queue/job_ibm.yaml',\n+ f'sky exec {name} -n {name}-2 --infra ibm -d examples/job_queue/job_ibm.yaml',\n+ f'sky exec {name} -n {name}-3 --infra ibm -d examples/job_queue/job_ibm.yaml',\n f'sky queue {name} | grep {name}-1 | grep RUNNING',\n f'sky queue {name} | grep {name}-2 | grep RUNNING',\n f'sky queue {name} | grep {name}-3 | grep PENDING',\n@@ -239,7 +239,7 @@ def test_job_queue_multinode(generic_cloud: str, accelerator: Dict[str, str]):\n test = smoke_tests_utils.Test(\n 'job_queue_multinode',\n [\n- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus {accelerator} examples/job_queue/cluster_multinode.yaml',\n+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus {accelerator} examples/job_queue/cluster_multinode.yaml',\n f'sky exec {name} -n {name}-1 -d --gpus {accelerator}:0.5 examples/job_queue/job_multinode.yaml',\n f'sky exec {name} -n {name}-2 -d --gpus {accelerator}:0.5 examples/job_queue/job_multinode.yaml',\n f'sky launch -c {name} -n {name}-3 -d --gpus {accelerator}:0.5 examples/job_queue/job_multinode.yaml',\n@@ -280,7 +280,7 @@ def test_large_job_queue(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'large_job_queue',\n [\n- f'sky launch -y -c {name} --cpus 8 --cloud {generic_cloud}',\n+ f'sky launch -y -c {name} --cpus 8 --infra {generic_cloud}',\n f'for i in `seq 1 75`; do sky exec {name} -n {name}-$i -d \"echo $i; sleep 100000000\"; done',\n f'sky cancel -y {name} 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16',\n 'sleep 90',\n@@ -328,7 +328,7 @@ def test_fast_large_job_queue(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 
'fast_large_job_queue',\n [\n- f'sky launch -y -c {name} --cpus 8 --cloud {generic_cloud}',\n+ f'sky launch -y -c {name} --cpus 8 --infra {generic_cloud}',\n f'for i in `seq 1 32`; do sky exec {name} -n {name}-$i -d \"echo $i\"; done',\n 'sleep 60',\n f's=$(sky queue {name}); echo \"$s\"; echo; echo; echo \"$s\" | grep -v grep | grep SUCCEEDED | wc -l | grep 32',\n@@ -346,7 +346,7 @@ def test_ibm_job_queue_multinode():\n test = smoke_tests_utils.Test(\n 'ibm_job_queue_multinode',\n [\n- f'sky launch -y -c {name} --cloud ibm --gpus v100 --num-nodes 2',\n+ f'sky launch -y -c {name} --infra ibm --gpus v100 --num-nodes 2',\n f'sky exec {name} -n {name}-1 -d {task_file}',\n f'sky exec {name} -n {name}-2 -d {task_file}',\n f'sky launch -y -c {name} -n {name}-3 -d {task_file}',\n@@ -391,7 +391,7 @@ def test_docker_preinstalled_package(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'docker_with_preinstalled_package',\n [\n- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --image-id docker:nginx',\n+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --image-id docker:nginx',\n f'sky exec {name} \"nginx -V\"',\n f'sky logs {name} 1 --status',\n f'sky exec {name} whoami | grep root',\n@@ -471,7 +471,7 @@ def test_huggingface(generic_cloud: str, accelerator: Dict[str, str]):\n test = smoke_tests_utils.Test(\n 'huggingface_glue_imdb_app',\n [\n- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus {accelerator} examples/huggingface_glue_imdb_app.yaml',\n+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus {accelerator} examples/huggingface_glue_imdb_app.yaml',\n f'sky logs {name} 1 --status', # Ensure the job succeeded.\n f'sky exec {name} --gpus {accelerator} examples/huggingface_glue_imdb_app.yaml',\n f'sky logs {name} 2 --status', # Ensure the job succeeded.\n@@ -623,7 +623,7 @@ def test_multi_hostname(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'multi_hostname',\n [\n- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} examples/multi_hostname.yaml',\n+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} examples/multi_hostname.yaml',\n f'sky logs {name} 1 --status', # Ensure the job succeeded.\n f'sky logs {name} 1 | grep \"My hostname:\" | wc -l | grep 2', # Ensure there are 2 hosts.\n f'sky exec {name} examples/multi_hostname.yaml',\n@@ -643,7 +643,7 @@ def test_multi_node_failure(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'multi_node_failure',\n [\n- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/failed_worker_setup.yaml || [ $? -eq 100 ]',\n+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/failed_worker_setup.yaml || [ $? -eq 100 ]',\n f'sky logs {name} 1 --status | grep FAILED_SETUP', # Ensure the job setup failed.\n f'sky exec {name} tests/test_yamls/failed_worker_run.yaml || [ $? 
-eq 100 ]',\n f'sky logs {name} 2 --status | grep FAILED', # Ensure the job failed.\n@@ -661,7 +661,7 @@ def test_gcp_http_server_with_custom_ports():\n test = smoke_tests_utils.Test(\n 'gcp_http_server_with_custom_ports',\n [\n- f'sky launch -y -d -c {name} --cloud gcp {smoke_tests_utils.LOW_RESOURCE_ARG} examples/http_server_with_custom_ports/task.yaml',\n+ f'sky launch -y -d -c {name} --infra gcp {smoke_tests_utils.LOW_RESOURCE_ARG} examples/http_server_with_custom_ports/task.yaml',\n f'until SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}; do sleep 10; done',\n # Retry a few times to avoid flakiness in ports being open.\n f'ip=$(SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}); success=false; for i in $(seq 1 5); do if curl $ip | grep \"<h1>This is a demo HTML page.</h1>\"; then success=true; break; fi; sleep 10; done; if [ \"$success\" = false ]; then exit 1; fi',\n@@ -678,7 +678,7 @@ def test_aws_http_server_with_custom_ports():\n test = smoke_tests_utils.Test(\n 'aws_http_server_with_custom_ports',\n [\n- f'sky launch -y -d -c {name} --cloud aws {smoke_tests_utils.LOW_RESOURCE_ARG} examples/http_server_with_custom_ports/task.yaml',\n+ f'sky launch -y -d -c {name} --infra aws {smoke_tests_utils.LOW_RESOURCE_ARG} examples/http_server_with_custom_ports/task.yaml',\n f'until SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}; do sleep 10; done',\n # Retry a few times to avoid flakiness in ports being open.\n f'ip=$(SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}); success=false; for i in $(seq 1 5); do if curl $ip | grep \"<h1>This is a demo HTML page.</h1>\"; then success=true; break; fi; sleep 10; done; if [ \"$success\" = false ]; then exit 1; fi'\n@@ -695,7 +695,7 @@ def test_azure_http_server_with_custom_ports():\n test = smoke_tests_utils.Test(\n 'azure_http_server_with_custom_ports',\n [\n- f'sky launch -y -d -c {name} --cloud azure {smoke_tests_utils.LOW_RESOURCE_ARG} examples/http_server_with_custom_ports/task.yaml',\n+ f'sky launch -y -d -c {name} --infra azure {smoke_tests_utils.LOW_RESOURCE_ARG} examples/http_server_with_custom_ports/task.yaml',\n f'until SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}; do sleep 10; done',\n # Retry a few times to avoid flakiness in ports being open.\n f'ip=$(SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}); success=false; for i in $(seq 1 5); do if curl $ip | grep \"<h1>This is a demo HTML page.</h1>\"; then success=true; break; fi; sleep 10; done; if [ \"$success\" = false ]; then exit 1; fi'\n@@ -713,7 +713,7 @@ def test_kubernetes_http_server_with_custom_ports():\n test = smoke_tests_utils.Test(\n 'kubernetes_http_server_with_custom_ports',\n [\n- f'sky launch -y -d -c {name} --cloud kubernetes examples/http_server_with_custom_ports/task.yaml',\n+ f'sky launch -y -d -c {name} --infra kubernetes examples/http_server_with_custom_ports/task.yaml',\n f'until SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}; do sleep 10; done',\n # Retry a few times to avoid flakiness in ports being open.\n f'ip=$(SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}); success=false; for i in $(seq 1 100); do if curl $ip | grep \"<h1>This is a demo HTML page.</h1>\"; then success=true; break; fi; sleep 5; done; if [ \"$success\" = false ]; then exit 1; fi'\n@@ -730,7 +730,7 @@ def test_paperspace_http_server_with_custom_ports():\n test = smoke_tests_utils.Test(\n 'paperspace_http_server_with_custom_ports',\n [\n- f'sky launch -y -d -c {name} --cloud paperspace examples/http_server_with_custom_ports/task.yaml',\n+ f'sky launch -y -d -c 
{name} --infra paperspace examples/http_server_with_custom_ports/task.yaml',\n f'until SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}; do sleep 10; done',\n # Retry a few times to avoid flakiness in ports being open.\n f'ip=$(SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}); success=false; for i in $(seq 1 5); do if curl $ip | grep \"<h1>This is a demo HTML page.</h1>\"; then success=true; break; fi; sleep 10; done; if [ \"$success\" = false ]; then exit 1; fi',\n@@ -747,7 +747,7 @@ def test_runpod_http_server_with_custom_ports():\n test = smoke_tests_utils.Test(\n 'runpod_http_server_with_custom_ports',\n [\n- f'sky launch -y -d -c {name} --cloud runpod examples/http_server_with_custom_ports/task.yaml',\n+ f'sky launch -y -d -c {name} --infra runpod examples/http_server_with_custom_ports/task.yaml',\n f'until SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}; do sleep 10; done',\n # Retry a few times to avoid flakiness in ports being open.\n f'ip=$(SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}); success=false; for i in $(seq 1 5); do if curl $ip | grep \"<h1>This is a demo HTML page.</h1>\"; then success=true; break; fi; sleep 10; done; if [ \"$success\" = false ]; then exit 1; fi',\n@@ -862,7 +862,7 @@ def test_add_pod_annotations_for_autodown_with_launch():\n smoke_tests_utils.launch_cluster_for_cloud_cmd('kubernetes', name),\n # Launch Kubernetes cluster with two nodes, each being head node and worker node.\n # Autodown is set.\n- f'sky launch -y -c {name} -i 10 --down --num-nodes 2 --cpus=1 --cloud kubernetes',\n+ f'sky launch -y -c {name} -i 10 --down --num-nodes 2 --cpus=1 --infra kubernetes',\n # Get names of the pods containing cluster name.\n smoke_tests_utils.run_cloud_cmd_on_cluster(\n name,\n@@ -894,7 +894,7 @@ def test_add_and_remove_pod_annotations_with_autostop():\n [\n smoke_tests_utils.launch_cluster_for_cloud_cmd('kubernetes', name),\n # Launch Kubernetes cluster with two nodes, each being head node and worker node.\n- f'sky launch -y -c {name} --num-nodes 2 --cpus=1 --cloud kubernetes',\n+ f'sky launch -y -c {name} --num-nodes 2 --cpus=1 --infra kubernetes',\n # Set autodown on the cluster with 'autostop' command.\n f'sky autostop -y {name} -i 20 --down',\n # Get names of the pods containing cluster name.\n@@ -1216,7 +1216,7 @@ def test_autostop(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'autostop',\n [\n- f'sky launch -y -d -c {name} --num-nodes 2 --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml',\n+ f'sky launch -y -d -c {name} --num-nodes 2 --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml',\n f'sky autostop -y {name} -i 1',\n \n # Ensure autostop is set.\n@@ -1285,7 +1285,7 @@ def test_autodown(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'autodown',\n [\n- f'sky launch -y -d -c {name} --num-nodes 2 --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml',\n+ f'sky launch -y -d -c {name} --num-nodes 2 --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml',\n f'sky autostop -y {name} --down -i 1',\n check_autostop_set,\n # Ensure the cluster is not terminated early.\n@@ -1294,14 +1294,14 @@ def test_autodown(generic_cloud: str):\n # Ensure the cluster is terminated.\n f'sleep {autodown_timeout}',\n f's=$(SKYPILOT_DEBUG=0 sky status {name} --refresh) && echo \"$s\" && {{ echo \"$s\" | grep {name} | grep \"Autodowned cluster\\|Cluster \\'{name}\\' not found\"; }} || {{ echo 
\"$s\" | grep {name} && exit 1 || exit 0; }}',\n- f'sky launch -y -d -c {name} --cloud {generic_cloud} --num-nodes 2 --down {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml',\n+ f'sky launch -y -d -c {name} --infra {generic_cloud} --num-nodes 2 --down {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml',\n f'sky status | grep {name} | grep UP', # Ensure the cluster is UP.\n- f'sky exec {name} --cloud {generic_cloud} tests/test_yamls/minimal.yaml',\n+ f'sky exec {name} --infra {generic_cloud} tests/test_yamls/minimal.yaml',\n check_autostop_set,\n f'sleep {autodown_timeout}',\n # Ensure the cluster is terminated.\n f's=$(SKYPILOT_DEBUG=0 sky status {name} --refresh) && echo \"$s\" && {{ echo \"$s\" | grep {name} | grep \"Autodowned cluster\\|Cluster \\'{name}\\' not found\"; }} || {{ echo \"$s\" | grep {name} && exit 1 || exit 0; }}',\n- f'sky launch -y -d -c {name} --cloud {generic_cloud} --num-nodes 2 --down {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml',\n+ f'sky launch -y -d -c {name} --infra {generic_cloud} --num-nodes 2 --down {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml',\n f'sky autostop -y {name} --cancel',\n f'sleep {autodown_timeout}',\n # Ensure the cluster is still UP.\n@@ -1352,7 +1352,7 @@ def _get_cancel_task_with_cloud(name, cloud, timeout=15 * 60):\n test = smoke_tests_utils.Test(\n f'{cloud}-cancel-task',\n [\n- f'sky launch -c {name} examples/resnet_app.yaml --cloud {cloud} -y -d',\n+ f'sky launch -c {name} examples/resnet_app.yaml --infra {cloud} -y -d',\n # Wait the job to be scheduled and finished setup.\n f'until sky queue {name} | grep \"RUNNING\"; do sleep 10; done',\n # Wait the setup and initialize before the GPU process starts.\n@@ -1407,7 +1407,7 @@ def test_cancel_pytorch(generic_cloud: str, accelerator: Dict[str, str]):\n test = smoke_tests_utils.Test(\n 'cancel-pytorch',\n [\n- f'sky launch -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus {accelerator} examples/resnet_distributed_torch.yaml -y -d',\n+ f'sky launch -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus {accelerator} examples/resnet_distributed_torch.yaml -y -d',\n # Wait until the setup finishes.\n smoke_tests_utils.\n get_cmd_wait_until_job_status_contains_matching_job_id(\n@@ -1444,7 +1444,7 @@ def test_cancel_ibm():\n test = smoke_tests_utils.Test(\n 'ibm-cancel-task',\n [\n- f'sky launch -y -c {name} --cloud ibm examples/minimal.yaml',\n+ f'sky launch -y -c {name} --infra ibm examples/minimal.yaml',\n f'sky exec {name} -n {name}-1 -d \"while true; do echo \\'Hello SkyPilot\\'; sleep 2; done\"',\n 'sleep 20',\n f'sky queue {name} | grep {name}-1 | grep RUNNING',\n@@ -1472,7 +1472,7 @@ def test_use_spot(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'use-spot',\n [\n- f'sky launch -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml --use-spot -y',\n+ f'sky launch -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml --use-spot -y',\n f'sky logs {name} 1 --status',\n f'sky exec {name} echo hi',\n f'sky logs {name} 2 --status',\n@@ -1493,7 +1493,7 @@ def test_azure_spot_instance_verification():\n test = smoke_tests_utils.Test(\n 'azure-spot-verification',\n [\n- f'sky launch -c {name} --cloud azure {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml --use-spot -y',\n+ f'sky launch -c {name} --infra azure {smoke_tests_utils.LOW_RESOURCE_ARG} 
tests/test_yamls/minimal.yaml --use-spot -y',\n f'sky logs {name} 1 --status', f'TARGET_VM_NAME=\"{name}\"; '\n 'VM_INFO=$(az vm list --query \"[?contains(name, \\'$TARGET_VM_NAME\\')].{Name:name, ResourceGroup:resourceGroup}\" -o tsv); '\n '[[ -z \"$VM_INFO\" ]] && exit 1; '\n@@ -1517,7 +1517,7 @@ def test_stop_gcp_spot():\n test = smoke_tests_utils.Test(\n 'stop_gcp_spot',\n [\n- f'sky launch -c {name} --cloud gcp {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot -y -- touch myfile',\n+ f'sky launch -c {name} --infra gcp {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot -y -- touch myfile',\n # stop should go through:\n f'sky stop {name} -y',\n f'sky start {name} -y',\n@@ -1550,7 +1550,7 @@ def test_inline_env(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'test-inline-env',\n [\n- f'sky launch -c {name} -y --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --env TEST_ENV=\"hello world\" -- \"([[ ! -z \\\\\"\\$TEST_ENV\\\\\" ]] && [[ ! -z \\\\\"\\${constants.SKYPILOT_NODE_IPS}\\\\\" ]] && [[ ! -z \\\\\"\\${constants.SKYPILOT_NODE_RANK}\\\\\" ]] && [[ ! -z \\\\\"\\${constants.SKYPILOT_NUM_NODES}\\\\\" ]]) || exit 1\"',\n+ f'sky launch -c {name} -y --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --env TEST_ENV=\"hello world\" -- \"([[ ! -z \\\\\"\\$TEST_ENV\\\\\" ]] && [[ ! -z \\\\\"\\${constants.SKYPILOT_NODE_IPS}\\\\\" ]] && [[ ! -z \\\\\"\\${constants.SKYPILOT_NODE_RANK}\\\\\" ]] && [[ ! -z \\\\\"\\${constants.SKYPILOT_NUM_NODES}\\\\\" ]]) || exit 1\"',\n 'sleep 20',\n f'sky logs {name} 1 --status',\n f'sky exec {name} --env TEST_ENV2=\"success\" \"([[ ! -z \\\\\"\\$TEST_ENV2\\\\\" ]] && [[ ! -z \\\\\"\\${constants.SKYPILOT_NODE_IPS}\\\\\" ]] && [[ ! -z \\\\\"\\${constants.SKYPILOT_NODE_RANK}\\\\\" ]] && [[ ! -z \\\\\"\\${constants.SKYPILOT_NUM_NODES}\\\\\" ]]) || exit 1\"',\n@@ -1569,7 +1569,7 @@ def test_inline_env_file(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'test-inline-env-file',\n [\n- f'sky launch -c {name} -y --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --env TEST_ENV=\"hello world\" -- \"([[ ! -z \\\\\"\\$TEST_ENV\\\\\" ]] && [[ ! -z \\\\\"\\${constants.SKYPILOT_NODE_IPS}\\\\\" ]] && [[ ! -z \\\\\"\\${constants.SKYPILOT_NODE_RANK}\\\\\" ]] && [[ ! -z \\\\\"\\${constants.SKYPILOT_NUM_NODES}\\\\\" ]]) || exit 1\"',\n+ f'sky launch -c {name} -y --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --env TEST_ENV=\"hello world\" -- \"([[ ! -z \\\\\"\\$TEST_ENV\\\\\" ]] && [[ ! -z \\\\\"\\${constants.SKYPILOT_NODE_IPS}\\\\\" ]] && [[ ! -z \\\\\"\\${constants.SKYPILOT_NODE_RANK}\\\\\" ]] && [[ ! -z \\\\\"\\${constants.SKYPILOT_NUM_NODES}\\\\\" ]]) || exit 1\"',\n f'sky logs {name} 1 --status',\n f'sky exec {name} --env-file examples/sample_dotenv \"([[ ! -z \\\\\"\\$TEST_ENV2\\\\\" ]] && [[ ! -z \\\\\"\\${constants.SKYPILOT_NODE_IPS}\\\\\" ]] && [[ ! -z \\\\\"\\${constants.SKYPILOT_NODE_RANK}\\\\\" ]] && [[ ! 
-z \\\\\"\\${constants.SKYPILOT_NUM_NODES}\\\\\" ]]) || exit 1\"',\n f'sky logs {name} 2 --status',\n@@ -1588,7 +1588,7 @@ def test_aws_custom_image():\n test = smoke_tests_utils.Test(\n 'test-aws-custom-image',\n [\n- f'sky launch -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --retry-until-up -y tests/test_yamls/test_custom_image.yaml --cloud aws --region us-east-2 --image-id ami-062ddd90fb6f8267a', # Nvidia image\n+ f'sky launch -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --retry-until-up -y tests/test_yamls/test_custom_image.yaml --infra aws/us-east-2 --image-id ami-062ddd90fb6f8267a', # Nvidia image\n f'sky logs {name} 1 --status',\n ],\n f'sky down -y {name}',\n@@ -1616,10 +1616,10 @@ def test_kubernetes_custom_image(image_id):\n test = smoke_tests_utils.Test(\n 'test-kubernetes-custom-image',\n [\n- f'sky launch -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --retry-until-up -y tests/test_yamls/test_custom_image.yaml --cloud kubernetes --image-id {image_id} --region None --gpus T4:1',\n+ f'sky launch -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --retry-until-up -y tests/test_yamls/test_custom_image.yaml --infra kubernetes/none --image-id {image_id} --gpus T4:1',\n f'sky logs {name} 1 --status',\n # Try exec to run again and check if the logs are printed\n- f'sky exec {name} tests/test_yamls/test_custom_image.yaml --cloud kubernetes --image-id {image_id} --region None --gpus T4:1 | grep \"Hello 100\"',\n+ f'sky exec {name} tests/test_yamls/test_custom_image.yaml --infra kubernetes/none --image-id {image_id} --gpus T4:1 | grep \"Hello 100\"',\n # Make sure ssh is working with custom username\n f'ssh {name} echo hi | grep hi',\n ],\n@@ -1677,7 +1677,7 @@ def _get_aws_query_command(region: str, instance_id: str, field: str,\n 'aws-disk-tier-' + disk_tier.value,\n [\n smoke_tests_utils.launch_cluster_for_cloud_cmd('aws', name),\n- f'sky launch -y -c {name} --cloud aws --region {region} {smoke_tests_utils.LOW_RESOURCE_ARG} '\n+ f'sky launch -y -c {name} --infra aws/{region} {smoke_tests_utils.LOW_RESOURCE_ARG} '\n f'--disk-tier {disk_tier.value} echo \"hello sky\"',\n smoke_tests_utils.run_cloud_cmd_on_cluster(\n name,\n@@ -1736,7 +1736,7 @@ def test_gcp_disk_tier(instance_types: List[str]):\n 'gcp-disk-tier-' + disk_tier.value,\n [\n smoke_tests_utils.launch_cluster_for_cloud_cmd('gcp', name),\n- f'sky launch -y -c {name} --cloud gcp --region {region} {smoke_tests_utils.LOW_RESOURCE_ARG} '\n+ f'sky launch -y -c {name} --infra gcp/{region} {smoke_tests_utils.LOW_RESOURCE_ARG} '\n f'--disk-tier {disk_tier.value} {instance_type_option} ',\n smoke_tests_utils.run_cloud_cmd_on_cluster(\n name,\n@@ -1766,7 +1766,7 @@ def test_azure_disk_tier():\n test = smoke_tests_utils.Test(\n 'azure-disk-tier-' + disk_tier.value,\n [\n- f'sky launch -y -c {name} --cloud azure --region {region} {smoke_tests_utils.LOW_RESOURCE_ARG} '\n+ f'sky launch -y -c {name} --infra azure/{region} {smoke_tests_utils.LOW_RESOURCE_ARG} '\n f'--disk-tier {disk_tier.value} echo \"hello sky\"',\n f'az resource list --tag ray-cluster-name={name_on_cloud} --query '\n f'\"[?type==\\'Microsoft.Compute/disks\\'].sku.name\" '\n@@ -1788,7 +1788,7 @@ def test_azure_best_tier_failover():\n test = smoke_tests_utils.Test(\n 'azure-best-tier-failover',\n [\n- f'sky launch -y -c {name} --cloud azure --region {region} {smoke_tests_utils.LOW_RESOURCE_ARG} '\n+ f'sky launch -y -c {name} --infra azure/{region} {smoke_tests_utils.LOW_RESOURCE_ARG} '\n f'--disk-tier best --instance-type Standard_D8_v5 echo \"hello sky\"',\n f'az resource 
list --tag ray-cluster-name={name_on_cloud} --query '\n f'\"[?type==\\'Microsoft.Compute/disks\\'].sku.name\" '\n@@ -1817,7 +1817,7 @@ def test_aws_zero_quota_failover():\n test = smoke_tests_utils.Test(\n 'aws-zero-quota-failover',\n [\n- f'sky launch -y -c {name} --cloud aws --region {region} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus V100:8 --use-spot | grep \"Found no quota\"',\n+ f'sky launch -y -c {name} --infra aws/{region} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus V100:8 --use-spot | grep \"Found no quota\"',\n ],\n f'sky down -y {name}',\n )\n@@ -1840,7 +1840,7 @@ def test_gcp_zero_quota_failover():\n test = smoke_tests_utils.Test(\n 'gcp-zero-quota-failover',\n [\n- f'sky launch -y -c {name} --cloud gcp --region {region} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus A100-80GB:1 --use-spot | grep \"Found no quota\"',\n+ f'sky launch -y -c {name} --infra gcp/{region} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus A100-80GB:1 --use-spot | grep \"Found no quota\"',\n ],\n f'sky down -y {name}',\n )\n@@ -1872,7 +1872,7 @@ def test_long_setup_run_script(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'long-setup-run-script',\n [\n- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} {f.name}',\n+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} {f.name}',\n f'sky exec {name} \"echo hello\"',\n f'sky exec {name} {f.name}',\n f'sky logs {name} --status 1',\n@@ -1909,7 +1909,7 @@ def test_min_gpt_kubernetes():\n test = smoke_tests_utils.Test(\n 'min_gpt_kubernetes',\n [\n- f'sky launch -y -c {name} --cloud kubernetes {f.name}',\n+ f'sky launch -y -c {name} --infra kubernetes {f.name}',\n f'sky logs {name} 1 --status',\n ],\n f'sky down -y {name}',\ndiff --git a/tests/smoke_tests/test_images.py b/tests/smoke_tests/test_images.py\nindex f27bef7a5f6..b7825982508 100644\n--- a/tests/smoke_tests/test_images.py\n+++ b/tests/smoke_tests/test_images.py\n@@ -57,10 +57,10 @@ def test_gcp_images():\n test = smoke_tests_utils.Test(\n 'gcp_images',\n [\n- f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --image-id skypilot:gpu-debian-10 --cloud gcp tests/test_yamls/minimal.yaml',\n+ f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --image-id skypilot:gpu-debian-10 --infra gcp tests/test_yamls/minimal.yaml',\n f'sky logs {name} 1 --status', # Ensure the job succeeded.\n- f'sky launch -c {name} --image-id skypilot:cpu-debian-10 --cloud gcp tests/test_yamls/minimal.yaml && exit 1 || true',\n- f'sky launch -y -c {name} tests/test_yamls/minimal.yaml',\n+ f'sky launch -c {name} --image-id skypilot:cpu-debian-10 --infra gcp tests/test_yamls/minimal.yaml && exit 1 || true',\n+ f'sky launch -y -c {name} --infra gcp tests/test_yamls/minimal.yaml',\n f'sky logs {name} 2 --status',\n f'sky logs {name} --status | grep \"Job 2: SUCCEEDED\"', # Equivalent.\n f'sky exec {name} \\'echo $SKYPILOT_CLUSTER_INFO | jq .cloud | grep -i gcp\\'',\n@@ -77,9 +77,9 @@ def test_azure_images():\n test = smoke_tests_utils.Test(\n 'azure_images',\n [\n- f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --image-id skypilot:gpu-ubuntu-2204 --cloud azure tests/test_yamls/minimal.yaml',\n+ f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --image-id skypilot:gpu-ubuntu-2204 --infra azure tests/test_yamls/minimal.yaml',\n f'sky logs {name} 1 --status', # Ensure the job succeeded.\n- f'sky launch -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --image-id skypilot:v1-ubuntu-2004 --cloud azure 
tests/test_yamls/minimal.yaml && exit 1 || true',\n+ f'sky launch -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --image-id skypilot:v1-ubuntu-2004 --infra azure tests/test_yamls/minimal.yaml && exit 1 || true',\n f'sky launch -y -c {name} tests/test_yamls/minimal.yaml',\n f'sky logs {name} 2 --status',\n f'sky logs {name} --status | grep \"Job 2: SUCCEEDED\"', # Equivalent.\n@@ -140,9 +140,9 @@ def test_aws_image_id_dict_region():\n # us-west-2: skypilot:gpu-ubuntu-1804\n # us-east-2: skypilot:gpu-ubuntu-2004\n # Use region to filter image_id dict.\n- f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --region us-east-1 examples/per_region_images.yaml && exit 1 || true',\n+ f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra aws/us-east-1 examples/per_region_images.yaml && exit 1 || true',\n f'sky status | grep {name} && exit 1 || true', # Ensure the cluster is not created.\n- f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --region us-east-2 examples/per_region_images.yaml',\n+ f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra aws/us-east-2 examples/per_region_images.yaml',\n # Should success because the image id match for the region.\n f'sky launch -c {name} --image-id skypilot:gpu-ubuntu-2004 examples/minimal.yaml',\n f'sky exec {name} --image-id skypilot:gpu-ubuntu-2004 examples/minimal.yaml',\n@@ -152,9 +152,9 @@ def test_aws_image_id_dict_region():\n f'sky logs {name} 3 --status',\n f'sky status -v | grep {name} | grep us-east-2', # Ensure the region is correct.\n # Ensure exec works.\n- f'sky exec {name} --region us-east-2 examples/per_region_images.yaml',\n+ f'sky exec {name} --infra aws/us-east-2 examples/per_region_images.yaml',\n f'sky exec {name} examples/per_region_images.yaml',\n- f'sky exec {name} --cloud aws --region us-east-2 \"ls ~\"',\n+ f'sky exec {name} --infra aws/us-east-2 \"ls ~\"',\n f'sky exec {name} \"ls ~\"',\n f'sky logs {name} 4 --status',\n f'sky logs {name} 5 --status',\n@@ -173,21 +173,21 @@ def test_gcp_image_id_dict_region():\n 'gcp_image_id_dict_region',\n [\n # Use region to filter image_id dict.\n- f'sky launch -y -c {name} --region us-east1 {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/gcp_per_region_images.yaml && exit 1 || true',\n+ f'sky launch -y -c {name} --infra gcp/us-east1 {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/gcp_per_region_images.yaml && exit 1 || true',\n f'sky status | grep {name} && exit 1 || true', # Ensure the cluster is not created.\n- f'sky launch -y -c {name} --region us-west3 {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/gcp_per_region_images.yaml',\n+ f'sky launch -y -c {name} --infra gcp/us-west3 {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/gcp_per_region_images.yaml',\n # Should success because the image id match for the region.\n- f'sky launch -c {name} --cloud gcp --image-id projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20230112 tests/test_yamls/minimal.yaml',\n- f'sky exec {name} --cloud gcp --image-id projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20230112 tests/test_yamls/minimal.yaml',\n- f'sky exec {name} --cloud gcp --image-id skypilot:cpu-debian-10 tests/test_yamls/minimal.yaml && exit 1 || true',\n+ f'sky launch -c {name} --infra gcp --image-id projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20230112 tests/test_yamls/minimal.yaml',\n+ f'sky exec {name} --infra gcp --image-id projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20230112 tests/test_yamls/minimal.yaml',\n+ 
f'sky exec {name} --infra gcp --image-id skypilot:cpu-debian-10 tests/test_yamls/minimal.yaml && exit 1 || true',\n f'sky logs {name} 1 --status',\n f'sky logs {name} 2 --status',\n f'sky logs {name} 3 --status',\n f'sky status -v | grep {name} | grep us-west3', # Ensure the region is correct.\n # Ensure exec works.\n- f'sky exec {name} --region us-west3 tests/test_yamls/gcp_per_region_images.yaml',\n+ f'sky exec {name} --infra gcp/us-west3 tests/test_yamls/gcp_per_region_images.yaml',\n f'sky exec {name} tests/test_yamls/gcp_per_region_images.yaml',\n- f'sky exec {name} --cloud gcp --region us-west3 \"ls ~\"',\n+ f'sky exec {name} --infra gcp/us-west3 \"ls ~\"',\n f'sky exec {name} \"ls ~\"',\n f'sky logs {name} 4 --status',\n f'sky logs {name} 5 --status',\n@@ -210,9 +210,9 @@ def test_aws_image_id_dict_zone():\n # us-west-2: skypilot:gpu-ubuntu-1804\n # us-east-2: skypilot:gpu-ubuntu-2004\n # Use zone to filter image_id dict.\n- f'sky launch -y -c {name} --zone us-east-1b {smoke_tests_utils.LOW_RESOURCE_ARG} examples/per_region_images.yaml && exit 1 || true',\n+ f'sky launch -y -c {name} --infra aws/*/us-east-1b {smoke_tests_utils.LOW_RESOURCE_ARG} examples/per_region_images.yaml && exit 1 || true',\n f'sky status | grep {name} && exit 1 || true', # Ensure the cluster is not created.\n- f'sky launch -y -c {name} --zone us-east-2a {smoke_tests_utils.LOW_RESOURCE_ARG} examples/per_region_images.yaml',\n+ f'sky launch -y -c {name} --infra aws/*/us-east-2a {smoke_tests_utils.LOW_RESOURCE_ARG} examples/per_region_images.yaml',\n # Should success because the image id match for the zone.\n f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --image-id skypilot:gpu-ubuntu-2004 examples/minimal.yaml',\n f'sky exec {name} --image-id skypilot:gpu-ubuntu-2004 examples/minimal.yaml',\n@@ -223,9 +223,9 @@ def test_aws_image_id_dict_zone():\n f'sky logs {name} 3 --status',\n f'sky status -v | grep {name} | grep us-east-2a', # Ensure the zone is correct.\n # Ensure exec works.\n- f'sky exec {name} --zone us-east-2a examples/per_region_images.yaml',\n+ f'sky exec {name} --infra aws/*/us-east-2a examples/per_region_images.yaml',\n f'sky exec {name} examples/per_region_images.yaml',\n- f'sky exec {name} --cloud aws --region us-east-2 \"ls ~\"',\n+ f'sky exec {name} --infra aws/us-east-2 \"ls ~\"',\n f'sky exec {name} \"ls ~\"',\n f'sky logs {name} 4 --status',\n f'sky logs {name} 5 --status',\n@@ -244,22 +244,22 @@ def test_gcp_image_id_dict_zone():\n 'gcp_image_id_dict_zone',\n [\n # Use zone to filter image_id dict.\n- f'sky launch -y -c {name} --zone us-east1-a {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/gcp_per_region_images.yaml && exit 1 || true',\n+ f'sky launch -y -c {name} --infra */*/us-east1-a {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/gcp_per_region_images.yaml && exit 1 || true',\n f'sky status | grep {name} && exit 1 || true', # Ensure the cluster is not created.\n- f'sky launch -y -c {name} --zone us-central1-a {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/gcp_per_region_images.yaml',\n+ f'sky launch -y -c {name} --infra */*/us-central1-a {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/gcp_per_region_images.yaml',\n # Should success because the image id match for the zone.\n- f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud gcp --image-id skypilot:cpu-debian-10 tests/test_yamls/minimal.yaml',\n- f'sky exec {name} --cloud gcp --image-id skypilot:cpu-debian-10 tests/test_yamls/minimal.yaml',\n+ f'sky launch -y -c {name} 
{smoke_tests_utils.LOW_RESOURCE_ARG} --infra gcp --image-id skypilot:cpu-debian-10 tests/test_yamls/minimal.yaml',\n+ f'sky exec {name} --infra gcp --image-id skypilot:cpu-debian-10 tests/test_yamls/minimal.yaml',\n # Fail due to image id mismatch.\n- f'sky exec {name} --cloud gcp --image-id skypilot:gpu-debian-10 tests/test_yamls/minimal.yaml && exit 1 || true',\n+ f'sky exec {name} --infra gcp --image-id skypilot:gpu-debian-10 tests/test_yamls/minimal.yaml && exit 1 || true',\n f'sky logs {name} 1 --status',\n f'sky logs {name} 2 --status',\n f'sky logs {name} 3 --status',\n f'sky status -v | grep {name} | grep us-central1', # Ensure the zone is correct.\n # Ensure exec works.\n- f'sky exec {name} --cloud gcp --zone us-central1-a tests/test_yamls/gcp_per_region_images.yaml',\n+ f'sky exec {name} --infra gcp/*/us-central1-a tests/test_yamls/gcp_per_region_images.yaml',\n f'sky exec {name} tests/test_yamls/gcp_per_region_images.yaml',\n- f'sky exec {name} --cloud gcp --region us-central1 \"ls ~\"',\n+ f'sky exec {name} --infra gcp/us-central1 \"ls ~\"',\n f'sky exec {name} \"ls ~\"',\n f'sky logs {name} 4 --status',\n f'sky logs {name} 5 --status',\n@@ -279,7 +279,7 @@ def test_clone_disk_aws():\n test = smoke_tests_utils.Test(\n 'clone_disk_aws',\n [\n- f'sky launch -y -c {name} --cloud aws --region us-east-2 --retry-until-up \"echo hello > ~/user_file.txt\"',\n+ f'sky launch -y -c {name} --infra aws/us-east-2 --retry-until-up \"echo hello > ~/user_file.txt\"',\n f'sky launch --clone-disk-from {name} -y -c {name}-clone && exit 1 || true',\n f'sky stop {name} -y',\n smoke_tests_utils.get_cmd_wait_until_cluster_status_contains(\n@@ -289,8 +289,8 @@ def test_clone_disk_aws():\n # Wait for EC2 instance to be in stopped state.\n # TODO: event based wait.\n 'sleep 60',\n- f'sky launch --clone-disk-from {name} -y -c {name}-clone --cloud aws -d --region us-east-2 \"cat ~/user_file.txt | grep hello\"',\n- f'sky launch --clone-disk-from {name} -y -c {name}-clone-2 --cloud aws -d --region us-east-2 \"cat ~/user_file.txt | grep hello\"',\n+ f'sky launch --clone-disk-from {name} -y -c {name}-clone --infra aws/us-east-2 -d \"cat ~/user_file.txt | grep hello\"',\n+ f'sky launch --clone-disk-from {name} -y -c {name}-clone-2 --infra aws/us-east-2 -d \"cat ~/user_file.txt | grep hello\"',\n f'sky logs {name}-clone 1 --status',\n f'sky logs {name}-clone-2 1 --status',\n ],\n@@ -308,11 +308,11 @@ def test_clone_disk_gcp():\n test = smoke_tests_utils.Test(\n 'clone_disk_gcp',\n [\n- f'sky launch -y -c {name} --cloud gcp --zone us-east1-b --retry-until-up \"echo hello > ~/user_file.txt\"',\n+ f'sky launch -y -c {name} --infra gcp/*/us-east1-b --retry-until-up \"echo hello > ~/user_file.txt\"',\n f'sky launch --clone-disk-from {name} -y -c {name}-clone && exit 1 || true',\n f'sky stop {name} -y',\n- f'sky launch --clone-disk-from {name} -y -c {name}-clone --cloud gcp --zone us-central1-a \"cat ~/user_file.txt | grep hello\"',\n- f'sky launch --clone-disk-from {name} -y -c {name}-clone-2 --cloud gcp --zone us-east1-b \"cat ~/user_file.txt | grep hello\"',\n+ f'sky launch --clone-disk-from {name} -y -c {name}-clone --infra gcp/*/us-central1-a \"cat ~/user_file.txt | grep hello\"',\n+ f'sky launch --clone-disk-from {name} -y -c {name}-clone-2 --infra gcp/*/us-east1-b \"cat ~/user_file.txt | grep hello\"',\n f'sky logs {name}-clone 1 --status',\n f'sky logs {name}-clone-2 1 --status',\n ],\n@@ -331,9 +331,9 @@ def test_gcp_mig():\n [\n smoke_tests_utils.launch_cluster_for_cloud_cmd('gcp', name),\n # Launch a CPU 
instance asynchronously.\n- f'sky launch -y -c {name}-cpu {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud gcp --zone {zone} --async tests/test_yamls/minimal.yaml',\n+ f'sky launch -y -c {name}-cpu {smoke_tests_utils.LOW_RESOURCE_ARG} --infra gcp/*/us-central1-a --async tests/test_yamls/minimal.yaml',\n # Launch a GPU instance.\n- f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus l4 --num-nodes 2 --image-id skypilot:gpu-debian-10 --cloud gcp --region {region} tests/test_yamls/minimal.yaml',\n+ f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus l4 --num-nodes 2 --image-id skypilot:gpu-debian-10 --infra gcp/{region} tests/test_yamls/minimal.yaml',\n f'sky logs {name} 1 --status', # Ensure the job succeeded.\n f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml',\n f'sky logs {name} 2 --status',\n@@ -354,7 +354,7 @@ def test_gcp_mig():\n )),\n # Launch again with the same region. The original instance template\n # should be removed.\n- f'sky launch -y -c {name} --gpus L4 --num-nodes 2 --region {region} nvidia-smi',\n+ f'sky launch -y -c {name} --gpus L4 --num-nodes 2 --infra gcp/{region} nvidia-smi',\n f'sky logs {name} 1 | grep \"L4\"',\n f'sky down -y {name}',\n f'sky status | grep {name}-cpu | grep UP',\n@@ -408,7 +408,7 @@ def test_gcp_force_enable_external_ips():\n \n test_commands = [\n is_on_gcp_command,\n- f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud gcp --cpus 2 tests/test_yamls/minimal.yaml',\n+ f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra gcp --cpus 2 tests/test_yamls/minimal.yaml',\n # Check network of vm is \"default\"\n (f'gcloud compute instances list --filter=name~\"{name}\" --format='\n '\"value(networkInterfaces.network)\" | grep \"networks/default\"'),\n@@ -438,7 +438,7 @@ def test_image_no_conda():\n 'image_no_conda',\n [\n # Use image id dict.\n- f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --region us-east-2 examples/per_region_images.yaml',\n+ f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra aws/us-east-2 examples/per_region_images.yaml',\n f'sky logs {name} 1 --status',\n f'sky stop {name} -y',\n f'sky start {name} -y',\n@@ -459,7 +459,7 @@ def test_custom_default_conda_env(generic_cloud: str):\n timeout *= 3\n name = smoke_tests_utils.get_cluster_name()\n test = smoke_tests_utils.Test('custom_default_conda_env', [\n- f'sky launch -c {name} -y {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud {generic_cloud} tests/test_yamls/test_custom_default_conda_env.yaml',\n+ f'sky launch -c {name} -y {smoke_tests_utils.LOW_RESOURCE_ARG} --infra {generic_cloud} tests/test_yamls/test_custom_default_conda_env.yaml',\n f'sky status -r {name} | grep \"UP\"',\n f'sky logs {name} 1 --status',\n f'sky logs {name} 1 --no-follow | grep -E \"myenv\\\\s+\\\\*\"',\ndiff --git a/tests/smoke_tests/test_managed_job.py b/tests/smoke_tests/test_managed_job.py\nindex 69915e3e89e..9a874f5a6da 100644\n--- a/tests/smoke_tests/test_managed_job.py\n+++ b/tests/smoke_tests/test_managed_job.py\n@@ -51,8 +51,8 @@ def test_managed_jobs_basic(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'managed-jobs',\n [\n- f'sky jobs launch -n {name}-1 --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} examples/managed_job.yaml -y -d',\n- f'sky jobs launch -n {name}-2 --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} examples/managed_job.yaml -y -d',\n+ f'sky jobs launch -n {name}-1 --infra {generic_cloud} 
{smoke_tests_utils.LOW_RESOURCE_ARG} examples/managed_job.yaml -y -d',\n+ f'sky jobs launch -n {name}-2 --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} examples/managed_job.yaml -y -d',\n smoke_tests_utils.\n get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n job_name=f'{name}-1',\n@@ -97,7 +97,7 @@ def test_managed_jobs_cli_exit_codes(generic_cloud: str):\n 'managed_jobs_exit_codes',\n [\n # Test jobs launch with successful job\n- f'sky jobs launch -y -n jobs-{name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} \"echo jobs success\" && echo \"Jobs launch exit code: $?\"',\n+ f'sky jobs launch -y -n jobs-{name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} \"echo jobs success\" && echo \"Jobs launch exit code: $?\"',\n smoke_tests_utils.\n get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n job_name=f'jobs-{name}',\n@@ -112,7 +112,7 @@ def test_managed_jobs_cli_exit_codes(generic_cloud: str):\n f'sky jobs logs $JOB_ID && echo \"Jobs logs exit code: $?\"',\n \n # Test jobs launch with failing job\n- f'sky jobs launch -y -n jobs-fail-{name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} \"exit 1\" || echo \"Jobs launch failed exit code: $?\" | grep \"Jobs launch failed exit code: 100\"',\n+ f'sky jobs launch -y -n jobs-fail-{name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} \"exit 1\" || echo \"Jobs launch failed exit code: $?\" | grep \"Jobs launch failed exit code: 100\"',\n smoke_tests_utils.\n get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n job_name=f'jobs-fail-{name}',\n@@ -149,7 +149,7 @@ def test_job_pipeline(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'job_pipeline',\n [\n- f'sky jobs launch -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/pipeline.yaml --cloud {generic_cloud} -y -d',\n+ f'sky jobs launch -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra {generic_cloud} tests/test_yamls/pipeline.yaml -y -d',\n # Need to wait for setup and job initialization.\n 'sleep 30',\n rf'{smoke_tests_utils.GET_JOB_QUEUE} | grep {name} | head -n1 | grep \"STARTING\\|RUNNING\"',\n@@ -194,7 +194,7 @@ def test_managed_jobs_failed_setup(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'managed_jobs_failed_setup',\n [\n- f'sky jobs launch -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud {generic_cloud} -y -d tests/test_yamls/failed_setup.yaml',\n+ f'sky jobs launch -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra {generic_cloud} -y -d tests/test_yamls/failed_setup.yaml',\n # Make sure the job failed quickly.\n smoke_tests_utils.\n get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n@@ -225,7 +225,7 @@ def test_managed_jobs_pipeline_failed_setup(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'managed_jobs_pipeline_failed_setup',\n [\n- f'sky jobs launch -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud {generic_cloud} -y -d tests/test_yamls/failed_setup_pipeline.yaml',\n+ f'sky jobs launch -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra {generic_cloud} -y -d tests/test_yamls/failed_setup_pipeline.yaml',\n smoke_tests_utils.\n get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n job_name=name,\n@@ -265,7 +265,7 @@ def test_managed_jobs_recovery_aws(aws_config_region):\n 'managed_jobs_recovery_aws',\n [\n smoke_tests_utils.launch_cluster_for_cloud_cmd('aws', name),\n- rf'sky jobs launch --cloud aws --region {region} --use-spot -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} 
\"echo SKYPILOT_TASK_ID: \\$SKYPILOT_TASK_ID; sleep 1800\" -y -d',\n+ rf'sky jobs launch --infra aws/{region} --use-spot -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} \"echo SKYPILOT_TASK_ID: \\$SKYPILOT_TASK_ID; sleep 1800\" -y -d',\n smoke_tests_utils.\n get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n job_name=name,\n@@ -316,7 +316,7 @@ def test_managed_jobs_recovery_gcp():\n 'managed_jobs_recovery_gcp',\n [\n smoke_tests_utils.launch_cluster_for_cloud_cmd('gcp', name),\n- rf'sky jobs launch --cloud gcp --zone {zone} -n {name} --use-spot {smoke_tests_utils.LOW_RESOURCE_ARG} \"echo SKYPILOT_TASK_ID: \\$SKYPILOT_TASK_ID; sleep 1800\" -y -d',\n+ rf'sky jobs launch --infra gcp/*/{zone} -n {name} --use-spot {smoke_tests_utils.LOW_RESOURCE_ARG} \"echo SKYPILOT_TASK_ID: \\$SKYPILOT_TASK_ID; sleep 1800\" -y -d',\n smoke_tests_utils.\n get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n job_name=name,\n@@ -468,7 +468,7 @@ def test_managed_jobs_recovery_default_resources(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'managed-spot-recovery-default-resources',\n [\n- f'sky jobs launch -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud {generic_cloud} --use-spot \"sleep 30 && sudo shutdown now && sleep 1000\" -y -d',\n+ f'sky jobs launch -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra {generic_cloud} --use-spot \"sleep 30 && sudo shutdown now && sleep 1000\" -y -d',\n smoke_tests_utils.\n get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n job_name=name,\n@@ -497,7 +497,7 @@ def test_managed_jobs_recovery_multi_node_aws(aws_config_region):\n 'managed_jobs_recovery_multi_node_aws',\n [\n smoke_tests_utils.launch_cluster_for_cloud_cmd('aws', name),\n- rf'sky jobs launch --cloud aws --region {region} -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot --num-nodes 2 \"echo SKYPILOT_TASK_ID: \\$SKYPILOT_TASK_ID; sleep 1800\" -y -d',\n+ rf'sky jobs launch --infra aws/{region} -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot --num-nodes 2 \"echo SKYPILOT_TASK_ID: \\$SKYPILOT_TASK_ID; sleep 1800\" -y -d',\n smoke_tests_utils.\n get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n job_name=name,\n@@ -549,7 +549,7 @@ def test_managed_jobs_recovery_multi_node_gcp():\n 'managed_jobs_recovery_multi_node_gcp',\n [\n smoke_tests_utils.launch_cluster_for_cloud_cmd('gcp', name),\n- rf'sky jobs launch --cloud gcp --zone {zone} -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot --num-nodes 2 \"echo SKYPILOT_TASK_ID: \\$SKYPILOT_TASK_ID; sleep 1800\" -y -d',\n+ rf'sky jobs launch --infra gcp/*/{zone} -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot --num-nodes 2 \"echo SKYPILOT_TASK_ID: \\$SKYPILOT_TASK_ID; sleep 1800\" -y -d',\n smoke_tests_utils.\n get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n job_name=name,\n@@ -591,7 +591,7 @@ def test_managed_jobs_cancellation_aws(aws_config_region):\n [\n smoke_tests_utils.launch_cluster_for_cloud_cmd('aws', name),\n # Test cancellation during spot cluster being launched.\n- f'sky jobs launch --cloud aws --region {region} -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot \"sleep 1000\" -y -d',\n+ f'sky jobs launch --infra aws/{region} -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot \"sleep 1000\" -y -d',\n smoke_tests_utils.\n get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n job_name=name,\n@@ -614,7 +614,7 @@ def test_managed_jobs_cancellation_aws(aws_config_region):\n '--output text) && echo \"$s\" && 
echo; [[ -z \"$s\" ]] || [[ \"$s\" = \"terminated\" ]] || [[ \"$s\" = \"shutting-down\" ]]'\n )),\n # Test cancelling the spot cluster during spot job being setup.\n- f'sky jobs launch --cloud aws --region {region} -n {name}-2 {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot tests/test_yamls/test_long_setup.yaml -y -d',\n+ f'sky jobs launch --infra aws/{region} -n {name}-2 {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot tests/test_yamls/test_long_setup.yaml -y -d',\n # The job is set up in the cluster, will shown as RUNNING.\n smoke_tests_utils.\n get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n@@ -636,7 +636,7 @@ def test_managed_jobs_cancellation_aws(aws_config_region):\n '--output text) && echo \"$s\" && echo; [[ -z \"$s\" ]] || [[ \"$s\" = \"terminated\" ]] || [[ \"$s\" = \"shutting-down\" ]]'\n )),\n # Test cancellation during spot job is recovering.\n- f'sky jobs launch --cloud aws --region {region} -n {name}-3 {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot \"sleep 1000\" -y -d',\n+ f'sky jobs launch --infra aws/{region} -n {name}-3 {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot \"sleep 1000\" -y -d',\n # The job is running in the cluster, will shown as RUNNING.\n smoke_tests_utils.\n get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n@@ -701,7 +701,7 @@ def test_managed_jobs_cancellation_gcp():\n [\n smoke_tests_utils.launch_cluster_for_cloud_cmd('gcp', name),\n # Test cancellation during spot cluster being launched.\n- f'sky jobs launch --cloud gcp --zone {zone} -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot \"sleep 1000\" -y -d',\n+ f'sky jobs launch --infra gcp/*/{zone} -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot \"sleep 1000\" -y -d',\n smoke_tests_utils.\n get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n job_name=name,\n@@ -714,7 +714,7 @@ def test_managed_jobs_cancellation_gcp():\n job_status=[sky.ManagedJobStatus.CANCELLED],\n timeout=155),\n # Test cancelling the spot cluster during spot job being setup.\n- f'sky jobs launch --cloud gcp --zone {zone} -n {name}-2 {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot tests/test_yamls/test_long_setup.yaml -y -d',\n+ f'sky jobs launch --infra gcp/*/{zone} -n {name}-2 {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot tests/test_yamls/test_long_setup.yaml -y -d',\n # The job is set up in the cluster, will shown as RUNNING.\n smoke_tests_utils.\n get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n@@ -728,7 +728,7 @@ def test_managed_jobs_cancellation_gcp():\n job_status=[sky.ManagedJobStatus.CANCELLED],\n timeout=155),\n # Test cancellation during spot job is recovering.\n- f'sky jobs launch --cloud gcp --zone {zone} -n {name_3} {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot \"sleep 1000\" -y -d',\n+ f'sky jobs launch --infra gcp/*/{zone} -n {name_3} {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot \"sleep 1000\" -y -d',\n smoke_tests_utils.\n get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n job_name=name_3,\n@@ -821,16 +821,20 @@ def test_managed_jobs_storage(generic_cloud: str):\n storage_name = f'sky-test-{timestamp}'\n output_storage_name = f'sky-test-output-{timestamp}'\n \n+ # First, add an initialization for region\n+ region = None\n+ region_flag = ''\n+ region_validation_cmd = 'true'\n+ use_spot = ' --use-spot'\n+ output_check_cmd = None\n+\n # Also perform region testing for bucket creation to validate if buckets are\n # created in the correct region and correctly mounted in managed jobs.\n # However, we inject this 
testing only for AWS and GCP since they are the\n # supported object storage providers in SkyPilot.\n- region_flag = ''\n- region_validation_cmd = 'true'\n- use_spot = ' --use-spot'\n if generic_cloud == 'aws':\n region = 'eu-central-1'\n- region_flag = f' --region {region}'\n+ region_flag = f'/{region}'\n region_cmd = test_mount_and_storage.TestStorageWithCredentials.cli_region_cmd(\n storage_lib.StoreType.S3, bucket_name=output_storage_name)\n region_validation_cmd = f's=$({region_cmd}) && echo \"$s\" && echo; echo \"$s\" | grep {region}'\n@@ -847,7 +851,7 @@ def test_managed_jobs_storage(generic_cloud: str):\n f'{non_persistent_bucket_removed_check_cmd} && exit 1 || true')\n elif generic_cloud == 'gcp':\n region = 'us-west2'\n- region_flag = f' --region {region}'\n+ region_flag = f'/{region}'\n region_cmd = test_mount_and_storage.TestStorageWithCredentials.cli_region_cmd(\n storage_lib.StoreType.GCS, bucket_name=output_storage_name)\n region_validation_cmd = f'{region_cmd} | grep {region}'\n@@ -863,12 +867,12 @@ def test_managed_jobs_storage(generic_cloud: str):\n name,\n f'{non_persistent_bucket_removed_check_cmd} && exit 1 || true')\n elif generic_cloud == 'azure':\n- region = 'centralus'\n- # Region centralus seems don't have the quota for low resource.\n+ # Azure instances with smaller than 7G memory can have flaky performance,\n # so we keep the default resource to avoid flakiness.\n low_resource_arg = \"\"\n- region_flag = f' --region {region}'\n- storage_account_name = test_mount_and_storage.TestStorageWithCredentials. \\\n+ region = 'centralus'\n+ region_flag = f'/{region}'\n+ storage_account_name = test_mount_and_storage.TestStorageWithCredentials.\\\n get_az_storage_account_name(region)\n region_cmd = test_mount_and_storage.TestStorageWithCredentials.cli_region_cmd(\n storage_lib.StoreType.AZURE,\n@@ -948,7 +952,7 @@ def test_managed_jobs_storage(generic_cloud: str):\n generic_cloud, name),\n # Override CPU/memory requirements to relax resource constraints\n # and reduce the chance of out-of-stock\n- f'sky jobs launch -n {name}{use_spot} {low_resource_arg} --cloud {generic_cloud}{region_flag} {file_path} -y -d',\n+ f'sky jobs launch -n {name}{use_spot} {low_resource_arg} --infra {generic_cloud}{region_flag} {file_path} -y -d',\n region_validation_cmd, # Check if the bucket is created in the correct region\n smoke_tests_utils.\n get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n@@ -1010,7 +1014,7 @@ def test_managed_jobs_intermediate_storage(generic_cloud: str):\n *smoke_tests_utils.STORAGE_SETUP_COMMANDS,\n # Verify command fails with correct error - run only once\n # In API server, we don't error out if the bucket does not exist, instead we create it.\n- # f'err=$(sky jobs launch -n {name} --cloud {generic_cloud} {file_path} -y 2>&1); '\n+ # f'err=$(sky jobs launch -n {name} --infra {generic_cloud} {file_path} -y 2>&1); '\n # f'ret=$?; if [ $ret -ne 0 ] && echo \"$err\" | grep -q \"StorageBucketCreateError: '\n # f'Jobs bucket \\'{intermediate_storage_name}\\' does not exist.\"; then exit 0; '\n # f'else exit 1; fi',\n@@ -1019,7 +1023,7 @@ def test_managed_jobs_intermediate_storage(generic_cloud: str):\n cmd=\n f'aws s3api create-bucket --bucket {intermediate_storage_name}'\n ),\n- f'sky jobs launch -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud {generic_cloud} {file_path} -y',\n+ f'sky jobs launch -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra {generic_cloud} {file_path} -y',\n # fail because the bucket does not exist\n smoke_tests_utils.\n 
get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n@@ -1090,7 +1094,7 @@ def test_managed_jobs_inline_env(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'test-managed-jobs-inline-env',\n [\n- rf'sky jobs launch -n {name} -y --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --env TEST_ENV=\"hello world\" -- \"echo \"\\$TEST_ENV\"; ([[ ! -z \\\"\\$TEST_ENV\\\" ]] && [[ ! -z \\\"\\${constants.SKYPILOT_NODE_IPS}\\\" ]] && [[ ! -z \\\"\\${constants.SKYPILOT_NODE_RANK}\\\" ]] && [[ ! -z \\\"\\${constants.SKYPILOT_NUM_NODES}\\\" ]]) || exit 1\"',\n+ rf'sky jobs launch -n {name} -y --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --env TEST_ENV=\"hello world\" -- \"echo \"\\$TEST_ENV\"; ([[ ! -z \\\"\\$TEST_ENV\\\" ]] && [[ ! -z \\\"\\${constants.SKYPILOT_NODE_IPS}\\\" ]] && [[ ! -z \\\"\\${constants.SKYPILOT_NODE_RANK}\\\" ]] && [[ ! -z \\\"\\${constants.SKYPILOT_NUM_NODES}\\\" ]]) || exit 1\"',\n smoke_tests_utils.\n get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n job_name=name,\n@@ -1120,7 +1124,7 @@ def test_managed_jobs_logs_sync_down(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'test-managed-jobs-logs-sync-down',\n [\n- f'sky jobs launch -n {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y examples/managed_job.yaml -d',\n+ f'sky jobs launch -n {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y examples/managed_job.yaml -d',\n smoke_tests_utils.\n get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n job_name=f'{name}',\ndiff --git a/tests/smoke_tests/test_mount_and_storage.py b/tests/smoke_tests/test_mount_and_storage.py\nindex e2f45ede9e4..91249affd8d 100644\n--- a/tests/smoke_tests/test_mount_and_storage.py\n+++ b/tests/smoke_tests/test_mount_and_storage.py\n@@ -71,7 +71,7 @@ def test_file_mounts(generic_cloud: str):\n extra_flags = '--num-nodes 1'\n test_commands = [\n *smoke_tests_utils.STORAGE_SETUP_COMMANDS,\n- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} {extra_flags} examples/using_file_mounts.yaml',\n+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} {extra_flags} examples/using_file_mounts.yaml',\n f'sky logs {name} 1 --status', # Ensure the job succeeded.\n ]\n test = smoke_tests_utils.Test(\n@@ -105,7 +105,7 @@ def test_oci_mounts():\n name = smoke_tests_utils.get_cluster_name()\n test_commands = [\n *smoke_tests_utils.STORAGE_SETUP_COMMANDS,\n- f'sky launch -y -c {name} --cloud oci --num-nodes 2 examples/oci/oci-mounts.yaml',\n+ f'sky launch -y -c {name} --infra oci --num-nodes 2 examples/oci/oci-mounts.yaml',\n f'sky logs {name} 1 --status', # Ensure the job succeeded.\n ]\n test = smoke_tests_utils.Test(\n@@ -124,12 +124,12 @@ def test_using_file_mounts_with_env_vars(generic_cloud: str):\n storage_name = TestStorageWithCredentials.generate_bucket_name()\n test_commands = [\n *smoke_tests_utils.STORAGE_SETUP_COMMANDS,\n- (f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud {generic_cloud} '\n+ (f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra {generic_cloud} '\n 'examples/using_file_mounts_with_env_vars.yaml '\n f'--env MY_BUCKET={storage_name}'),\n f'sky logs {name} 1 --status', # Ensure the job succeeded.\n # Override with --env:\n- (f'sky launch -y -c {name}-2 {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud {generic_cloud} '\n+ (f'sky launch -y -c {name}-2 {smoke_tests_utils.LOW_RESOURCE_ARG} --infra {generic_cloud} '\n 
'examples/using_file_mounts_with_env_vars.yaml '\n f'--env MY_BUCKET={storage_name} '\n '--env MY_LOCAL_PATH=tmpfile'),\n@@ -176,7 +176,7 @@ def _storage_mounts_commands_generator(f: TextIO, cluster_name: str,\n test_commands = [\n smoke_tests_utils.launch_cluster_for_cloud_cmd(cloud, cluster_name),\n *smoke_tests_utils.STORAGE_SETUP_COMMANDS,\n- f'sky launch -y -c {cluster_name} --cloud {cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} {file_path}',\n+ f'sky launch -y -c {cluster_name} --infra {cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} {file_path}',\n f'sky logs {cluster_name} 1 --status', # Ensure job succeeded.\n smoke_tests_utils.run_cloud_cmd_on_cluster(cluster_name,\n cmd=ls_hello_command),\n@@ -336,7 +336,7 @@ def test_kubernetes_context_switch():\n \n test_commands = [\n # Launch a cluster and run a simple task\n- f'sky launch -y -c {name} --cloud kubernetes \"echo Hello from original context\"',\n+ f'sky launch -y -c {name} --infra kubernetes \"echo Hello from original context\"',\n f'sky logs {name} 1 --status', # Ensure job succeeded\n \n # Get current context details and save to a file for later use in cleanup\n@@ -420,7 +420,7 @@ def test_docker_storage_mounts(generic_cloud: str, image_id: str):\n # If azure is used, the azure blob storage checking assumes the bucket is\n # created in the centralus region when getting the storage account. We\n # should set the cluster to be launched in the same region.\n- region_str = '--region centralus' if generic_cloud == 'azure' else ''\n+ region_str = f'/centralus' if generic_cloud == 'azure' else ''\n if azure_mount_unsupported_ubuntu_version in image_id:\n # The store for mount_private_mount is not specified in the template.\n # If we're running on Azure, the private mount will be created on\n@@ -449,7 +449,7 @@ def test_docker_storage_mounts(generic_cloud: str, image_id: str):\n file_path = f.name\n test_commands = [\n *smoke_tests_utils.STORAGE_SETUP_COMMANDS,\n- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} {region_str} --image-id {image_id} {file_path}',\n+ f'sky launch -y -c {name} --infra {generic_cloud}{region_str} {smoke_tests_utils.LOW_RESOURCE_ARG} --image-id {image_id} {file_path}',\n f'sky logs {name} 1 --status', # Ensure job succeeded.\n # Check AWS, GCP, or Azure storage mount.\n f'sky exec {name} {quoted_check}',\n@@ -479,7 +479,7 @@ def test_cloudflare_storage_mounts(generic_cloud: str):\n file_path = f.name\n test_commands = [\n *smoke_tests_utils.STORAGE_SETUP_COMMANDS,\n- f'sky launch -y -c {name} --cloud {generic_cloud} {file_path}',\n+ f'sky launch -y -c {name} --infra {generic_cloud} {file_path}',\n f'sky logs {name} 1 --status', # Ensure job succeeded.\n f'AWS_SHARED_CREDENTIALS_FILE={cloudflare.R2_CREDENTIALS_PATH} aws s3 ls s3://{storage_name}/hello.txt --endpoint {endpoint_url} --profile=r2'\n ]\n@@ -507,7 +507,7 @@ def test_nebius_storage_mounts(generic_cloud: str):\n file_path = f.name\n test_commands = [\n *smoke_tests_utils.STORAGE_SETUP_COMMANDS,\n- f'sky launch -y -c {name} --cloud {generic_cloud} {file_path}',\n+ f'sky launch -y -c {name} --infra {generic_cloud} {file_path}',\n f'sky logs {name} 1 --status', # Ensure job succeeded.\n f'aws s3 ls s3://{storage_name}/hello.txt --profile={nebius.NEBIUS_PROFILE_NAME}'\n ]\n@@ -537,7 +537,7 @@ def test_ibm_storage_mounts():\n file_path = f.name\n test_commands = [\n *smoke_tests_utils.STORAGE_SETUP_COMMANDS,\n- f'sky launch -y -c {name} --cloud ibm {file_path}',\n+ f'sky launch -y -c {name} --infra ibm {file_path}',\n 
f'sky logs {name} 1 --status', # Ensure job succeeded.\n f'rclone ls {rclone_profile_name}:{storage_name}/hello.txt',\n ]\n@@ -617,7 +617,7 @@ def test_ignore_exclusions(generic_cloud: str, ignore_file: str):\n # Run test commands\n test_commands = [\n # Test with sky launch\n- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --workdir {temp_dir} {yaml_path}',\n+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --workdir {temp_dir} {yaml_path}',\n f'sky logs {name} 1 --status', # Ensure the job succeeded\n \n # Test with sky jobs launch\ndiff --git a/tests/smoke_tests/test_region_and_zone.py b/tests/smoke_tests/test_region_and_zone.py\nindex ebc8d65ca5a..3d6091bd767 100644\n--- a/tests/smoke_tests/test_region_and_zone.py\n+++ b/tests/smoke_tests/test_region_and_zone.py\n@@ -37,7 +37,7 @@ def test_aws_region():\n test = smoke_tests_utils.Test(\n 'aws_region',\n [\n- f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --region us-east-2 examples/minimal.yaml',\n+ f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra */us-east-2 examples/minimal.yaml',\n f'sky exec {name} examples/minimal.yaml',\n f'sky logs {name} 1 --status', # Ensure the job succeeded.\n f'sky status -v | grep {name} | grep us-east-2', # Ensure the region is correct.\n@@ -70,15 +70,15 @@ def test_aws_with_ssh_proxy_command():\n test = smoke_tests_utils.Test(\n 'aws_with_ssh_proxy_command',\n [\n- f'sky launch -y -c jump-{name} --cloud aws {smoke_tests_utils.LOW_RESOURCE_ARG} --region us-east-1',\n+ f'sky launch -y -c jump-{name} --infra aws/us-east-1 {smoke_tests_utils.LOW_RESOURCE_ARG}',\n # Use jump config\n f'export {skypilot_config.ENV_VAR_SKYPILOT_CONFIG}={f.name}; '\n- f'sky launch -y -c {name} --cloud aws {smoke_tests_utils.LOW_RESOURCE_ARG} --region us-east-1 echo hi',\n+ f'sky launch -y -c {name} --infra aws/us-east-1 {smoke_tests_utils.LOW_RESOURCE_ARG} echo hi',\n f'sky logs {name} 1 --status',\n f'export {skypilot_config.ENV_VAR_SKYPILOT_CONFIG}={f.name}; sky exec {name} echo hi',\n f'sky logs {name} 2 --status',\n # Start a small job to make sure the controller is created.\n- f'sky jobs launch -n {name}-0 --cloud aws {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot -y echo hi',\n+ f'sky jobs launch -n {name}-0 --infra aws {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot -y echo hi',\n # Wait other tests to create the job controller first, so that\n # the job controller is not launched with proxy command.\n smoke_tests_utils.\n@@ -86,7 +86,7 @@ def test_aws_with_ssh_proxy_command():\n cluster_name_wildcard='sky-jobs-controller-*',\n cluster_status=[sky.ClusterStatus.UP],\n timeout=300),\n- f'export {skypilot_config.ENV_VAR_SKYPILOT_CONFIG}={f.name}; sky jobs launch -n {name} --cloud aws {smoke_tests_utils.LOW_RESOURCE_ARG} --region us-east-1 -yd echo hi',\n+ f'export {skypilot_config.ENV_VAR_SKYPILOT_CONFIG}={f.name}; sky jobs launch -n {name} --infra aws/us-east-1 {smoke_tests_utils.LOW_RESOURCE_ARG} -yd echo hi',\n smoke_tests_utils.\n get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n job_name=name,\n@@ -109,7 +109,7 @@ def test_gcp_region_and_service_account():\n test = smoke_tests_utils.Test(\n 'gcp_region',\n [\n- f'sky launch -y -c {name} --region us-central1 {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud gcp tests/test_yamls/minimal.yaml',\n+ f'sky launch -y -c {name} --infra gcp/us-central1 {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml',\n f'sky exec {name} 
tests/test_yamls/minimal.yaml',\n f'sky logs {name} 1 --status', # Ensure the job succeeded.\n f'sky exec {name} \\'curl -H \"Metadata-Flavor: Google\" \"http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?format=standard&audience=gcp\"\\'',\n@@ -133,8 +133,8 @@ def test_ibm_region():\n test = smoke_tests_utils.Test(\n 'region',\n [\n- f'sky launch -y -c {name} --cloud ibm --region {region} examples/minimal.yaml',\n- f'sky exec {name} --cloud ibm examples/minimal.yaml',\n+ f'sky launch -y -c {name} --infra ibm/{region} examples/minimal.yaml',\n+ f'sky exec {name} --infra ibm examples/minimal.yaml',\n f'sky logs {name} 1 --status', # Ensure the job succeeded.\n f'sky status -v | grep {name} | grep {region}', # Ensure the region is correct.\n ],\n@@ -149,7 +149,7 @@ def test_azure_region():\n test = smoke_tests_utils.Test(\n 'azure_region',\n [\n- f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --region eastus2 --cloud azure tests/test_yamls/minimal.yaml',\n+ f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra azure/eastus2 tests/test_yamls/minimal.yaml',\n f'sky exec {name} tests/test_yamls/minimal.yaml',\n f'sky logs {name} 1 --status', # Ensure the job succeeded.\n f'sky status -v | grep {name} | grep eastus2', # Ensure the region is correct.\n@@ -173,8 +173,8 @@ def test_aws_zone():\n test = smoke_tests_utils.Test(\n 'aws_zone',\n [\n- f'sky launch -y -c {name} examples/minimal.yaml {smoke_tests_utils.LOW_RESOURCE_ARG} --zone us-east-2b',\n- f'sky exec {name} examples/minimal.yaml --zone us-east-2b',\n+ f'sky launch -y -c {name} examples/minimal.yaml {smoke_tests_utils.LOW_RESOURCE_ARG} --infra */*/us-east-2b',\n+ f'sky exec {name} examples/minimal.yaml --infra */*/us-east-2b',\n f'sky logs {name} 1 --status', # Ensure the job succeeded.\n f'sky status -v | grep {name} | grep us-east-2b', # Ensure the zone is correct.\n ],\n@@ -190,8 +190,8 @@ def test_ibm_zone():\n test = smoke_tests_utils.Test(\n 'zone',\n [\n- f'sky launch -y -c {name} --cloud ibm examples/minimal.yaml {smoke_tests_utils.LOW_RESOURCE_ARG} --zone {zone}',\n- f'sky exec {name} --cloud ibm examples/minimal.yaml --zone {zone}',\n+ f'sky launch -y -c {name} --infra ibm/*/{zone} examples/minimal.yaml {smoke_tests_utils.LOW_RESOURCE_ARG}',\n+ f'sky exec {name} --infra ibm/*/{zone} examples/minimal.yaml',\n f'sky logs {name} 1 --status', # Ensure the job succeeded.\n f'sky status -v | grep {name} | grep {zone}', # Ensure the zone is correct.\n ],\n@@ -206,8 +206,8 @@ def test_gcp_zone():\n test = smoke_tests_utils.Test(\n 'gcp_zone',\n [\n- f'sky launch -y -c {name} --zone us-central1-a {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud gcp tests/test_yamls/minimal.yaml',\n- f'sky exec {name} --zone us-central1-a --cloud gcp tests/test_yamls/minimal.yaml',\n+ f'sky launch -y -c {name} --infra gcp/*/us-central1-a {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml',\n+ f'sky exec {name} --infra gcp/*/us-central1-a tests/test_yamls/minimal.yaml',\n f'sky logs {name} 1 --status', # Ensure the job succeeded.\n f'sky status -v | grep {name} | grep us-central1-a', # Ensure the zone is correct.\n ],\ndiff --git a/tests/smoke_tests/test_sky_serve.py b/tests/smoke_tests/test_sky_serve.py\nindex f766a277f8b..62064148b94 100644\n--- a/tests/smoke_tests/test_sky_serve.py\n+++ b/tests/smoke_tests/test_sky_serve.py\n@@ -113,13 +113,13 @@ def _get_service_name() -> str:\n 'echo \"$s\"')\n \n _WAIT_PROVISION_REPR = (\n- # Once controller is ready, check 
provisioning vs. vCPU=2. This is for\n- # the `_check_replica_in_status`, which will check number of `vCPU=2` in the\n+ # Once controller is ready, check provisioning vs. cpus=2. This is for\n+ # the `_check_replica_in_status`, which will check number of `cpus=2` in the\n # `sky serve status` output and use that to suggest the number of replicas.\n # However, replicas in provisioning state is possible to have a repr of `-`,\n # since the desired `launched_resources` is not decided yet. This would\n # cause an error when counting desired number of replicas. We wait for the\n- # representation of `vCPU=2` the same with number of provisioning replicas\n+ # representation of `cpus=2` the same with number of provisioning replicas\n # to avoid this error.\n # NOTE(tian): This assumes the replica will not do failover, as the\n # requested resources is only 2 vCPU and likely to be immediately available\n@@ -127,7 +127,7 @@ def _get_service_name() -> str:\n # failover\n # Check #4565 for more information.\n 'num_provisioning=$(echo \"$s\" | grep \"PROVISIONING\" | wc -l); '\n- 'num_vcpu_in_provision=$(echo \"$s\" | grep \"PROVISIONING\" | grep \"vCPU=2\" | wc -l); '\n+ 'num_vcpu_in_provision=$(echo \"$s\" | grep \"PROVISIONING\" | grep \"x(cpus=2, \" | wc -l); '\n 'until [ \"$num_provisioning\" -eq \"$num_vcpu_in_provision\" ]; '\n 'do '\n ' echo \"Waiting for provisioning resource repr ready...\"; '\n@@ -135,10 +135,10 @@ def _get_service_name() -> str:\n ' sleep 2; '\n ' s=$(sky serve status {name}); '\n ' num_provisioning=$(echo \"$s\" | grep \"PROVISIONING\" | wc -l); '\n- ' num_vcpu_in_provision=$(echo \"$s\" | grep \"PROVISIONING\" | grep \"vCPU=2\" | wc -l); '\n+ ' num_vcpu_in_provision=$(echo \"$s\" | grep \"PROVISIONING\" | grep \"x(cpus=2, \" | wc -l); '\n 'done; '\n # Provisioning is complete\n- 'echo \"Provisioning complete. PROVISIONING: $num_provisioning, vCPU=2: $num_vcpu_in_provision\"'\n+ 'echo \"Provisioning complete. 
PROVISIONING: $num_provisioning, cpus=2: $num_cpus_in_provision\"'\n )\n \n # Shell script snippet to monitor and wait for resolution of NOT_READY status:\n@@ -197,7 +197,7 @@ def _check_replica_in_status(name: str,\n timeout_seconds: int = 0) -> str:\n \"\"\"Check replicas' status and count in sky serve status\n \n- We will check vCPU=2, as all our tests use vCPU=2.\n+ We will check cpus=2, as all our tests use cpus=2.\n \n Args:\n name: the name of the service\n@@ -216,8 +216,8 @@ def _check_replica_in_status(name: str,\n ] and not status.startswith('FAILED'):\n spot_str = ''\n if is_spot:\n- spot_str = r'\\[Spot\\]'\n- resource_str = f'({spot_str}vCPU=2)'\n+ spot_str = r'\\[spot\\]'\n+ resource_str = f'x{spot_str}(cpus=2, '\n check_conditions.append(\n f'echo \"$s\" | grep \"{resource_str}\" | grep \"{status}\" | wc -l | '\n f'grep {count}')\n@@ -342,7 +342,7 @@ def generate_llm_test_command(prompt: str, expected_output: str) -> str:\n test = smoke_tests_utils.Test(\n 'test-skyserve-llm',\n [\n- f'sky serve up -n {name} --cloud {generic_cloud} --gpus {accelerator} -y tests/skyserve/llm/service.yaml',\n+ f'sky serve up -n {name} --infra {generic_cloud} --gpus {accelerator} -y tests/skyserve/llm/service.yaml',\n _SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=1),\n *[\n generate_llm_test_command(prompt, output)\n@@ -395,7 +395,7 @@ def test_skyserve_base_ondemand_fallback(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'test-skyserve-base-ondemand-fallback',\n [\n- f'sky serve up -n {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/spot/base_ondemand_fallback.yaml',\n+ f'sky serve up -n {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/spot/base_ondemand_fallback.yaml',\n _SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=2),\n _check_replica_in_status(name, [(1, True, 'READY'),\n (1, False, 'READY')]),\n@@ -417,7 +417,7 @@ def test_skyserve_dynamic_ondemand_fallback():\n 'test-skyserve-dynamic-ondemand-fallback',\n [\n smoke_tests_utils.launch_cluster_for_cloud_cmd('gcp', name),\n- f'sky serve up -n {name} --cloud gcp {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/spot/dynamic_ondemand_fallback.yaml',\n+ f'sky serve up -n {name} --infra gcp {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/spot/dynamic_ondemand_fallback.yaml',\n f'sleep 40',\n # 2 on-demand (provisioning) + 2 Spot (provisioning).\n f'{_SERVE_STATUS_WAIT.format(name=name)}; echo \"$s\";'\n@@ -475,7 +475,7 @@ def test_skyserve_user_bug_restart(generic_cloud: str):\n 'test-skyserve-user-bug-restart',\n [\n increase_initial_delay_seconds(\n- f'sky serve up -n {name} --cloud {generic_cloud} {resource_arg} -y tests/skyserve/restart/user_bug.yaml'\n+ f'sky serve up -n {name} --infra {generic_cloud} {resource_arg} -y tests/skyserve/restart/user_bug.yaml'\n ),\n f's=$(sky serve status {name}); echo \"$s\";'\n 'until echo \"$s\" | grep -A 100 \"Service Replicas\" | grep \"SHUTTING_DOWN\"; '\n@@ -490,7 +490,7 @@ def test_skyserve_user_bug_restart(generic_cloud: str):\n f'echo \"$s\" | grep -A 100 \"Service Replicas\" | grep \"{name}\" | wc -l | grep 1; '\n f'echo \"$s\" | grep -B 100 \"NO_REPLICA\" | grep \"0/0\"',\n increase_initial_delay_seconds(\n- f'sky serve update {name} --cloud {generic_cloud} {resource_arg} -y tests/skyserve/auto_restart.yaml'\n+ f'sky serve update {name} --infra {generic_cloud} {resource_arg} -y tests/skyserve/auto_restart.yaml'\n ),\n f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; '\n 'until curl 
--connect-timeout 10 --max-time 10 $endpoint | grep \"Hi, SkyPilot here\"; do sleep 1; done; sleep 2; '\n@@ -513,7 +513,7 @@ def test_skyserve_load_balancer(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'test-skyserve-load-balancer',\n [\n- f'sky serve up -n {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/load_balancer/service.yaml',\n+ f'sky serve up -n {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/load_balancer/service.yaml',\n _SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=3),\n f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; '\n f'{_SERVE_STATUS_WAIT.format(name=name)}; '\n@@ -587,7 +587,7 @@ def test_skyserve_cancel(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'test-skyserve-cancel',\n [\n- f'sky serve up -n {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/cancel/cancel.yaml',\n+ f'sky serve up -n {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/cancel/cancel.yaml',\n _SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=1),\n f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; python3 '\n 'tests/skyserve/cancel/send_cancel_request.py '\n@@ -616,7 +616,7 @@ def test_skyserve_streaming(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'test-skyserve-streaming',\n [\n- f'sky serve up -n {name} --cloud {generic_cloud} {resource_arg} -y tests/skyserve/streaming/streaming.yaml',\n+ f'sky serve up -n {name} --infra {generic_cloud} {resource_arg} -y tests/skyserve/streaming/streaming.yaml',\n _SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=1),\n f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; '\n 'python3 tests/skyserve/streaming/send_streaming_request.py '\n@@ -637,7 +637,7 @@ def test_skyserve_readiness_timeout_fail(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'test-skyserve-readiness-timeout-fail',\n [\n- f'sky serve up -n {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/readiness_timeout/task.yaml',\n+ f'sky serve up -n {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/readiness_timeout/task.yaml',\n # None of the readiness probe will pass, so the service will be\n # terminated after the initial delay.\n f's=$(sky serve status {name}); '\n@@ -663,7 +663,7 @@ def test_skyserve_large_readiness_timeout(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'test-skyserve-large-readiness-timeout',\n [\n- f'sky serve up -n {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/readiness_timeout/task_large_timeout.yaml',\n+ f'sky serve up -n {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/readiness_timeout/task_large_timeout.yaml',\n _SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=1),\n f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; '\n 'request_output=$(curl $endpoint); echo \"$request_output\"; echo \"$request_output\" | grep \"Hi, SkyPilot here\"',\n@@ -689,10 +689,10 @@ def test_skyserve_update(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'test-skyserve-update',\n [\n- f'sky serve up -n {name} --cloud {generic_cloud} {resource_arg} -y tests/skyserve/update/old.yaml',\n+ f'sky serve up -n {name} --infra {generic_cloud} {resource_arg} -y tests/skyserve/update/old.yaml',\n _SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=2),\n f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; curl $endpoint | grep \"Hi, SkyPilot here\"',\n- f'sky serve update {name} 
--cloud {generic_cloud} {resource_arg} --mode blue_green -y tests/skyserve/update/new.yaml',\n+ f'sky serve update {name} --infra {generic_cloud} {resource_arg} --mode blue_green -y tests/skyserve/update/new.yaml',\n # sleep before update is registered.\n 'sleep 20',\n f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; '\n@@ -733,12 +733,12 @@ def test_skyserve_rolling_update(generic_cloud: str):\n f'test-skyserve-rolling-update',\n [\n increase_initial_delay_seconds(\n- f'sky serve up -n {name} --cloud {generic_cloud} {resource_arg} -y tests/skyserve/update/old.yaml'\n+ f'sky serve up -n {name} --infra {generic_cloud} {resource_arg} -y tests/skyserve/update/old.yaml'\n ),\n _SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=2),\n f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; curl $endpoint | grep \"Hi, SkyPilot here\"',\n increase_initial_delay_seconds(\n- f'sky serve update {name} --cloud {generic_cloud} {resource_arg} -y tests/skyserve/update/new.yaml'\n+ f'sky serve update {name} --infra {generic_cloud} {resource_arg} -y tests/skyserve/update/new.yaml'\n ),\n # Make sure the traffic is mixed across two versions, the replicas\n # with even id will sleep 120 seconds before being ready, so we\n@@ -785,10 +785,10 @@ def test_skyserve_fast_update(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'test-skyserve-fast-update',\n [\n- f'sky serve up -n {name} -y {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud {generic_cloud} tests/skyserve/update/bump_version_before.yaml',\n+ f'sky serve up -n {name} -y {smoke_tests_utils.LOW_RESOURCE_ARG} --infra {generic_cloud} tests/skyserve/update/bump_version_before.yaml',\n _SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=2),\n f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; curl $endpoint | grep \"Hi, SkyPilot here\"',\n- f'sky serve update {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --mode blue_green -y tests/skyserve/update/bump_version_after.yaml',\n+ f'sky serve update {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --mode blue_green -y tests/skyserve/update/bump_version_after.yaml',\n # sleep to wait for update to be registered.\n 'sleep 40',\n # 2 on-deamnd (ready) + 1 on-demand (provisioning).\n@@ -802,7 +802,7 @@ def test_skyserve_fast_update(generic_cloud: str):\n _check_service_version(name, \"2\"),\n f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; curl $endpoint | grep \"Hi, SkyPilot here\"',\n # Test rolling update\n- f'sky serve update {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/update/bump_version_before.yaml',\n+ f'sky serve update {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/update/bump_version_before.yaml',\n # sleep to wait for update to be registered.\n 'sleep 25',\n # 2 on-deamnd (ready) + 1 on-demand (shutting down).\n@@ -833,14 +833,14 @@ def test_skyserve_update_autoscale(generic_cloud: str):\n f'test-skyserve-update-autoscale',\n [\n increase_initial_delay_seconds(\n- f'sky serve up -n {name} --cloud {generic_cloud} {resource_arg} -y tests/skyserve/update/num_min_two.yaml'\n+ f'sky serve up -n {name} --infra {generic_cloud} {resource_arg} -y tests/skyserve/update/num_min_two.yaml'\n ),\n _SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=2) +\n _check_service_version(name, \"1\"),\n f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; '\n 'curl $endpoint | grep \"Hi, SkyPilot here\"',\n increase_initial_delay_seconds(\n- f'sky serve update {name} --cloud {generic_cloud} {resource_arg} --mode blue_green -y 
tests/skyserve/update/num_min_one.yaml'\n+ f'sky serve update {name} --infra {generic_cloud} {resource_arg} --mode blue_green -y tests/skyserve/update/num_min_one.yaml'\n ),\n # sleep before update is registered.\n 'sleep 20',\n@@ -851,7 +851,7 @@ def test_skyserve_update_autoscale(generic_cloud: str):\n 'curl $endpoint | grep \"Hi, SkyPilot here!\"',\n # Rolling Update\n increase_initial_delay_seconds(\n- f'sky serve update {name} --cloud {generic_cloud} {resource_arg} -y tests/skyserve/update/num_min_two.yaml'\n+ f'sky serve update {name} --infra {generic_cloud} {resource_arg} -y tests/skyserve/update/num_min_two.yaml'\n ),\n # sleep before update is registered.\n 'sleep 20',\n@@ -909,12 +909,12 @@ def test_skyserve_new_autoscaler_update(mode: str, generic_cloud: str):\n test = smoke_tests_utils.Test(\n f'test-skyserve-new-autoscaler-update-{mode}',\n [\n- f'sky serve up -n {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/update/new_autoscaler_before.yaml',\n+ f'sky serve up -n {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/update/new_autoscaler_before.yaml',\n _SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=2) +\n _check_service_version(name, \"1\"),\n f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; '\n 's=$(curl $endpoint); echo \"$s\"; echo \"$s\" | grep \"Hi, SkyPilot here\"',\n- f'sky serve update {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --mode {mode} -y tests/skyserve/update/new_autoscaler_after.yaml',\n+ f'sky serve update {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --mode {mode} -y tests/skyserve/update/new_autoscaler_after.yaml',\n # Wait for update to be registered\n 'sleep 90',\n wait_until_no_pending,\n@@ -953,7 +953,7 @@ def test_skyserve_failures(generic_cloud: str):\n 'test-skyserve-failures',\n [\n increase_initial_delay_seconds(\n- f'sky serve up -n {name} --cloud {generic_cloud} {resource_arg} -y tests/skyserve/failures/initial_delay.yaml'\n+ f'sky serve up -n {name} --infra {generic_cloud} {resource_arg} -y tests/skyserve/failures/initial_delay.yaml'\n ),\n f's=$(sky serve status {name}); '\n f'until echo \"$s\" | grep \"FAILED_INITIAL_DELAY\"; do '\n@@ -964,7 +964,7 @@ def test_skyserve_failures(generic_cloud: str):\n # Make sure no new replicas are started for early failure.\n f'echo \"$s\" | grep -A 100 \"Service Replicas\" | grep \"{name}\" | wc -l | grep 2;',\n increase_initial_delay_seconds(\n- f'sky serve update {name} --cloud {generic_cloud} {resource_arg} -y tests/skyserve/failures/probing.yaml'\n+ f'sky serve update {name} --infra {generic_cloud} {resource_arg} -y tests/skyserve/failures/probing.yaml'\n ),\n f's=$(sky serve status {name}); '\n # Wait for replica to be ready.\n@@ -1012,7 +1012,7 @@ def test_skyserve_https(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'test-skyserve-https',\n [\n- f'sky serve up -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud {generic_cloud} -y tests/skyserve/https/service.yaml '\n+ f'sky serve up -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra {generic_cloud} -y tests/skyserve/https/service.yaml '\n f'--env TLS_KEYFILE_ENV_VAR={keyfile} --env TLS_CERTFILE_ENV_VAR={certfile}',\n _SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=1),\n f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; '\n@@ -1043,7 +1043,7 @@ def test_skyserve_multi_ports(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'test-skyserve-multi-ports',\n [\n- f'sky serve up -n {name} --cloud {generic_cloud} 
{smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/multi_ports.yaml',\n+ f'sky serve up -n {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/multi_ports.yaml',\n _SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=1),\n f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; '\n 'curl $replica_endpoint | grep \"Hi, SkyPilot here\"; '\n@@ -1069,7 +1069,7 @@ def test_user_dependencies(generic_cloud: str):\n test = smoke_tests_utils.Test(\n 'user-dependencies',\n [\n- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} \"pip install ray>2.11; ray start --head\"',\n+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} \"pip install ray>2.11; ray start --head\"',\n f'sky logs {name} 1 --status',\n f'sky exec {name} \"echo hi\"',\n f'sky logs {name} 2 --status',\ndiff --git a/tests/stress/mountedstorage/mount_stress.yaml b/tests/stress/mountedstorage/mount_stress.yaml\nindex 41b9f19656b..8caa3f49f1c 100644\n--- a/tests/stress/mountedstorage/mount_stress.yaml\n+++ b/tests/stress/mountedstorage/mount_stress.yaml\n@@ -10,7 +10,7 @@\n name: stress\n \n resources:\n- cloud: aws\n+ infra: aws\n \n workdir: .\n \ndiff --git a/tests/test_failover.py b/tests/test_failover.py\nindex c8159213d85..8b50115e58e 100644\n--- a/tests/test_failover.py\n+++ b/tests/test_failover.py\n@@ -80,7 +80,7 @@ def mock_create_instances(ec2_fail_fast, cluster_name, node_config, tags,\n monkeypatch.setattr(aws_instance, '_create_instances',\n mock_create_instances)\n task = sky.Task(run='echo hi')\n- task.set_resources(sky.Resources(sky.AWS(), instance_type='t2.micro'))\n+ task.set_resources(sky.Resources(infra='aws', instance_type='t2.micro'))\n \n with unittest.mock.patch.object(\n cloud_vm_ray_backend.FailoverCloudErrorHandlerV2,\ndiff --git a/tests/test_jobs.py b/tests/test_jobs.py\nindex a5cebd0c3d1..1ac2e76be72 100644\n--- a/tests/test_jobs.py\n+++ b/tests/test_jobs.py\n@@ -38,10 +38,8 @@ def _mock_cluster_state(_mock_db_conn, enable_all_clouds):\n cluster_name_on_cloud='test-cluster1',\n cluster_yaml='/tmp/cluster1.yaml',\n launched_nodes=2,\n- launched_resources=sky.Resources(sky.AWS(),\n- instance_type='p4d.24xlarge',\n- region='us-east-1',\n- zone='us-east-1a'),\n+ launched_resources=sky.Resources(infra='aws/us-east-1/us-east-1a',\n+ instance_type='p4d.24xlarge'),\n )\n global_user_state.add_or_update_cluster(\n 'test-cluster1',\n@@ -53,11 +51,9 @@ def _mock_cluster_state(_mock_db_conn, enable_all_clouds):\n cluster_name_on_cloud='test-cluster2',\n cluster_yaml='/tmp/cluster2.yaml',\n launched_nodes=1,\n- launched_resources=sky.Resources(sky.GCP(),\n+ launched_resources=sky.Resources(infra='gcp/us-west1/us-west1-a',\n instance_type='n1-highmem-64',\n- accelerators='V100:4',\n- region='us-west1',\n- zone='us-west1-a'),\n+ accelerators='V100:4'),\n )\n global_user_state.add_or_update_cluster(\n 'test-cluster2',\n@@ -69,9 +65,8 @@ def _mock_cluster_state(_mock_db_conn, enable_all_clouds):\n cluster_name_on_cloud='test-cluster3',\n cluster_yaml='/tmp/cluster3.yaml',\n launched_nodes=1,\n- launched_resources=sky.Resources(sky.Azure(),\n- instance_type='Standard_D4s_v3',\n- region='eastus'),\n+ launched_resources=sky.Resources(infra='azure/eastus',\n+ instance_type='Standard_D4s_v3'),\n )\n global_user_state.add_or_update_cluster(\n 'test-cluster3',\n@@ -84,10 +79,8 @@ def _mock_cluster_state(_mock_db_conn, enable_all_clouds):\n cluster_yaml='/tmp/disk-tier1.yaml',\n launched_nodes=1,\n launched_resources=sky.Resources(\n- 
sky.AWS(),\n+ infra='aws/us-east-1/us-east-1a',\n instance_type='m6i.2xlarge',\n- region='us-east-1',\n- zone='us-east-1a',\n disk_tier=resources_utils.DiskTier.BEST))\n global_user_state.add_or_update_cluster(\n 'test-disk-tier1',\n@@ -100,10 +93,8 @@ def _mock_cluster_state(_mock_db_conn, enable_all_clouds):\n cluster_yaml='/tmp/disk-tier2.yaml',\n launched_nodes=1,\n launched_resources=sky.Resources(\n- sky.GCP(),\n+ infra='gcp/us-west1/us-west1-a',\n instance_type='n2-standard-8',\n- region='us-west1',\n- zone='us-west1-a',\n disk_tier=resources_utils.DiskTier.MEDIUM))\n global_user_state.add_or_update_cluster(\n 'test-disk-tier2',\n@@ -150,9 +141,8 @@ def test_launch_exec(self):\n sky.exec(task, cluster_name='test-cluster1', dryrun=True))\n task.set_resources(\n sky.Resources(\n- sky.AWS(),\n+ infra='aws/us-east-1',\n accelerators='A100:1',\n- region='us-east-1',\n ))\n sky.stream_and_get(\n sky.launch(task, cluster_name='test-cluster1', dryrun=True))\n@@ -166,7 +156,7 @@ def test_launch_exec(self):\n sky.stream_and_get(\n sky.exec(task, cluster_name='test-cluster2', dryrun=True))\n task.set_resources(\n- sky.Resources(sky.GCP(), accelerators='V100:3', region='us-west1'))\n+ sky.Resources(infra='gcp/us-west1', accelerators='V100:3'))\n sky.stream_and_get(\n sky.launch(task, cluster_name='test-cluster2', dryrun=True))\n sky.stream_and_get(\n@@ -217,10 +207,10 @@ def test_launch_exec_mismatch(self):\n self._run_launch_exec_with_error(task, 'test-cluster3')\n \n # Cloud mismatch\n- task.set_resources(sky.Resources(sky.AWS(), accelerators='V100'))\n+ task.set_resources(sky.Resources(infra='aws', accelerators='V100'))\n self._run_launch_exec_with_error(task, 'test-cluster2')\n \n- task.set_resources(sky.Resources(sky.GCP()))\n+ task.set_resources(sky.Resources(infra='gcp'))\n self._run_launch_exec_with_error(task, 'test-cluster1')\n \n # Disk tier mismatch\ndiff --git a/tests/test_jobs_and_serve.py b/tests/test_jobs_and_serve.py\nindex 21e369e26f5..3f8d7344c21 100644\n--- a/tests/test_jobs_and_serve.py\n+++ b/tests/test_jobs_and_serve.py\n@@ -74,9 +74,8 @@ def _mock_cluster_state(_mock_db_conn, tmp_path):\n cluster_name_on_cloud='test-cluster1',\n cluster_yaml=_generate_tmp_yaml(tmp_path, 'cluster1.yaml'),\n launched_nodes=2,\n- launched_resources=sky.Resources(sky.AWS(),\n- instance_type='p3.2xlarge',\n- region='us-east-1'),\n+ launched_resources=sky.Resources(infra='aws/us-east-1',\n+ instance_type='p3.2xlarge'),\n )\n global_user_state.add_or_update_cluster(\n 'test-cluster1',\n@@ -88,10 +87,9 @@ def _mock_cluster_state(_mock_db_conn, tmp_path):\n cluster_name_on_cloud='test-cluster2',\n cluster_yaml=_generate_tmp_yaml(tmp_path, 'cluster2.yaml'),\n launched_nodes=1,\n- launched_resources=sky.Resources(sky.GCP(),\n+ launched_resources=sky.Resources(infra='gcp/us-west1',\n instance_type='a2-highgpu-4g',\n- accelerators={'A100': 4},\n- region='us-west1'),\n+ accelerators={'A100': 4}),\n )\n global_user_state.add_or_update_cluster(\n 'test-cluster2',\n@@ -103,9 +101,8 @@ def _mock_cluster_state(_mock_db_conn, tmp_path):\n cluster_name_on_cloud='test-cluster3',\n cluster_yaml=_generate_tmp_yaml(tmp_path, 'cluster3.yaml'),\n launched_nodes=4,\n- launched_resources=sky.Resources(sky.Azure(),\n- instance_type='Standard_D4s_v3',\n- region='eastus'),\n+ launched_resources=sky.Resources(infra='AZURE/eastus',\n+ instance_type='Standard_D4s_v3'),\n )\n global_user_state.add_or_update_cluster(\n 'test-cluster3',\n@@ -121,9 +118,8 @@ def _mock_jobs_controller(_mock_db_conn, tmp_path):\n 
cluster_name_on_cloud=common.JOB_CONTROLLER_NAME,\n cluster_yaml=_generate_tmp_yaml(tmp_path, 'jobs_controller.yaml'),\n launched_nodes=1,\n- launched_resources=sky.Resources(sky.AWS(),\n- instance_type='m4.2xlarge',\n- region='us-west-1'),\n+ launched_resources=sky.Resources(infra='aws/us-west-1',\n+ instance_type='m4.2xlarge'),\n )\n global_user_state.add_or_update_cluster(\n common.JOB_CONTROLLER_NAME,\n@@ -140,9 +136,8 @@ def _mock_serve_controller(_mock_db_conn, tmp_path):\n cluster_name_on_cloud=common.SKY_SERVE_CONTROLLER_NAME,\n cluster_yaml=yaml_path,\n launched_nodes=1,\n- launched_resources=sky.Resources(sky.AWS(),\n- instance_type='m4.2xlarge',\n- region='us-west-1'),\n+ launched_resources=sky.Resources(infra='aws/us-west-1',\n+ instance_type='m4.2xlarge'),\n stable_internal_external_ips=[('1.2.3.4', '4.3.2.1')],\n stable_ssh_ports=[22],\n )\ndiff --git a/tests/test_optimizer_dryruns.py b/tests/test_optimizer_dryruns.py\nindex 2de21695bd9..4e594025287 100644\n--- a/tests/test_optimizer_dryruns.py\n+++ b/tests/test_optimizer_dryruns.py\n@@ -86,16 +86,16 @@ def _test_resources_launch(*resources_args,\n \n \n def test_resources_aws(enable_all_clouds):\n- _test_resources_launch(sky.AWS(), 'p3.2xlarge')\n+ _test_resources_launch(infra='aws', instance_type='p3.2xlarge')\n \n \n def test_resources_azure(enable_all_clouds):\n- _test_resources_launch(sky.Azure(), 'Standard_NC24s_v3')\n+ _test_resources_launch(infra='azure', instance_type='Standard_NC24s_v3')\n \n \n def test_resources_gcp(enable_all_clouds):\n- _test_resources_launch(sky.GCP(), 'n1-standard-16')\n- _test_resources_launch(sky.GCP(), 'a3-highgpu-8g')\n+ _test_resources_launch(infra='gcp', instance_type='n1-standard-16')\n+ _test_resources_launch(infra='gcp', instance_type='a3-highgpu-8g')\n \n \n def test_partial_cpus(enable_all_clouds):\n@@ -419,20 +419,15 @@ def test_invalid_image(enable_all_clouds):\n \n \n def test_valid_image(enable_all_clouds):\n- _test_resources(cloud=sky.AWS(),\n- region='us-east-1',\n- image_id='ami-0868a20f5a3bf9702')\n+ _test_resources(infra='aws/us-east-1', image_id='ami-0868a20f5a3bf9702')\n _test_resources(\n- cloud=sky.GCP(),\n- region='us-central1',\n+ infra='gcp/us-central1',\n image_id=\n- 'projects/deeplearning-platform-release/global/images/family/common-cpu-v20230126'\n- )\n+ 'projects/ubuntu-os-cloud/global/images/ubuntu-2204-jammy-v20240927')\n _test_resources(\n- cloud=sky.GCP(),\n+ infra='gcp',\n image_id=\n- 'projects/deeplearning-platform-release/global/images/family/common-cpu-v20230126'\n- )\n+ 'projects/ubuntu-os-cloud/global/images/ubuntu-2204-jammy-v20240927')\n \n \n def test_parse_cpus_from_yaml():\n@@ -566,9 +561,8 @@ def test_invalid_accelerators_regions(enable_all_clouds):\n task = sky.Task(run='echo hi')\n task.set_resources(\n sky.Resources(\n- sky.AWS(),\n+ infra='aws/us-west-1',\n accelerators='A100:8',\n- region='us-west-1',\n ))\n with pytest.raises(exceptions.ResourcesUnavailableError) as e:\n sky.stream_and_get(\n@@ -591,7 +585,7 @@ def _test_optimize_speed(resources: sky.Resources):\n def test_optimize_speed(enable_all_clouds):\n _test_optimize_speed(sky.Resources(cpus=4))\n for cloud in registry.CLOUD_REGISTRY.values():\n- _test_optimize_speed(sky.Resources(cloud, cpus='4+'))\n+ _test_optimize_speed(sky.Resources(infra=str(cloud), cpus='4+'))\n _test_optimize_speed(sky.Resources(cpus='4+', memory='4+'))\n _test_optimize_speed(\n sky.Resources(cpus='4+', memory='4+', accelerators='V100:1'))\ndiff --git a/tests/test_optimizer_random_dag.py 
b/tests/test_optimizer_random_dag.py\nindex 1a848097ab7..8efaaf098e4 100644\n--- a/tests/test_optimizer_random_dag.py\n+++ b/tests/test_optimizer_random_dag.py\n@@ -83,7 +83,7 @@ def generate_random_dag(\n if 'tpu' in candidate.accelerator_name:\n instance_type = 'TPU-VM'\n resources = sky.Resources(\n- cloud=registry.CLOUD_REGISTRY.from_str(candidate.cloud),\n+ infra=candidate.cloud,\n instance_type=instance_type,\n accelerators={\n candidate.accelerator_name: candidate.accelerator_count\ndiff --git a/tests/test_yamls/failed_setup_pipeline.yaml b/tests/test_yamls/failed_setup_pipeline.yaml\nindex 81e5f2bde34..3d4b3885b18 100644\n--- a/tests/test_yamls/failed_setup_pipeline.yaml\n+++ b/tests/test_yamls/failed_setup_pipeline.yaml\n@@ -9,8 +9,8 @@ resources:\n cpus: 2\n memory: 4+\n any_of:\n- - cloud: aws\n- - cloud: gcp\n+ - infra: aws\n+ - infra: gcp\n \n \n setup: |\n@@ -27,8 +27,8 @@ resources:\n cpus: 2\n memory: 4+\n any_of:\n- - cloud: aws\n- - cloud: gcp\n+ - infra: aws\n+ - infra: gcp\n \n \n setup: |\n@@ -47,8 +47,8 @@ resources:\n cpus: 2\n memory: 4+\n any_of:\n- - cloud: aws\n- - cloud: gcp\n+ - infra: aws\n+ - infra: gcp\n \n setup: |\n echo setup for eval\n@@ -67,8 +67,8 @@ resources:\n cpus: 2\n memory: 4+\n any_of:\n- - cloud: aws\n- - cloud: gcp\n+ - infra: aws\n+ - infra: gcp\n \n setup: |\n echo setup for eval\ndiff --git a/tests/test_yamls/gcp_per_region_images.yaml b/tests/test_yamls/gcp_per_region_images.yaml\nindex db07061d5d9..8d309ca8b8f 100644\n--- a/tests/test_yamls/gcp_per_region_images.yaml\n+++ b/tests/test_yamls/gcp_per_region_images.yaml\n@@ -1,5 +1,5 @@\n resources:\n- cloud: gcp\n+ infra: gcp\n image_id: \n us-central1: skypilot:cpu-debian-10\n us-west3: projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20230112\ndiff --git a/tests/test_yamls/minimal_test_quick_tests_core.yaml b/tests/test_yamls/minimal_test_quick_tests_core.yaml\nindex 15857e972dd..9159f22ad00 100644\n--- a/tests/test_yamls/minimal_test_quick_tests_core.yaml\n+++ b/tests/test_yamls/minimal_test_quick_tests_core.yaml\n@@ -1,5 +1,5 @@\n resources:\n- cloud: aws\n+ infra: aws\n instance_type: t3.small\n \n file_mounts:\ndiff --git a/tests/test_yamls/pipeline.yaml b/tests/test_yamls/pipeline.yaml\nindex 3f3e0b1e563..14514c6c736 100644\n--- a/tests/test_yamls/pipeline.yaml\n+++ b/tests/test_yamls/pipeline.yaml\n@@ -8,8 +8,8 @@ resources:\n memory: 4+\n use_spot: true\n any_of:\n- - cloud: aws\n- - cloud: gcp\n+ - infra: aws\n+ - infra: gcp\n \n setup: |\n echo setup for train\n@@ -27,8 +27,8 @@ resources:\n cpus: 2+\n memory: 4+\n any_of:\n- - cloud: aws\n- - cloud: gcp\n+ - infra: aws\n+ - infra: gcp\n \n setup: |\n echo setup for train\ndiff --git a/tests/test_yamls/pipeline_aws.yaml b/tests/test_yamls/pipeline_aws.yaml\nindex fe074f74128..f11afae5bbd 100644\n--- a/tests/test_yamls/pipeline_aws.yaml\n+++ b/tests/test_yamls/pipeline_aws.yaml\n@@ -4,8 +4,7 @@ name: pipeline\n name: a\n \n resources:\n- cloud: aws\n- region: us-east-2\n+ infra: aws/us-east-2\n cpus: 2+\n memory: 4+\n \n@@ -21,7 +20,7 @@ run: |\n name: b\n \n resources:\n- cloud: aws\n+ infra: aws\n cpus: 2+\n memory: 4+\n \n@@ -39,7 +38,7 @@ run: |\n name: eval1\n \n resources:\n- cloud: aws\n+ infra: aws\n cpus: 2+\n memory: 4+\n \n@@ -57,7 +56,7 @@ run: |\n name: eval2\n \n resources:\n- cloud: aws\n+ infra: aws\n cpus: 2+\n memory: 4+\n \ndiff --git a/tests/test_yamls/pipeline_gcp.yaml b/tests/test_yamls/pipeline_gcp.yaml\nindex c32b423a171..5d3cb1ba142 100644\n--- a/tests/test_yamls/pipeline_gcp.yaml\n+++ 
b/tests/test_yamls/pipeline_gcp.yaml\n@@ -4,8 +4,7 @@ name: pipeline\n name: a\n \n resources:\n- cloud: gcp\n- zone: us-east4-b\n+ infra: gcp/*/us-east4-b\n cpus: 2+\n memory: 4+\n \n@@ -21,7 +20,7 @@ run: |\n name: b\n \n resources:\n- cloud: gcp\n+ infra: gcp\n cpus: 2+\n memory: 4+\n \n@@ -39,7 +38,7 @@ run: |\n name: eval1\n \n resources:\n- cloud: gcp\n+ infra: gcp\n cpus: 2+\n memory: 4+\n \n@@ -57,7 +56,7 @@ run: |\n name: eval2\n \n resources:\n- cloud: gcp\n+ infra: gcp\n cpus: 2+\n memory: 4+\n \ndiff --git a/tests/test_yamls/test_custom_image.yaml b/tests/test_yamls/test_custom_image.yaml\nindex 2b304c73bca..6479d25fb65 100644\n--- a/tests/test_yamls/test_custom_image.yaml\n+++ b/tests/test_yamls/test_custom_image.yaml\n@@ -1,6 +1,5 @@\n resources:\n- cloud: aws\n- region: us-east-2\n+ infra: aws/us-east-2\n # Nvidia image from\n # https://aws.amazon.com/marketplace/pp/prodview-rf7na2b2ttvdg\n image_id: ami-062ddd90fb6f8267a\ndiff --git a/tests/test_yamls/test_multiple_accelerators_unordered_with_default.yaml b/tests/test_yamls/test_multiple_accelerators_unordered_with_default.yaml\nindex f6e143d8378..233ba8e4caf 100644\n--- a/tests/test_yamls/test_multiple_accelerators_unordered_with_default.yaml\n+++ b/tests/test_yamls/test_multiple_accelerators_unordered_with_default.yaml\n@@ -4,8 +4,8 @@ resources:\n use_spot: true\n accelerators: {'A100:1', 'T4:1', 'V100:1'}\n any_of:\n- - cloud: aws\n- - cloud: gcp\n+ - infra: aws\n+ - infra: gcp\n \n run: |\n nvidia-smi\ndiff --git a/tests/test_yamls/test_multiple_resources.yaml b/tests/test_yamls/test_multiple_resources.yaml\nindex 37c0c25e867..6771c52a70c 100644\n--- a/tests/test_yamls/test_multiple_resources.yaml\n+++ b/tests/test_yamls/test_multiple_resources.yaml\n@@ -2,12 +2,11 @@ name: multi-resources\n \n resources:\n any_of:\n- - cloud: aws\n- region: us-east-1\n+ - infra: aws/us-east-1\n accelerators: A100:8\n- - cloud: gcp\n+ - infra: gcp\n accelerators: T4:4\n- - cloud: aws\n+ - infra: aws\n \n run:\n- echo hi\n\\ No newline at end of file\n+ echo hi\ndiff --git a/tests/unit_tests/test_controller_utils.py b/tests/unit_tests/test_controller_utils.py\nindex d3704afcd78..54050be9be7 100644\n--- a/tests/unit_tests/test_controller_utils.py\n+++ b/tests/unit_tests/test_controller_utils.py\n@@ -7,6 +7,7 @@\n from sky.jobs import constants as managed_job_constants\n from sky.serve import constants as serve_constants\n from sky.utils import controller_utils\n+from sky.utils import registry\n \n _DEFAULT_AUTOSTOP = {\n 'down': False,\n@@ -73,21 +74,17 @@ def get_custom_controller_resources(keys, default):\n \n \n def _check_controller_resources(\n- controller_resources: Set[sky.Resources],\n- expected_combinations: Set[Tuple[Optional[str], Optional[str],\n- Optional[str]]],\n+ controller_resources: Set[sky.Resources], expected_infra_list: Set[str],\n default_controller_resources: Dict[str, Any]) -> None:\n \"\"\"Helper function to check that the controller resources match the\n expected combinations.\"\"\"\n for r in controller_resources:\n config = r.to_yaml_config()\n- cloud = config.pop('cloud')\n- region = config.pop('region', None)\n- zone = config.pop('zone', None)\n- assert (cloud, region, zone) in expected_combinations\n- expected_combinations.remove((cloud, region, zone))\n+ infra = config.pop('infra')\n+ assert infra in expected_infra_list\n+ expected_infra_list.remove(infra)\n assert config == default_controller_resources, config\n- assert not expected_combinations\n+ assert not expected_infra_list\n \n \n 
@pytest.mark.parametrize(('controller_type', 'default_controller_resources'), [\n@@ -107,28 +104,23 @@ def test_get_controller_resources_with_task_resources(\n # 1. All resources has cloud specified. All of them\n # could host controllers. Return a set, each item has\n # one cloud specified plus the default resources.\n- all_clouds = {sky.AWS(), sky.GCP(), sky.Azure()}\n- expected_combinations = {(str(c), None, None) for c in all_clouds}\n+ all_clouds = {'aws', 'gcp', 'azure'}\n+ expected_infra_set = all_clouds\n controller_resources = controller_utils.get_controller_resources(\n controller=controller_utils.Controllers.from_type(controller_type),\n- task_resources=[sky.Resources(cloud=c) for c in all_clouds])\n- _check_controller_resources(controller_resources, expected_combinations,\n+ task_resources=[sky.Resources(infra=c) for c in all_clouds])\n+ _check_controller_resources(controller_resources, expected_infra_set,\n default_controller_resources)\n \n # 2. All resources has cloud specified. Some of them\n # could NOT host controllers. Return a set, only\n # containing those could host controllers.\n all_clouds = {\n- sky.AWS(),\n- sky.GCP(),\n- sky.Azure(),\n- sky.Fluidstack(),\n- sky.Kubernetes(),\n- sky.Lambda(),\n- sky.RunPod()\n+ 'aws', 'gcp', 'azure', 'fluidstack', 'kubernetes', 'lambda', 'runpod'\n }\n \n- def _could_host_controllers(cloud: sky.clouds.Cloud) -> bool:\n+ def _could_host_controllers(cloud_str: str) -> bool:\n+ cloud = registry.CLOUD_REGISTRY.from_str(cloud_str)\n try:\n cloud.check_features_are_supported(\n sky.Resources(),\n@@ -137,13 +129,11 @@ def _could_host_controllers(cloud: sky.clouds.Cloud) -> bool:\n return False\n return True\n \n- expected_combinations = {\n- (str(c), None, None) for c in all_clouds if _could_host_controllers(c)\n- }\n+ expected_infra_set = {c for c in all_clouds if _could_host_controllers(c)}\n controller_resources = controller_utils.get_controller_resources(\n controller=controller_utils.Controllers.from_type(controller_type),\n- task_resources=[sky.Resources(cloud=c) for c in all_clouds])\n- _check_controller_resources(controller_resources, expected_combinations,\n+ task_resources=[sky.Resources(infra=c) for c in all_clouds])\n+ _check_controller_resources(controller_resources, expected_infra_set,\n default_controller_resources)\n \n # 3. 
Some resources does not have cloud specified.\n@@ -152,7 +142,7 @@ def _could_host_controllers(cloud: sky.clouds.Cloud) -> bool:\n controller=controller_utils.Controllers.from_type(controller_type),\n task_resources=[\n sky.Resources(accelerators='L4'),\n- sky.Resources(cloud=sky.RunPod(), accelerators='A40'),\n+ sky.Resources(infra='runpod', accelerators='A40'),\n ])\n assert len(controller_resources) == 1\n config = list(controller_resources)[0].to_yaml_config()\n@@ -170,16 +160,18 @@ def _could_host_controllers(cloud: sky.clouds.Cloud) -> bool:\n zone='us-central1-a'),\n sky.Resources(cloud=sky.GCP(),\n region='europe-west1',\n- zone='europe-west1-b')\n+ zone='europe-west1-b'),\n ]\n- expected_combinations = {('AWS', 'us-east-1', 'us-east-1a'),\n- ('AWS', 'ap-south-1', 'ap-south-1b'),\n- ('GCP', 'us-central1', 'us-central1-a'),\n- ('GCP', 'europe-west1', 'europe-west1-b')}\n+ expected_infra_set = {\n+ 'aws/us-east-1/us-east-1a',\n+ 'aws/ap-south-1/ap-south-1b',\n+ 'gcp/us-central1/us-central1-a',\n+ 'gcp/europe-west1/europe-west1-b',\n+ }\n controller_resources = controller_utils.get_controller_resources(\n controller=controller_utils.Controllers.from_type(controller_type),\n task_resources=all_cloud_regions_zones)\n- _check_controller_resources(controller_resources, expected_combinations,\n+ _check_controller_resources(controller_resources, expected_infra_set,\n default_controller_resources)\n \n # 5. Clouds and regions are specified, but zones are partially specified.\n@@ -190,17 +182,15 @@ def _could_host_controllers(cloud: sky.clouds.Cloud) -> bool:\n controller_resources = controller_utils.get_controller_resources(\n controller=controller_utils.Controllers.from_type(controller_type),\n task_resources=[\n- sky.Resources(cloud=sky.AWS(), region='us-west-2'),\n- sky.Resources(cloud=sky.AWS(),\n- region='us-west-2',\n- zone='us-west-2b'),\n- sky.Resources(cloud=sky.GCP(),\n- region='us-central1',\n- zone='us-central1-a')\n+ sky.Resources(infra='aws/us-west-2'),\n+ sky.Resources(infra='aws/us-west-2/us-west-2b'),\n+ sky.Resources(infra='gcp/us-central1/us-central1-a')\n ])\n- expected_combinations = {('AWS', 'us-west-2', None),\n- ('GCP', 'us-central1', 'us-central1-a')}\n- _check_controller_resources(controller_resources, expected_combinations,\n+ expected_infra_set = {\n+ 'aws/us-west-2',\n+ 'gcp/us-central1/us-central1-a',\n+ }\n+ _check_controller_resources(controller_resources, expected_infra_set,\n default_controller_resources)\n \n # 6. 
Mixed case: Some resources have clouds and regions or zones, others do\n@@ -219,11 +209,11 @@ def _could_host_controllers(cloud: sky.clouds.Cloud) -> bool:\n sky.Resources(cloud=sky.AWS(), region='ap-south-1'),\n sky.Resources(cloud=sky.Azure()),\n ])\n- expected_combinations = {\n- ('AWS', 'eu-north-1', None),\n- ('AWS', 'ap-south-1', None),\n- ('GCP', None, None),\n- ('Azure', None, None),\n+ expected_infra_set = {\n+ 'aws/eu-north-1',\n+ 'aws/ap-south-1',\n+ 'gcp',\n+ 'azure',\n }\n- _check_controller_resources(controller_resources, expected_combinations,\n+ _check_controller_resources(controller_resources, expected_infra_set,\n default_controller_resources)\ndiff --git a/tests/unit_tests/test_resources.py b/tests/unit_tests/test_resources.py\nindex b79211c9a9f..4c2a0bc5039 100644\n--- a/tests/unit_tests/test_resources.py\n+++ b/tests/unit_tests/test_resources.py\n@@ -50,8 +50,9 @@ def _run_label_test(allowed_labels: Dict[str, str],\n r = Resources(cloud=cloud, labels=l)\n with pytest.raises(ValueError):\n r.validate()\n- assert False, (f'Resources were initialized with '\n- f'invalid label {invalid_label}={value}')\n+ assert False, (f'Resources {r.to_yaml_config()} were initialized '\n+ f'with invalid label {invalid_label}={value} but no '\n+ 'error was raised.')\n \n \n def test_gcp_labels_resources():\n@@ -208,3 +209,283 @@ def test_aws_make_deploy_variables(*mocks) -> None:\n dryrun=True)\n assert config == expected_config, ('unexpected resource '\n 'variables generated')\n+\n+\[email protected](['resources_kwargs', 'expected_yaml_config'], [\n+ ({\n+ 'infra': '*/*/us-east-1b',\n+ 'accelerators': 'A10'\n+ }, {\n+ 'infra': '*/*/us-east-1b',\n+ 'accelerators': {\n+ 'A10': 1\n+ },\n+ 'disk_size': 256,\n+ }),\n+ ({\n+ 'infra': 'gcp/*/us-east1-b',\n+ 'accelerators': 'A10:8',\n+ 'labels': {\n+ 'key': 'value'\n+ }\n+ }, {\n+ 'infra': 'gcp/*/us-east1-b',\n+ 'accelerators': {\n+ 'A10': 8\n+ },\n+ 'labels': {\n+ 'key': 'value'\n+ },\n+ 'disk_size': 256,\n+ }),\n+])\n+def test_to_yaml_and_load(resources_kwargs, expected_yaml_config):\n+ r = Resources(**resources_kwargs)\n+ yaml_config = r.to_yaml_config()\n+ assert yaml_config == expected_yaml_config\n+\n+ loaded_r = list(Resources.from_yaml_config(yaml_config))[0]\n+ assert loaded_r.cloud == r.cloud\n+ assert loaded_r.region == r.region\n+ assert loaded_r.zone == r.zone\n+ original_accelerators = r.accelerators\n+ assert loaded_r.accelerators == original_accelerators\n+ assert original_accelerators == r.accelerators\n+ assert loaded_r.labels == r.labels\n+\n+\n+def test_resources_any_of():\n+ \"\"\"Test Resources creation with any_of option.\"\"\"\n+ # Test any_of with different resources options\n+ config = {\n+ 'any_of': [\n+ {\n+ 'cpus': 8,\n+ 'memory': 16\n+ },\n+ {\n+ 'cpus': 4,\n+ 'memory': 32\n+ },\n+ {\n+ 'accelerators': 'V100:1'\n+ },\n+ ]\n+ }\n+ resources_set = Resources.from_yaml_config(config)\n+\n+ # Verify it returns a set of resources\n+ assert isinstance(resources_set, set)\n+ assert len(resources_set) == 3\n+\n+ # Validate the resources options are correctly created\n+ resources_list = list(resources_set)\n+\n+ # Find resources by properties (order may not be preserved)\n+ r_cpus8 = next((r for r in resources_list if r.cpus == '8'), None)\n+ r_cpus4 = next((r for r in resources_list if r.cpus == '4'), None)\n+ r_gpu = next((r for r in resources_list if r.accelerators is not None),\n+ None)\n+\n+ assert r_cpus8 is not None\n+ assert r_cpus8.memory == '16'\n+\n+ assert r_cpus4 is not None\n+ assert r_cpus4.memory == '32'\n+\n+ 
assert r_gpu is not None\n+ assert r_gpu.accelerators == {'V100': 1}\n+\n+\n+def test_resources_ordered():\n+ \"\"\"Test Resources creation with ordered option.\"\"\"\n+ # Test ordered with different resources options\n+ config = {\n+ 'ordered': [\n+ {\n+ 'infra': 'gcp',\n+ 'accelerators': 'A100:8'\n+ },\n+ {\n+ 'infra': 'aws',\n+ 'accelerators': 'V100:8'\n+ },\n+ {\n+ 'accelerators': 'T4:8'\n+ },\n+ ]\n+ }\n+ resources_list = Resources.from_yaml_config(config)\n+\n+ # Verify it returns a list of resources\n+ assert isinstance(resources_list, list)\n+ assert len(resources_list) == 3\n+\n+ # Ordered resources should preserve order\n+ assert resources_list[0].infra.cloud.lower() == 'gcp'\n+ assert resources_list[0].accelerators == {'A100': 8}\n+\n+ assert resources_list[1].infra.cloud.lower() == 'aws'\n+ assert resources_list[1].accelerators == {'V100': 8}\n+\n+ assert resources_list[2].accelerators == {'T4': 8}\n+\n+\n+def test_resources_any_of_spot_flag():\n+ \"\"\"Test Resources with any_of option including spot flag variations.\"\"\"\n+ config = {\n+ 'accelerators': 'A100:8',\n+ 'any_of': [{\n+ 'use_spot': True\n+ }, {\n+ 'use_spot': False\n+ }]\n+ }\n+ resources_set = Resources.from_yaml_config(config)\n+\n+ # Verify it returns a set of resources\n+ assert isinstance(resources_set, set)\n+ assert len(resources_set) == 2\n+\n+ # Find spot and on-demand resources\n+ resources_list = list(resources_set)\n+ r_spot = next((r for r in resources_list if r.use_spot), None)\n+ r_ondemand = next((r for r in resources_list if not r.use_spot), None)\n+\n+ assert r_spot is not None\n+ assert r_spot.accelerators == {'A100': 8}\n+ assert r_spot.use_spot is True\n+\n+ assert r_ondemand is not None\n+ assert r_ondemand.accelerators == {'A100': 8}\n+ assert r_ondemand.use_spot is False\n+\n+\n+def test_resources_ordered_preference():\n+ \"\"\"Test Resources creation with ordered preference correctly preserves order.\"\"\"\n+ config = {\n+ 'ordered': [\n+ {\n+ 'infra': 'aws/us-east-1',\n+ 'accelerators': 'A100:8'\n+ },\n+ {\n+ 'infra': 'gcp/us-central1',\n+ 'accelerators': 'A100:8'\n+ },\n+ {\n+ 'infra': 'azure/eastus',\n+ 'accelerators': 'A100:8'\n+ },\n+ ]\n+ }\n+ resources_list = Resources.from_yaml_config(config)\n+\n+ # Verify order matches the input order\n+ assert resources_list[0].infra.cloud.lower() == 'aws'\n+ assert resources_list[0].infra.region == 'us-east-1'\n+\n+ assert resources_list[1].infra.cloud.lower() == 'gcp'\n+ assert resources_list[1].infra.region == 'us-central1'\n+\n+ assert resources_list[2].infra.cloud.lower() == 'azure'\n+ assert resources_list[2].infra.region == 'eastus'\n+\n+\n+def test_resources_any_of_ordered_exclusive():\n+ \"\"\"Test that Resources raises ValueError if both any_of and ordered are specified.\"\"\"\n+ config = {'any_of': [{'cpus': 8}], 'ordered': [{'cpus': 4}]}\n+\n+ # Should raise ValueError because both any_of and ordered are specified\n+ with pytest.raises(ValueError,\n+ match='Cannot specify both \"any_of\" and \"ordered\"'):\n+ Resources.from_yaml_config(config)\n+\n+\n+def test_resources_any_of_with_base_infra():\n+ \"\"\"Test Resources creation with any_of option and base infra.\"\"\"\n+ # Test any_of with base infra and additional infra specifications\n+ config = {\n+ 'infra': 'aws', # Base infra\n+ 'cpus': 8,\n+ 'any_of': [\n+ {\n+ 'infra': 'aws/us-east-1'\n+ }, # Override with specific region\n+ {\n+ 'infra': 'aws/us-west-2'\n+ }, # Different region\n+ {\n+ 'infra': 'gcp/us-central1'\n+ }, # Different cloud\n+ ]\n+ }\n+ resources_set = 
Resources.from_yaml_config(config)\n+\n+ # Verify it returns a set of resources\n+ assert isinstance(resources_set, set)\n+ assert len(resources_set) == 3\n+\n+ # Validate the resources are correctly created with proper infra\n+ resources_list = list(resources_set)\n+\n+ # All resources should have cpus=8 from the base config\n+ for r in resources_list:\n+ assert r.cpus == '8'\n+\n+ # Find resources by infra properties\n+ r_east = next((r for r in resources_list if r.infra.region == 'us-east-1'),\n+ None)\n+ r_west = next((r for r in resources_list if r.infra.region == 'us-west-2'),\n+ None)\n+ r_gcp = next((r for r in resources_list if r.infra.cloud.lower() == 'gcp'),\n+ None)\n+\n+ assert r_east is not None\n+ assert str(r_east.cloud).lower() == 'aws'\n+\n+ assert r_west is not None\n+ assert str(r_west.cloud).lower() == 'aws'\n+\n+ assert r_gcp is not None\n+ assert r_gcp.infra.region == 'us-central1'\n+\n+\n+def test_resources_ordered_with_base_infra():\n+ \"\"\"Test Resources creation with ordered option and base infra.\"\"\"\n+ # Test ordered with base infra and additional infra specifications\n+ config = {\n+ 'infra': 'azure', # Base infra\n+ 'accelerators': 'A100:8', # Base accelerator\n+ 'ordered': [\n+ {\n+ 'infra': 'gcp/us-central1'\n+ }, # Specific region in same cloud\n+ {\n+ 'infra': 'aws/us-east-1'\n+ }, # Different cloud\n+ {\n+ 'accelerators': 'T4:8'\n+ }, # Another cloud\n+ ]\n+ }\n+ resources_list = Resources.from_yaml_config(config)\n+\n+ # Verify it returns a list of resources with right length\n+ assert isinstance(resources_list, list)\n+ assert len(resources_list) == 3\n+\n+ # All resources should have A100:8 from the base config\n+ assert resources_list[0].accelerators == {'A100': 8}\n+ assert resources_list[1].accelerators == {'A100': 8}\n+ assert resources_list[2].accelerators == {'T4': 8}\n+\n+ # Ordered resources should preserve order and have correct infra\n+ assert str(resources_list[0].cloud).lower() == 'gcp'\n+ assert resources_list[0].region == 'us-central1'\n+\n+ assert str(resources_list[1].cloud).lower() == 'aws'\n+ assert resources_list[1].region == 'us-east-1'\n+\n+ assert str(resources_list[2].cloud).lower() == 'azure'\n+ assert resources_list[2].region is None\ndiff --git a/tests/unit_tests/test_sky/utils/test_cli_utils.py b/tests/unit_tests/test_sky/utils/test_cli_utils.py\nnew file mode 100644\nindex 00000000000..c1c302e634e\n--- /dev/null\n+++ b/tests/unit_tests/test_sky/utils/test_cli_utils.py\n@@ -0,0 +1,378 @@\n+\"\"\"Tests for CLI utilities.\n+\n+This module contains tests for the CLI utilities in sky.utils.cli_utils.\n+\"\"\"\n+import time\n+\n+import pytest\n+\n+import sky\n+from sky import backends\n+from sky.resources import Resources\n+from sky.utils import status_lib\n+from sky.utils.cli_utils import status_utils\n+\n+\n+def test_status_table_format():\n+ \"\"\"Test the status table format.\"\"\"\n+ # Test AWS case\n+ mock_resources = Resources(infra='aws/us-east-1',\n+ instance_type='m6i.2xlarge')\n+ mock_handle = backends.CloudVmRayResourceHandle(\n+ cluster_name='test-cluster',\n+ cluster_name_on_cloud='test-cluster-cloud',\n+ cluster_yaml=None,\n+ launched_nodes=1,\n+ launched_resources=mock_resources)\n+ mock_record = {\n+ 'name': 'test-cluster',\n+ 'handle': mock_handle,\n+ 'launched_at': int(time.time()) - 3600, # 1 hour ago\n+ 'status': status_lib.ClusterStatus.UP,\n+ 'autostop': 300, # 5 minutes\n+ 'to_down': False,\n+ }\n+\n+ # Test the infra format\n+ infra_str = status_utils._get_infra(mock_record)\n+ assert infra_str == 'AWS 
(us-east-1)'\n+\n+ # Test the resources format\n+ resources_str = status_utils._get_resources(mock_record)\n+ assert resources_str == '1x(cpus=8, mem=32, type=m6i.2xlarge, ...)'\n+\n+ # Test Kubernetes case\n+ mock_k8s_resources = Resources(infra='k8s/my-ctx', cpus='2+', memory='4+')\n+ mock_k8s_handle = backends.CloudVmRayResourceHandle(\n+ cluster_name='test-k8s-cluster',\n+ cluster_name_on_cloud='test-k8s-cluster-cloud',\n+ cluster_yaml=None,\n+ launched_nodes=2,\n+ launched_resources=mock_k8s_resources)\n+ mock_k8s_record = {\n+ 'name': 'test-k8s-cluster',\n+ 'handle': mock_k8s_handle,\n+ 'launched_at': int(time.time()) - 3600, # 1 hour ago\n+ 'status': status_lib.ClusterStatus.UP,\n+ 'autostop': -1, # No autostop\n+ 'to_down': False,\n+ 'resources_str': '2x (...)',\n+ }\n+\n+ # Test K8S infra format\n+ k8s_infra_str = status_utils._get_infra(mock_k8s_record)\n+ assert k8s_infra_str == 'Kubernetes (my-ctx)'\n+\n+ # Test K8S resources format\n+ k8s_resources_str = status_utils._get_resources(mock_k8s_record)\n+ assert k8s_resources_str == '2x (...)'\n+\n+ # For test purposes, override _get_resources to avoid trying to call\n+ # resources_utils.get_readable_resources_repr on a Resources object\n+ orig_get_resources = status_utils._get_resources\n+\n+ def mock_get_resources(cluster_record, truncate=True):\n+ return cluster_record.get('resources_str', '-')\n+\n+ status_utils._get_resources = mock_get_resources\n+\n+ try:\n+ # Test SSH case\n+ mock_ssh_handle = backends.CloudVmRayResourceHandle(\n+ cluster_name='test-ssh-cluster',\n+ cluster_name_on_cloud='test-ssh-cluster-cloud',\n+ cluster_yaml=None,\n+ launched_nodes=1,\n+ launched_resources=None)\n+ mock_ssh_record = {\n+ 'name': 'test-ssh-cluster',\n+ 'handle': mock_ssh_handle,\n+ 'launched_at': int(time.time()) - 3600, # 1 hour ago\n+ 'status': status_lib.ClusterStatus.UP,\n+ 'autostop': -1, # No autostop\n+ 'to_down': False,\n+ 'resources_str': '1x (...)',\n+ 'infra': 'SSH/my-tobi-box',\n+ }\n+\n+ # Test SSH infra format\n+ ssh_infra_str = status_utils._get_infra(mock_ssh_record)\n+ assert ssh_infra_str == 'SSH/my-tobi-box'\n+\n+ # Test SSH resources format\n+ ssh_resources_str = status_utils._get_resources(mock_ssh_record)\n+ assert ssh_resources_str == '1x (...)'\n+ finally:\n+ # Restore original function\n+ status_utils._get_resources = orig_get_resources\n+\n+\n+def test_show_status_table():\n+ \"\"\"Test the full status table output.\"\"\"\n+ mock_resources = Resources(infra='aws/us-east-1',\n+ instance_type='m6i.2xlarge')\n+ mock_handle = backends.CloudVmRayResourceHandle(\n+ cluster_name='test-cluster',\n+ cluster_name_on_cloud='test-cluster-cloud',\n+ cluster_yaml=None,\n+ launched_nodes=1,\n+ launched_resources=mock_resources)\n+\n+ # Test different cluster statuses\n+ statuses = [\n+ status_lib.ClusterStatus.UP,\n+ status_lib.ClusterStatus.INIT,\n+ status_lib.ClusterStatus.STOPPED,\n+ ]\n+\n+ for status in statuses:\n+ mock_record = {\n+ 'name': 'test-cluster',\n+ 'handle': mock_handle,\n+ 'launched_at': int(time.time()) - 3600, # 1 hour ago\n+ 'status': status,\n+ 'autostop': 300, # 5 minutes\n+ 'to_down': False,\n+ 'last_use': 'sky launch test.yaml',\n+ 'user_name': 'test_user',\n+ 'user_hash': 'abc123',\n+ 'head_ip': '1.2.3.4',\n+ 'resources_str': '1x(cpus=8, mem=32, type=m6i.2xlarge, ...)',\n+ 'resources_str_full': ('1x(cpus=8, mem=32, type=m6i.2xlarge, '\n+ 'disk=50)'),\n+ }\n+\n+ # Test basic table\n+ num_pending = status_utils.show_status_table([mock_record],\n+ show_all=False,\n+ show_user=False)\n+ assert 
num_pending == (1 if status != status_lib.ClusterStatus.STOPPED\n+ else 0)\n+\n+ # Test with user info\n+ num_pending = status_utils.show_status_table([mock_record],\n+ show_all=False,\n+ show_user=True)\n+ assert num_pending == (1 if status != status_lib.ClusterStatus.STOPPED\n+ else 0)\n+\n+ # Test with show_all\n+ num_pending = status_utils.show_status_table([mock_record],\n+ show_all=True,\n+ show_user=True)\n+ assert num_pending == (1 if status != status_lib.ClusterStatus.STOPPED\n+ else 0)\n+\n+ # Test with query_clusters\n+ num_pending = status_utils.show_status_table(\n+ [mock_record],\n+ show_all=False,\n+ show_user=False,\n+ query_clusters=['test-cluster'])\n+ assert num_pending == (1 if status != status_lib.ClusterStatus.STOPPED\n+ else 0)\n+\n+ # Test with non-existent query_clusters\n+ num_pending = status_utils.show_status_table(\n+ [mock_record],\n+ show_all=False,\n+ show_user=False,\n+ query_clusters=['non-existent'])\n+ assert num_pending == (1 if status != status_lib.ClusterStatus.STOPPED\n+ else 0)\n+\n+\n+def test_get_command():\n+ \"\"\"Test command display in status table.\"\"\"\n+ mock_record = {\n+ 'last_use': 'sky launch test.yaml --env FOO=bar',\n+ }\n+\n+ # Test normal command\n+ cmd_str = status_utils._get_command(mock_record)\n+ assert cmd_str == 'sky launch test.yaml --env...'\n+\n+ # Test command without truncation\n+ cmd_str = status_utils._get_command(mock_record, truncate=False)\n+ assert cmd_str == 'sky launch test.yaml --env FOO=bar'\n+\n+ # Test short command\n+ mock_record['last_use'] = 'sky status'\n+ cmd_str = status_utils._get_command(mock_record)\n+ assert cmd_str == 'sky status'\n+\n+\n+def test_get_autostop():\n+ \"\"\"Test autostop display in status table.\"\"\"\n+ mock_record = {\n+ 'autostop': 300, # 5 minutes\n+ 'to_down': False,\n+ }\n+\n+ # Test normal autostop\n+ autostop_str = status_utils._get_autostop(mock_record)\n+ assert autostop_str == '300m'\n+\n+ # Test autostop with to_down\n+ mock_record['to_down'] = True\n+ autostop_str = status_utils._get_autostop(mock_record)\n+ assert autostop_str == '300m (down)'\n+\n+ # Test no autostop\n+ mock_record['autostop'] = -1\n+ autostop_str = status_utils._get_autostop(mock_record)\n+ assert autostop_str == '(down)'\n+\n+ # Test no autostop and no to_down\n+ mock_record['to_down'] = False\n+ autostop_str = status_utils._get_autostop(mock_record)\n+ assert autostop_str == '-'\n+\n+\n+def test_get_resources():\n+ \"\"\"Test resources display in status table.\"\"\"\n+ mock_resources = Resources(infra='aws/us-east-1',\n+ instance_type='m6i.2xlarge')\n+ mock_handle = backends.CloudVmRayResourceHandle(\n+ cluster_name='test-cluster',\n+ cluster_name_on_cloud='test-cluster-cloud',\n+ cluster_yaml=None,\n+ launched_nodes=1,\n+ launched_resources=mock_resources)\n+ mock_record = {\n+ 'handle': mock_handle,\n+ 'resources_str': '1x(cpus=8, mem=32, type=m6i.2xlarge, ...)',\n+ 'resources_str_full': '1x(cpus=8, mem=32, type=m6i.2xlarge, disk=50)',\n+ }\n+\n+ # Test normal resources\n+ resources_str = status_utils._get_resources(mock_record)\n+ assert resources_str == '1x(cpus=8, mem=32, type=m6i.2xlarge, ...)'\n+\n+ # Test full resources\n+ resources_str = status_utils._get_resources(mock_record, truncate=False)\n+ assert resources_str == '1x(cpus=8, mem=32, type=m6i.2xlarge, disk=50)'\n+\n+ # Test no resources\n+ mock_record['handle'].launched_resources = None\n+ resources_str = status_utils._get_resources(mock_record)\n+ assert resources_str == '-'\n+\n+\n+def test_get_resources_gpu():\n+ \"\"\"Test 
resources display for clusters with GPUs.\"\"\"\n+ # Test AWS with GPU resources\n+ mock_resources_aws_gpu = Resources(infra='aws/us-east-1',\n+ instance_type='p3.2xlarge',\n+ accelerators='V100')\n+ mock_handle_aws_gpu = backends.CloudVmRayResourceHandle(\n+ cluster_name='test-gpu-cluster',\n+ cluster_name_on_cloud='test-gpu-cluster-cloud',\n+ cluster_yaml=None,\n+ launched_nodes=1,\n+ launched_resources=mock_resources_aws_gpu)\n+ mock_record_aws_gpu = {\n+ 'handle': mock_handle_aws_gpu,\n+ 'resources_str': '1x(V100:1, cpus=8, mem=61, ...)',\n+ 'resources_str_full': '1x(V100:1, cpus=8, mem=61, disk=50)',\n+ }\n+\n+ # Test GPU resources\n+ resources_str = status_utils._get_resources(mock_record_aws_gpu)\n+ assert resources_str == '1x(V100:1, cpus=8, mem=61, ...)'\n+\n+ # Test full GPU resources\n+ resources_str = status_utils._get_resources(mock_record_aws_gpu,\n+ truncate=False)\n+ assert resources_str == '1x(V100:1, cpus=8, mem=61, disk=50)'\n+\n+ # Test GCP with multiple GPUs\n+ mock_resources_gcp_multi_gpu = Resources(infra='gcp/us-central1',\n+ instance_type='a2-highgpu-4g',\n+ accelerators='A100:4')\n+ mock_handle_gcp_multi_gpu = backends.CloudVmRayResourceHandle(\n+ cluster_name='test-gcp-multi-gpu',\n+ cluster_name_on_cloud='test-gcp-multi-gpu-cloud',\n+ cluster_yaml=None,\n+ launched_nodes=2,\n+ launched_resources=mock_resources_gcp_multi_gpu)\n+ mock_record_gcp_multi_gpu = {\n+ 'handle': mock_handle_gcp_multi_gpu,\n+ 'resources_str': '2x(gpus=A100:4, cpus=12, mem=85, ...)',\n+ 'resources_str_full': '2x(gpus=A100:4, cpus=12, mem=85, disk=50)',\n+ }\n+\n+ # Test multiple GPU resources\n+ resources_str = status_utils._get_resources(mock_record_gcp_multi_gpu)\n+ assert resources_str == '2x(gpus=A100:4, cpus=12, mem=85, ...)'\n+\n+\n+def test_get_resources_kubernetes():\n+ \"\"\"Test resources display for Kubernetes clusters.\"\"\"\n+ # Test Kubernetes with CPU resources\n+ mock_resources_k8s_cpu = Resources(infra='k8s/my-cluster-ctx',\n+ cpus=4,\n+ memory=16)\n+ mock_handle_k8s_cpu = backends.CloudVmRayResourceHandle(\n+ cluster_name='test-k8s-cluster',\n+ cluster_name_on_cloud='test-k8s-cluster-cloud',\n+ cluster_yaml=None,\n+ launched_nodes=1,\n+ launched_resources=mock_resources_k8s_cpu)\n+ mock_record_k8s_cpu = {\n+ 'handle': mock_handle_k8s_cpu,\n+ 'resources_str': '1x(cpus=4, mem=16, ...)',\n+ 'resources_str_full': '1x(cpus=4, mem=16, disk=50)',\n+ }\n+\n+ # Test K8s CPU resources\n+ resources_str = status_utils._get_resources(mock_record_k8s_cpu)\n+ assert resources_str == '1x(cpus=4, mem=16, ...)'\n+\n+ # Test Kubernetes with GPU resources\n+ mock_resources_k8s_gpu = Resources(infra='k8s/gpu-cluster-ctx',\n+ cpus=8,\n+ memory=32,\n+ accelerators='A100:2')\n+ mock_handle_k8s_gpu = backends.CloudVmRayResourceHandle(\n+ cluster_name='test-k8s-gpu-cluster',\n+ cluster_name_on_cloud='test-k8s-gpu-cluster-cloud',\n+ cluster_yaml=None,\n+ launched_nodes=2,\n+ launched_resources=mock_resources_k8s_gpu)\n+ mock_record_k8s_gpu = {\n+ 'handle': mock_handle_k8s_gpu,\n+ 'resources_str': '2x(gpus=A100:2, cpus=8, mem=32, ...)',\n+ 'resources_str_full': '2x(gpus=A100:2, cpus=8, mem=32, disk=50)',\n+ }\n+\n+ # Test K8s GPU resources\n+ resources_str = status_utils._get_resources(mock_record_k8s_gpu)\n+ assert resources_str == '2x(gpus=A100:2, cpus=8, mem=32, ...)'\n+\n+ # Test full K8s GPU resources\n+ resources_str = status_utils._get_resources(mock_record_k8s_gpu,\n+ truncate=False)\n+ assert resources_str == '2x(gpus=A100:2, cpus=8, mem=32, disk=50)'\n+\n+ # Test K8s with TPU resources\n+ 
mock_resources_k8s_tpu = Resources(infra='k8s/gke-tpu-cluster',\n+ cpus=8,\n+ memory=32,\n+ accelerators='tpu-v4-8')\n+ mock_handle_k8s_tpu = backends.CloudVmRayResourceHandle(\n+ cluster_name='test-k8s-tpu-cluster',\n+ cluster_name_on_cloud='test-k8s-tpu-cluster-cloud',\n+ cluster_yaml=None,\n+ launched_nodes=1,\n+ launched_resources=mock_resources_k8s_tpu)\n+ mock_record_k8s_tpu = {\n+ 'handle': mock_handle_k8s_tpu,\n+ 'resources_str': '1x(gpus=tpu-v4-8:1, cpus=8, mem=32, ...)',\n+ 'resources_str_full': ('1x(gpus=tpu-v4-8:1, cpus=8, mem=32, '\n+ 'disk=50)'),\n+ }\n+\n+ # Test K8s TPU resources\n+ resources_str = status_utils._get_resources(mock_record_k8s_tpu)\n+ assert resources_str == '1x(gpus=tpu-v4-8:1, cpus=8, mem=32, ...)'\ndiff --git a/tests/unit_tests/test_common_utils.py b/tests/unit_tests/test_sky/utils/test_common_utils.py\nsimilarity index 79%\nrename from tests/unit_tests/test_common_utils.py\nrename to tests/unit_tests/test_sky/utils/test_common_utils.py\nindex f8a0a17498a..d42d908c579 100644\n--- a/tests/unit_tests/test_common_utils.py\n+++ b/tests/unit_tests/test_sky/utils/test_common_utils.py\n@@ -9,6 +9,64 @@\n MOCKED_USER_HASH = 'ab12cd34'\n \n \n+class TestTruncateLongString:\n+\n+ def test_no_truncation_needed(self):\n+ s = \"short string\"\n+ result = common_utils.truncate_long_string(s, 15)\n+ assert result == s\n+\n+ def test_end_truncation(self):\n+ s = \"this is a very long string that needs truncation\"\n+ result = common_utils.truncate_long_string(s, 20)\n+ assert len(result) <= 20 + 3 # +3 for '...'\n+ assert result.endswith('...')\n+ assert result.startswith('this is a very')\n+\n+ def test_middle_truncation(self):\n+ s = \"us-west-2-availability-zone-1\"\n+ result = common_utils.truncate_long_string(s, 20, truncate_middle=True)\n+ assert len(result) <= 20\n+ assert '...' in result\n+ assert result.startswith('us-west')\n+ assert result.endswith('zone-1')\n+\n+ def test_middle_truncation_odd_length(self):\n+ s = \"us-west-2-availability-zone-1\"\n+ result = common_utils.truncate_long_string(s, 15, truncate_middle=True)\n+ assert len(result) <= 15\n+ assert '...' 
in result\n+ assert result.startswith('us-w')\n+ assert result.endswith('ne-1')\n+\n+ def test_middle_truncation_very_short(self):\n+ s = \"us-west-2-availability-zone-1\"\n+ result = common_utils.truncate_long_string(s, 3, truncate_middle=True)\n+ assert result == '...'\n+\n+ def test_empty_string(self):\n+ assert common_utils.truncate_long_string('', 10) == ''\n+\n+ def test_exact_length_no_truncation(self):\n+ assert common_utils.truncate_long_string(\n+ 'abcde', 5, truncate_middle=True) == 'abcde'\n+\n+ def test_one_less_than_length(self):\n+ assert common_utils.truncate_long_string('abcde',\n+ 4,\n+ truncate_middle=True) == 'a...'\n+\n+ def test_middle_truncation_even_length(self):\n+ assert common_utils.truncate_long_string(\n+ 'abcdefghijklmnopqrstuvwxyz', 10,\n+ truncate_middle=True) == 'abcd...xyz'\n+\n+ def test_middle_truncation_odd_max_length(self):\n+ assert common_utils.truncate_long_string(\n+ 'abcdefghijklmnopqrstuvwxyz', 11,\n+ truncate_middle=True) == 'abcd...wxyz'\n+\n+\n class TestCheckClusterNameIsValid:\n \n def test_check(self):\ndiff --git a/tests/unit_tests/test_sky/utils/test_infra_utils.py b/tests/unit_tests/test_sky/utils/test_infra_utils.py\nnew file mode 100644\nindex 00000000000..c164a24ddfc\n--- /dev/null\n+++ b/tests/unit_tests/test_sky/utils/test_infra_utils.py\n@@ -0,0 +1,163 @@\n+\"\"\"Tests for infra_utils.py\"\"\"\n+import unittest\n+\n+from sky.utils import infra_utils\n+\n+\n+class TestInfraUtils(unittest.TestCase):\n+ \"\"\"Tests for infra_utils.py\"\"\"\n+\n+ def test_from_str(self):\n+ \"\"\"Test the from_str function with various inputs.\"\"\"\n+ test_cases = [\n+ # Format: (infra_str, expected_cloud, expected_region, expected_zone)\n+ ('aws/us-east-1', 'aws', 'us-east-1', None),\n+ ('aws/us-east-1/us-east-1a', 'aws', 'us-east-1', 'us-east-1a'),\n+ ('gcp/us-central1', 'gcp', 'us-central1', None),\n+ ('k8s/my-cluster-ctx', 'kubernetes', 'my-cluster-ctx', None),\n+ ('kubernetes/my-cluster-ctx', 'kubernetes', 'my-cluster-ctx', None),\n+ # Test Kubernetes context with slashes\n+ ('k8s/my/cluster/ctx', 'kubernetes', 'my/cluster/ctx', None),\n+ # Test AWS with empty zone\n+ ('aws/us-east-1/', 'aws', 'us-east-1', None),\n+ # Test with just cloud\n+ ('aws', 'aws', None, None),\n+ # Test with asterisk\n+ ('*/us-east-1', None, 'us-east-1', None),\n+ ('aws/*/us-east-1a', 'aws', None, 'us-east-1a'),\n+ ('aws/*', 'aws', None, None),\n+ ('*/*/us-east-1a', None, None, 'us-east-1a'),\n+ (None, None, None, None),\n+ ('*', None, None, None),\n+ # Test case sensitivity\n+ ('AWS/US-EAST-1', 'aws', 'US-EAST-1', None),\n+ ('GCP/US-CENTRAL1', 'gcp', 'US-CENTRAL1', None),\n+ ('K8S/MY-CLUSTER', 'kubernetes', 'MY-CLUSTER', None),\n+ # Test whitespace handling\n+ (' aws/us-east-1 ', 'aws', 'us-east-1', None),\n+ (' aws / us-east-1 / us-east-1a ', 'aws', 'us-east-1',\n+ 'us-east-1a'),\n+ # Test local and lambda clouds\n+ ('local', 'local', None, None),\n+ ('lambda', 'lambda', None, None),\n+ ]\n+\n+ for infra_str, expected_cloud, expected_region, expected_zone in test_cases:\n+ info = infra_utils.InfraInfo.from_str(infra_str)\n+ cloud_str = info.cloud\n+\n+ self.assertEqual(\n+ cloud_str, expected_cloud,\n+ f'Failed on {infra_str}: Expected cloud={expected_cloud}, got {cloud_str}'\n+ )\n+ self.assertEqual(\n+ info.region, expected_region,\n+ f'Failed on {infra_str}: Expected region={expected_region}, got {info.region}'\n+ )\n+ self.assertEqual(\n+ info.zone, expected_zone,\n+ f'Failed on {infra_str}: Expected zone={expected_zone}, got {info.zone}'\n+ )\n+\n+ def 
test_from_str_errors(self):\n+ \"\"\"Test the from_str function with invalid inputs.\"\"\"\n+ error_test_cases = [\n+ # Too many segments\n+ 'aws/us-east-1/us-east-1a/extra',\n+ # Invalid format\n+ 'aws//us-east-1',\n+ # Just slashes\n+ '///',\n+ # Multiple consecutive slashes\n+ 'aws///us-east-1',\n+ ]\n+\n+ for infra_str in error_test_cases:\n+ with self.assertRaises((ValueError, TypeError),\n+ msg=f'Expected error for {infra_str!r}'):\n+ infra_utils.InfraInfo.from_str(infra_str)\n+\n+ def test_to_str(self):\n+ \"\"\"Test the to_str function with various inputs.\"\"\"\n+ test_cases = [\n+ # Format: (cloud, region, zone, expected)\n+ ('aws', 'us-east-1', None, 'aws/us-east-1'),\n+ ('aws', 'us-east-1', 'us-east-1a', 'aws/us-east-1/us-east-1a'),\n+ ('gcp', 'us-central1', None, 'gcp/us-central1'),\n+ ('kubernetes', 'my-cluster-ctx', None, 'kubernetes/my-cluster-ctx'),\n+ # Test with slashes in Kubernetes context\n+ ('kubernetes', 'my/cluster/ctx', None, 'kubernetes/my/cluster/ctx'),\n+ # Test with zone in Kubernetes\n+ ('kubernetes', 'my-cluster-ctx', 'some-zone',\n+ 'kubernetes/my-cluster-ctx/some-zone'),\n+ # Test with just cloud\n+ ('aws', None, None, 'aws'),\n+ # Test with None cloud\n+ (None, 'us-east-1', None, '*/us-east-1'),\n+ # Additional test cases for simplified implementation\n+ ('aws', '*', '*', 'aws'),\n+ ('gcp', 'us-central1', '*', 'gcp/us-central1'),\n+ ('aws', '*', 'us-east-1a', 'aws/*/us-east-1a'),\n+ (None, None, None, None),\n+ ('*', '*', '*', None),\n+ ('*', 'us-east-1', None, '*/us-east-1'),\n+ # Test case sensitivity preservation\n+ ('aws', 'US-EAST-1', 'US-EAST-1A', 'aws/US-EAST-1/US-EAST-1A'),\n+ # Test local and lambda clouds\n+ ('local', None, None, 'local'),\n+ ('lambda', 'region-name', None, 'lambda/region-name'),\n+ ]\n+\n+ for cloud, region, zone, expected in test_cases:\n+ result = infra_utils.InfraInfo(cloud, region, zone).to_str()\n+ self.assertEqual(result, expected,\n+ f'Failed: Expected {expected}, got {result}')\n+\n+ def test_formatted_str(self):\n+ \"\"\"Test the formatted_str function with various inputs.\"\"\"\n+ test_cases = [\n+ # Format: (cloud, region, zone, truncate, expected)\n+ ('aws', 'us-east-1', None, True, 'aws (us-east-1)'),\n+ ('aws', 'us-east-1', 'us-east-1a', True, 'aws (us-east-1a)'),\n+ ('gcp', 'us-central1', None, True, 'gcp (us-central1)'),\n+ ('kubernetes', 'my-cluster-ctx', None, True,\n+ 'kubernetes (my-cluster-ctx)'),\n+ # Test with slashes in Kubernetes context\n+ ('kubernetes', 'my/cluster/ctx', None, True,\n+ 'kubernetes (my/cluster/ctx)'),\n+ # Test with just cloud\n+ ('aws', None, None, True, 'aws'),\n+ # Test with None cloud\n+ (None, 'us-east-1', None, True, '-'),\n+ # Test with long region/zone (truncation)\n+ ('aws', 'us-east-1-very-long-region', None, True,\n+ 'aws (us-east-1-v...long-region)'),\n+ ('aws', 'us-east-1-very-very-very-long-region', None, True,\n+ 'aws (us-east-1-v...long-region)'),\n+ ('aws', 'us-east-1-very-long-region', None, False,\n+ 'aws (us-east-1-very-long-region)'),\n+ # Test with asterisk\n+ ('*', '*', '*', True, '-'),\n+ ('aws', '*', '*', True, 'aws'),\n+ ('aws', '*', 'us-east-1a', True, 'aws (us-east-1a)'),\n+ ('*', 'us-east-1', None, True, '-'),\n+ # Test truncation boundary cases\n+ ('aws', 'x' * 25, None, True, 'aws (' + 'x' * 25 + ')'),\n+ ('aws', 'x' * 26, None, True,\n+ 'aws (' + 'x' * 11 + '...' 
+ 'x' * 11 + ')'),\n+ ('aws', 'x' * 24, None, True, 'aws (' + 'x' * 24 + ')'),\n+ # Test with empty strings\n+ ('aws', '', None, True, 'aws'),\n+ ('aws', '', '', True, 'aws'),\n+ # Test local and lambda clouds\n+ ('local', None, None, True, 'local'),\n+ ('lambda', 'region-name', None, True, 'lambda (region-name)'),\n+ ]\n+\n+ for cloud, region, zone, truncate, expected in test_cases:\n+ result = infra_utils.InfraInfo(\n+ cloud, region, zone).formatted_str(truncate=truncate)\n+ self.assertEqual(\n+ result, expected, f'Failed: Expected {expected}, got {result}, '\n+ f'cloud={cloud}, region={region}, zone={zone}, '\n+ f'truncate={truncate}')\ndiff --git a/tests/unit_tests/test_sky/utils/test_schemas.py b/tests/unit_tests/test_sky/utils/test_schemas.py\nnew file mode 100644\nindex 00000000000..39c5bb4f2a1\n--- /dev/null\n+++ b/tests/unit_tests/test_sky/utils/test_schemas.py\n@@ -0,0 +1,113 @@\n+\"\"\"Tests for schemas.py\"\"\"\n+import unittest\n+\n+import jsonschema\n+\n+from sky.utils import schemas\n+\n+\n+class TestResourcesSchema(unittest.TestCase):\n+ \"\"\"Tests for the resources schema in schemas.py\"\"\"\n+\n+ def test_valid_infra_configs(self):\n+ \"\"\"Test validation of valid infra field configs.\"\"\"\n+ resources_schema = schemas.get_resources_schema()\n+\n+ # Valid infra configurations\n+ valid_infra_configs = [\n+ {\n+ 'infra': 'aws'\n+ },\n+ {\n+ 'infra': 'gcp'\n+ },\n+ {\n+ 'infra': 'azure'\n+ },\n+ {\n+ 'infra': 'kubernetes'\n+ },\n+ {\n+ 'infra': 'aws/us-east-1'\n+ },\n+ {\n+ 'infra': 'aws/us-east-1/us-east-1a'\n+ },\n+ {\n+ 'infra': 'gcp/us-central1'\n+ },\n+ {\n+ 'infra': 'k8s/my-cluster-ctx'\n+ },\n+ {\n+ 'infra': 'kubernetes/my/complex/context/path'\n+ },\n+ {\n+ 'infra': '*'\n+ },\n+ {\n+ 'infra': '*/us-east-1'\n+ },\n+ {\n+ 'infra': '*/us-east-1/us-east-1a'\n+ },\n+ {\n+ 'infra': '*/*'\n+ },\n+ {\n+ 'infra': '*/*/us-east-1a'\n+ },\n+ ]\n+\n+ for config in valid_infra_configs:\n+ # Should not raise an exception\n+ jsonschema.validate(instance=config, schema=resources_schema)\n+\n+ def test_invalid_infra_type(self):\n+ \"\"\"Test validation rejects invalid infra field types.\"\"\"\n+ resources_schema = schemas.get_resources_schema()\n+\n+ # Invalid infra configurations - wrong type\n+ invalid_type_config = {'infra': 123} # Not a string\n+ with self.assertRaises(jsonschema.exceptions.ValidationError):\n+ jsonschema.validate(instance=invalid_type_config,\n+ schema=resources_schema)\n+\n+ def test_invalid_infra_format(self):\n+ \"\"\"Test validation rejects invalid infra field formats.\"\"\"\n+ resources_schema = schemas.get_resources_schema()\n+\n+ # Invalid formats\n+ invalid_formats = [\n+ {\n+ 'infra': 'aws/'\n+ }, # Trailing slash without region\n+ {\n+ 'infra': 'aws//us-east-1a'\n+ }, # Empty region\n+ {\n+ 'infra': '/us-east-1'\n+ }, # Missing cloud\n+ {\n+ 'infra': 'aws/us-east-1/zone/extra'\n+ }, # Too many segments\n+ {\n+ 'infra': 'invalid-cloud/us-east-1'\n+ }, # Invalid cloud name\n+ {\n+ 'infra': 'invalid-cloud'\n+ }, # Invalid cloud name without region\n+ {\n+ 'infra': '**/us-east-1'\n+ }, # Multiple asterisks (invalid syntax)\n+ ]\n+\n+ for config in invalid_formats:\n+ with self.assertRaises(\n+ jsonschema.exceptions.ValidationError,\n+ msg=f\"Expected '{config['infra']}' to be rejected\"):\n+ jsonschema.validate(instance=config, schema=resources_schema)\n+\n+\n+if __name__ == \"__main__\":\n+ unittest.main()\n"
}
|
[
{
"diff_hunk": "@@ -85,6 +85,19 @@ def _get_single_resources_schema():\n 'zone': {\n 'type': 'string',\n },\n+ 'infra': {\n+ 'type': 'string',\n+ 'description':\n+ ('Infrastructure specification in format: '\n+ 'cloud[/region[/zone]]. Use \"*\" as a wildcard.'),\n+ # Create a pattern validator that uses a big regex to match all\n+ # valid formats. This allows us to maintain JSON Schema\n+ # validation while supporting all formats\n+ 'pattern':\n+ ('^(?:(?i:(' + '|'.join(list(service_catalog.ALL_CLOUDS)) +\n+ '))(?:/[^/]+(?:/[^/]+)?)?|\\\\*(?:/[^/]+(?:/[^/]+)?|/\\\\*'\n+ '(?:/[^/]+)?)?|(?i:k8s|kubernetes)/.+)$')",
"line": null,
"original_line": 99,
"original_start_line": null,
"path": "sky/utils/schemas.py",
"start_line": null,
"text": "@user1:\nHuman interpretation for future reference:\r\n- `cloud/x?/x?` format:\r\n - Cloud name\r\n - Optionally /region\r\n - Optionally /zone (must have /region)\r\n - (Note: technically I believe this could be simplified e.g. `(?:/[^/]){0,2}` but it's equivalent)\r\n- `*/x/x?`\r\n - */region - region is required\r\n - Optionally /zone\r\n- k8s/anything or kubernetes/anything\r\n - context name can include slashes\r\n - this will need to be updated to include ssh\n\n@user1:\nReview comment: `*/*` matches which seems unintentional.\r\nEither way, the `|/\\*'(?:/[^/]+)?` after `*` cloud seems completely redundant to the simpler `/[^/]`... match before\n\n@author:\nUpdated the comments and fixed the schemas : )"
},
{
"diff_hunk": "@@ -22,6 +22,6 @@\n \n with sky.Dag() as dag:\n t = sky.Task(run=run_command, setup=setup_cmd)\n- t.set_resources(sky.Resources(sky.AWS(), accelerators='V100'))\n+ t.set_resources(infra='aws', accelerators='V100')",
"line": null,
"original_line": 25,
"original_start_line": null,
"path": "examples/containerized_app.py",
"start_line": null,
"text": "@user1:\nSeems wrong - Task.set_resources isn't changed in the PR?\r\n\r\n```suggestion\r\n t.set_resources(sky.Resources(infra='aws', accelerators='V100'))\r\n```\n\n@author:\nOops, good catch! Added"
}
] |
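The review thread above debates the `infra` validation regex added to `sky/utils/schemas.py` — in particular that `*/*` is accepted and that the `/\*` alternative looks redundant next to the `/[^/]+` branch. Below is a minimal, illustrative sketch of how that pattern class behaves; the cloud list here is a hypothetical stand-in for `service_catalog.ALL_CLOUDS`, which the real schema joins in dynamically, and the pattern mirrors the one quoted in the review hunk rather than the author's later fix, so the final accepted set may differ.

```python
# Illustrative only: simplified stand-in for the pattern quoted in the
# review hunk above. CLOUDS approximates service_catalog.ALL_CLOUDS.
import re

CLOUDS = ['aws', 'gcp', 'azure', 'kubernetes']  # assumed subset
pattern = re.compile(
    '^(?:(?i:(' + '|'.join(CLOUDS) + '))(?:/[^/]+(?:/[^/]+)?)?'
    '|\\*(?:/[^/]+(?:/[^/]+)?|/\\*(?:/[^/]+)?)?'
    '|(?i:k8s|kubernetes)/.+)$')

for s in ('aws', 'aws/us-east-1/us-east-1a', '*/us-east-1',
          '*/*',             # matches, per the reviewer's observation
          'aws//us-east-1'): # empty region segment is rejected
    print(f'{s!r:30} -> {bool(pattern.match(s))}')
```

Note that the `test_schemas.py` added in the reviewed commit lists `'*/*'` among the valid configs while rejecting `'**/us-east-1'`, so the accepted wildcard forms appear intentional there, even if the redundant alternative was later cleaned up.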
eccdf0163e617bec70d8e7dfafe3cb2d5b88749a
|
diff --git a/docs/source/cloud-setup/cloud-permissions/aws.rst b/docs/source/cloud-setup/cloud-permissions/aws.rst
index 91e4eea2d5d..8a25b67756d 100644
--- a/docs/source/cloud-setup/cloud-permissions/aws.rst
+++ b/docs/source/cloud-setup/cloud-permissions/aws.rst
@@ -90,10 +90,10 @@ Example of mixing the default profile and another profile:
.. code-block:: console
$ # A cluster launched under the default AWS identity.
- $ sky launch --cloud aws -c default
+ $ sky launch --infra aws -c default
$ # A cluster launched under a different profile.
- $ AWS_PROFILE=AdministratorAccess-12345 sky launch --cloud aws -c other-profile-cluster
+ $ AWS_PROFILE=AdministratorAccess-12345 sky launch --infra aws -c other-profile-cluster
If you are using a :ref:`remote API server <sky-api-server>`, the AWS credentials are configured on the remote server. Overriding ``AWS_PROFILE`` on the client side won't work.
diff --git a/docs/source/cloud-setup/cloud-permissions/gcp.rst b/docs/source/cloud-setup/cloud-permissions/gcp.rst
index 40e42ee6d78..f0a2bbb34be 100644
--- a/docs/source/cloud-setup/cloud-permissions/gcp.rst
+++ b/docs/source/cloud-setup/cloud-permissions/gcp.rst
@@ -69,7 +69,7 @@ The easiest way to grant permissions to a user access your GCP project without t
roles/iam.securityAdmin
.. note::
- If the ``roles/iam.securityAdmin`` role is undesirable, you can do the following. First, include the role and have any user (e.g., the admin) run ``sky launch --cloud gcp`` successfully once. This is to create the necessary service account. Then, replace the role ``roles/iam.securityAdmin`` with ``roles/iam.roleViewer`` in the list above.
+ If the ``roles/iam.securityAdmin`` role is undesirable, you can do the following. First, include the role and have any user (e.g., the admin) run ``sky launch --infra gcp`` successfully once. This is to create the necessary service account. Then, replace the role ``roles/iam.securityAdmin`` with ``roles/iam.roleViewer`` in the list above.
Optionally, to use TPUs, add the following role:
diff --git a/docs/source/cloud-setup/quota.rst b/docs/source/cloud-setup/quota.rst
index f30862b75fd..ce2e76ed327 100644
--- a/docs/source/cloud-setup/quota.rst
+++ b/docs/source/cloud-setup/quota.rst
@@ -17,7 +17,7 @@ AWS
1. Go to the `EC2 Quotas console <https://console.aws.amazon.com/servicequotas/home/services/ec2/quotas>`_.
2. **Select a region** on the top right.
-3. Choose an EC2 instance type from the list (e.g, ``Running On-Demand P instances`` or ``All P Spot Instance Requests``). Use ``sky show-gpus --cloud aws --all`` or check `here <https://aws.amazon.com/ec2/instance-types/>`__ for more instance types.
+3. Choose an EC2 instance type from the list (e.g, ``Running On-Demand P instances`` or ``All P Spot Instance Requests``). Use ``sky show-gpus --infra aws --all`` or check `here <https://aws.amazon.com/ec2/instance-types/>`__ for more instance types.
4. Click the quota name, and then choose **Request quota increase**.
5. For **Change quota value**, enter the new value.
6. Choose **Request**.
@@ -57,7 +57,7 @@ OCI
1. Go to the `OCI Limits, Quotas and Usage console <https://cloud.oracle.com/limits>`_ to check your current resources status.
2. Click the **request a service limit increase** link on the page if you want to increase quotas.
3. Choose a **Service Category** from the list (e.g, ``Compute``).
-4. Choose a **Resource** from the list (e.g, ``GPUs for GPU.A10 based VM and BM Instances``). Use ``sky show-gpus --cloud oci --all`` or check `here <https://docs.oracle.com/en-us/iaas/Content/Compute/References/computeshapes.htm>`__ for more instance types.
+4. Choose a **Resource** from the list (e.g, ``GPUs for GPU.A10 based VM and BM Instances``). Use ``sky show-gpus --infra oci --all`` or check `here <https://docs.oracle.com/en-us/iaas/Content/Compute/References/computeshapes.htm>`__ for more instance types.
5. Enter the **Limit** field for your new limit and **Reason for request** for justification.
6. Click **Create Support Request** to submit.
7. You may check `OCI Service Limits <https://docs.oracle.com/en-us/iaas/Content/General/Concepts/servicelimits.htm#computelimits>`_ for more information.
diff --git a/docs/source/compute/gpus.rst b/docs/source/compute/gpus.rst
index 4c30021b7ba..3fbcd583cbc 100644
--- a/docs/source/compute/gpus.rst
+++ b/docs/source/compute/gpus.rst
@@ -26,7 +26,7 @@ You can query the accelerators available in your Kubernetes clusters with:
.. code-block:: console
- $ sky show-gpus --cloud k8s
+ $ sky show-gpus --infra k8s
.. code-block:: text
diff --git a/docs/source/examples/auto-failover.rst b/docs/source/examples/auto-failover.rst
index 596e9d2c415..b01988f01a7 100644
--- a/docs/source/examples/auto-failover.rst
+++ b/docs/source/examples/auto-failover.rst
@@ -91,11 +91,11 @@ GCP, where it succeeded after one region:
Considered resources (1 node):
----------------------------------------------------------------------------------------------------
- CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN
+ INFRA INSTANCE vCPUs Mem(GB) GPUS COST ($) CHOSEN
----------------------------------------------------------------------------------------------------
- Azure Standard_ND96asr_v4 96 900 A100:8 eastus 27.20 ✔
- GCP a2-highgpu-8g 96 680 A100:8 us-central1-a 29.39
- AWS p4d.24xlarge 96 1152 A100:8 us-east-1 32.77
+ Azure (eastus) Standard_ND96asr_v4 96 900 A100:8 27.20 ✔
+ GCP (us-central1-a) a2-highgpu-8g 96 680 A100:8 29.39
+ AWS (us-east-1) p4d.24xlarge 96 1152 A100:8 32.77
----------------------------------------------------------------------------------------------------
Launching a new cluster 'a100-8'. Proceed? [Y/n]:
@@ -135,11 +135,11 @@ A10, L4, and A10g GPUs, using :code:`sky launch task.yaml`.
$ sky launch task.yaml
...
-----------------------------------------------------------------------------------------------------
- CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN
+ INFRA INSTANCE vCPUs Mem(GB) GPUS COST ($) CHOSEN
-----------------------------------------------------------------------------------------------------
- Azure Standard_NV6ads_A10_v5 6 55 A10:1 eastus 0.45 ✔
- GCP g2-standard-4 4 16 L4:1 us-east4-a 0.70
- AWS g5.xlarge 4 16 A10G:1 us-east-1 1.01
+ Azure (eastus) Standard_NV6ads_A10_v5 6 55 A10:1 0.45 ✔
+ GCP (us-east4-a) g2-standard-4 4 16 L4:1 0.70
+ AWS (us-east-1) g5.xlarge 4 16 A10G:1 1.01
-----------------------------------------------------------------------------------------------------
@@ -165,11 +165,10 @@ If a task would like to specify multiple candidate resources (not only GPUs), th
resources:
ordered: # Candidate resources in a preference order
- - cloud: gcp
+ - infra: gcp
accelerators: A100-80GB
- instance_type: g5.xlarge
- - cloud: azure
- region: eastus
+ - infra: azure/eastus
accelerators: A100
@@ -178,11 +177,10 @@ If a task would like to specify multiple candidate resources (not only GPUs), th
resources:
any_of: # Candidate resources that can be chosen in any order
- - cloud: gcp
+ - infra: gcp
accelerators: A100-80GB
- instance_type: g5.xlarge
- - cloud: azure
- region: eastus
+ - infra: azure/eastus
accelerators: A100
.. tip::
@@ -198,18 +196,18 @@ If a task would like to specify multiple candidate resources (not only GPUs), th
accelerators: {A10g:8, A10:8, L4:8, A100:8}
any_of:
# AWS:
- - region: us-east-1
- - region: us-east-2
- - region: us-west-1
- - region: us-west-2
+ - infra: aws/us-east-1
+ - infra: aws/us-east-2
+ - infra: aws/us-west-1
+ - infra: aws/us-west-2
# GCP
- - region: us-central1
- - region: us-east1
- - region: us-east4
- - region: us-west1
- - region: us-west2
- - region: us-west3
- - region: us-west4
+ - infra: gcp/us-central1
+ - infra: gcp/us-east1
+ - infra: gcp/us-east4
+ - infra: gcp/us-west1
+ - infra: gcp/us-west2
+ - infra: gcp/us-west3
+ - infra: gcp/us-west4
.. hint::
@@ -224,12 +222,12 @@ This will generate the following output:
Considered resources (1 node):
---------------------------------------------------------------------------------------------
- CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN
+ INFRA INSTANCE vCPUs Mem(GB) GPUS COST ($) CHOSEN
---------------------------------------------------------------------------------------------
- GCP g2-standard-96 96 384 L4:8 us-east4-a 7.98 ✔
- AWS g5.48xlarge 192 768 A10G:8 us-east-1 16.29
- GCP a2-highgpu-8g 96 680 A100:8 us-east1-b 29.39
- AWS p4d.24xlarge 96 1152 A100:8 us-east-1 32.77
+ GCP (us-east4-a) g2-standard-96 96 384 L4:8 7.98 ✔
+ AWS (us-east-1) g5.48xlarge 192 768 A10G:8 16.29
+ GCP (us-east1-b) a2-highgpu-8g 96 680 A100:8 29.39
+ AWS (us-east-1) p4d.24xlarge 96 1152 A100:8 32.77
---------------------------------------------------------------------------------------------
Launching a new cluster 'mycluster'. Proceed? [Y/n]:
diff --git a/docs/source/examples/managed-jobs.rst b/docs/source/examples/managed-jobs.rst
index 5d033fbd3f7..9c8287c6ca6 100644
--- a/docs/source/examples/managed-jobs.rst
+++ b/docs/source/examples/managed-jobs.rst
@@ -19,9 +19,9 @@ To start a managed job, use :code:`sky jobs launch`:
Managed job 'myjob' will be launched on (estimated):
Considered resources (1 node):
------------------------------------------------------------------------------------------
- CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN
+ INFRA INSTANCE vCPUs Mem(GB) GPUS COST ($) CHOSEN
------------------------------------------------------------------------------------------
- AWS m6i.2xlarge 8 32 - us-east-1 0.38 ✔
+ AWS (us-east-1) m6i.2xlarge 8 32 - 0.38 ✔
------------------------------------------------------------------------------------------
Launching a managed job 'myjob'. Proceed? [Y/n]: Y
... <job is submitted and launched>
@@ -446,7 +446,7 @@ To submit the pipeline, the same command :code:`sky jobs launch` is used. The pi
Fetching managed job statuses...
Managed jobs
In progress jobs: 1 RECOVERING
- ID TASK NAME RESOURCES SUBMITTED TOT. DURATION JOB DURATION #RECOVERIES STATUS
+ ID TASK NAME REQUESTED SUBMITTED TOT. DURATION JOB DURATION #RECOVERIES STATUS
8 pipeline - 50 mins ago 47m 45s - 1 RECOVERING
↳ 0 train 1x [V100:8][Spot|On-demand] 50 mins ago 47m 45s - 1 RECOVERING
↳ 1 eval 1x [T4:1] - - - 0 PENDING
@@ -560,8 +560,7 @@ To achieve the above, you can specify custom configs in :code:`~/.sky/config.yam
resources:
# All configs below are optional.
# Specify the location of the jobs controller.
- cloud: gcp
- region: us-central1
+ infra: gcp/us-central1
# Bump cpus to allow more managed jobs to be launched concurrently. (Default: 4+)
cpus: 8+
# Bump memory to allow more managed jobs to be running at once.
@@ -584,10 +583,10 @@ To see your current jobs controller, use :code:`sky status`.
$ sky status --refresh
Clusters
- NAME LAUNCHED RESOURCES STATUS AUTOSTOP COMMAND
- my-cluster-1 1 week ago 1x AWS(m6i.4xlarge) STOPPED - sky launch --cpus 16 --cloud...
- my-other-cluster 1 week ago 1x GCP(n2-standard-16) STOPPED - sky launch --cloud gcp --...
- sky-jobs-controller-919df126 1 day ago 1x AWS(r6i.xlarge, disk_size=50) STOPPED 10m sky jobs launch --cpus 2 ...
+ NAME INFRA RESOURCES STATUS AUTOSTOP LAUNCHED
+ my-cluster-1 AWS (us-east-1) 1x(cpus=16, m6i.4xlarge, ...) STOPPED - 1 week ago
+ my-other-cluster GCP (us-central1) 1x(cpus=16, n2-standard-16, ...) STOPPED - 1 week ago
+ sky-jobs-controller-919df126 AWS (us-east-1) 1x(cpus=2, r6i.xlarge, disk_size=50) STOPPED 10m 1 day ago
Managed jobs
No in-progress managed jobs.
@@ -642,7 +641,7 @@ For maximum parallelism, the following configuration is recommended:
controller:
resources:
# In our testing, aws > gcp > azure
- cloud: aws
+ infra: aws
cpus: 128
# Azure does not have 128+ CPU instances, so use 96 instead
# cpus: 96
diff --git a/docs/source/getting-started/installation.rst b/docs/source/getting-started/installation.rst
index c14c1718a48..58a34e8f787 100644
--- a/docs/source/getting-started/installation.rst
+++ b/docs/source/getting-started/installation.rst
@@ -21,7 +21,7 @@ Install SkyPilot using pip:
conda create -y -n sky python=3.10
conda activate sky
- # Choose your cloud:
+ # Choose your infra:
pip install "skypilot[kubernetes]"
pip install "skypilot[aws]"
@@ -50,7 +50,7 @@ Install SkyPilot using pip:
conda create -y -n sky python=3.10
conda activate sky
- # Choose your cloud:
+ # Choose your infra:
pip install "skypilot-nightly[kubernetes]"
pip install "skypilot-nightly[aws]"
@@ -83,7 +83,7 @@ Install SkyPilot using pip:
git clone https://github.com/skypilot-org/skypilot.git
cd skypilot
- # Choose your cloud:
+ # Choose your infra:
pip install -e ".[kubernetes]"
pip install -e ".[aws]"
diff --git a/docs/source/getting-started/quickstart.rst b/docs/source/getting-started/quickstart.rst
index 2a7fecf7709..6740d6f8b75 100644
--- a/docs/source/getting-started/quickstart.rst
+++ b/docs/source/getting-started/quickstart.rst
@@ -32,7 +32,7 @@ Copy the following YAML into a ``hello_sky.yaml`` file:
resources:
# Optional; if left out, automatically pick the cheapest cloud.
- cloud: aws
+ infra: aws
# 8x NVIDIA A100 GPU
accelerators: A100:8
@@ -126,9 +126,9 @@ This may show multiple clusters, if you have created several:
.. code-block::
- NAME LAUNCHED RESOURCES COMMAND STATUS
- mygcp 1 day ago 1x GCP(n1-highmem-8) sky launch -c mygcp --cloud gcp STOPPED
- mycluster 4 mins ago 1x AWS(p4d.24xlarge, {'A100': 8}) sky exec mycluster hello_sky.yaml UP
+ NAME INFRA RESOURCES STATUS AUTOSTOP LAUNCHED
+ mygcp GCP (us-central1-a) 1x(cpus=4, mem=16, n2-standard-4, ...) STOPPED - 1 day ago
+ mycluster AWS (us-east-1) 1x(gpus=A100:8, p4d.24xlarge, ...) UP - 4 mins ago
See here for a list of all possible :ref:`cluster states <sky-status>`.
diff --git a/docs/source/reference/api-server/api-server.rst b/docs/source/reference/api-server/api-server.rst
index 42fe4b43b31..519ae72d690 100644
--- a/docs/source/reference/api-server/api-server.rst
+++ b/docs/source/reference/api-server/api-server.rst
@@ -128,16 +128,16 @@ To see other users' clusters and the job/serve controllers, use the ``-u`` flag.
$ sky status -u
Clusters
- NAME USER LAUNCHED RESOURCES STATUS AUTOSTOP COMMAND
- my-cluster-2 my-user 2 hrs ago 1x GCP(n2-standard-8) STOPPED - sky launch task-2.yaml
- other-cluster other-user 1 week ago 1x AWS(m6i.16xlarge) UP - sky launch --cloud aws...
- my-cluster-1 my-user 2 months ago 1x AWS(m6i.4xlarge) STOPPED - sky launch task-1.yaml
- sky-jobs-controller-7c3d4ff7 root 2 days ago 1x AWS(r6i.xlarge, disk_size=50) STOPPED 10m sky jobs launch --env PART...
+ NAME USER LAUNCHED INFRA RESOURCES STATUS AUTOSTOP
+ my-cluster-2 my-user 2 hrs ago GCP (us-central1-a) 1x(cpus=8, mem=32, n2-standard-8, ...) STOPPED -
+ other-cluster other-user 1 week ago AWS (us-east-1) 1x(cpus=64, mem=256, m6i.16xlarge, ...) UP -
+ my-cluster-1 my-user 2 months ago AWS (us-east-1) 1x(cpus=16, mem=64, m6i.4xlarge, ...) STOPPED -
+ sky-jobs-controller-7c3d4ff7 root 2 days ago AWS (us-east-1) 1x(cpus=4, mem=32, r6i.xlarge, ...) STOPPED 10m
$ sky jobs queue -u
Fetching managed job statuses...
Managed jobs
- ID TASK NAME USER RESOURCES SUBMITTED TOT. DURATION JOB DURATION #RECOVERIES STATUS
+ ID TASK NAME USER REQUESTED SUBMITTED TOT. DURATION JOB DURATION #RECOVERIES STATUS
3 - job-2 my-user 1x[CPU:2] 2 days ago 2m 10s 1m 14s 0 CANCELLED
2 - other-job other-user 1x[CPU:2] 2 days ago 11m 54s 10m 52s 0 CANCELLED
1 - job-1 my-use 1x[CPU:2] 5 days ago 1m 7s 3s 0 SUCCEEDED
diff --git a/docs/source/reference/async.rst b/docs/source/reference/async.rst
index 32316bfbfe1..1ce2e6015ae 100644
--- a/docs/source/reference/async.rst
+++ b/docs/source/reference/async.rst
@@ -38,10 +38,10 @@ For example, when a user runs ``sky launch -c my-cluster``, the following output
$ sky launch -c my-cluster --cpus 2
Considered resources (1 node):
---------------------------------------------------------------------------------------------
- CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN
+ INFRA INSTANCE vCPUs Mem(GB) GPUS COST ($) CHOSEN
---------------------------------------------------------------------------------------------
- Kubernetes 2CPU--2GB 2 2 - in-cluster 0.00 ✔
- AWS m6i.large 2 8 - us-east-1 0.10
+ Kubernetes (my-cluster) 2CPU--2GB 2 2 - 0.00 ✔
+ AWS (us-east-1) m6i.large 2 8 - 0.098
---------------------------------------------------------------------------------------------
Launching a new cluster 'my-cluster'. Proceed? [Y/n]:
⚙︎ Launching on Kubernetes.
diff --git a/docs/source/reference/auto-stop.rst b/docs/source/reference/auto-stop.rst
index f034c520d3b..e6071dd76c8 100644
--- a/docs/source/reference/auto-stop.rst
+++ b/docs/source/reference/auto-stop.rst
@@ -58,15 +58,15 @@ To view the status of the cluster, use ``sky status [--refresh]``:
.. code-block:: bash
$ sky status
- NAME LAUNCHED RESOURCES STATUS AUTOSTOP COMMAND
- mycluster 1 min ago 2x AWS(m4.2xlarge) UP 10 min sky launch -d -c ...
- mycluster2 1 min ago 2x AWS(m4.2xlarge) UP 10 min(down) sky launch -d -c ...
+ NAME INFRA RESOURCES STATUS AUTOSTOP LAUNCHED
+ mycluster AWS (us-east-1) 2x(cpus=8, m4.2xlarge, ...) UP 10 min 1 min ago
+ mycluster2 AWS (us-east-1) 2x(cpus=8, m4.2xlarge, ...) UP 10 min(down) 1 min ago
# Refresh the statuses by querying the cloud providers
$ sky status --refresh
I 06-27 13:36:11 backend_utils.py:2273] Autodowned cluster: mycluster2
- NAME LAUNCHED RESOURCES STATUS AUTOSTOP COMMAND
- mycluster 11 min ago 2x AWS(m4.2xlarge) STOPPED 10 min sky launch -d -c ...
+ NAME INFRA RESOURCES STATUS AUTOSTOP LAUNCHED
+ mycluster AWS (us-east-1) 2x(cpus=8, m4.2xlarge, ...) STOPPED 10 min 11 min ago
Note that :code:`sky status` shows the cached statuses, which can be outdated for clusters with autostop/autodown scheduled.
To query the latest statuses of those clusters, use :code:`sky status --refresh`.
diff --git a/docs/source/reference/config.rst b/docs/source/reference/config.rst
index 58bb7b66094..18460060836 100644
--- a/docs/source/reference/config.rst
+++ b/docs/source/reference/config.rst
@@ -37,8 +37,7 @@ Below is the configuration syntax and some example values. See detailed explanat
:ref:`bucket <config-yaml-jobs-bucket>`: s3://my-bucket/
controller:
:ref:`resources <config-yaml-jobs-controller-resources>`: # same spec as 'resources' in a task YAML
- cloud: gcp
- region: us-central1
+ infra: gcp/us-central1
cpus: 4+ # number of vCPUs, max concurrent spot jobs = 2 * cpus
disk_size: 100
:ref:`autostop <config-yaml-jobs-controller-autostop>`:
@@ -214,8 +213,7 @@ Example:
controller:
resources: # same spec as 'resources' in a task YAML
# optionally set specific cloud/region
- cloud: gcp
- region: us-central1
+ infra: gcp/us-central1
# default resources:
cpus: 4+
memory: 8x
diff --git a/docs/source/reference/kubernetes/kubernetes-deployment.rst b/docs/source/reference/kubernetes/kubernetes-deployment.rst
index 3324999007f..d42826e5b63 100644
--- a/docs/source/reference/kubernetes/kubernetes-deployment.rst
+++ b/docs/source/reference/kubernetes/kubernetes-deployment.rst
@@ -143,11 +143,11 @@ Deploying on Google Cloud GKE
$ sky check
-5. [If using GPUs] Check available GPUs in the kubernetes cluster with :code:`sky show-gpus --cloud k8s`
+5. [If using GPUs] Check available GPUs in the kubernetes cluster with :code:`sky show-gpus --infra k8s`
.. code-block:: console
- $ sky show-gpus --cloud k8s
+ $ sky show-gpus --infra k8s
GPU REQUESTABLE_QTY_PER_NODE UTILIZATION
L4 1, 2, 4 6 of 8 free
A100 1, 2 2 of 4 free
@@ -198,11 +198,11 @@ Deploying on Amazon EKS
$ sky check
-5. [If using GPUs] Check available GPUs in the kubernetes cluster with :code:`sky show-gpus --cloud k8s`
+5. [If using GPUs] Check available GPUs in the kubernetes cluster with :code:`sky show-gpus --infra k8s`
.. code-block:: console
- $ sky show-gpus --cloud k8s
+ $ sky show-gpus --infra k8s
GPU REQUESTABLE_QTY_PER_NODE UTILIZATION
A100 1, 2 2 of 2 free
diff --git a/docs/source/reference/kubernetes/kubernetes-getting-started.rst b/docs/source/reference/kubernetes/kubernetes-getting-started.rst
index b6ef3fba103..7f46b64912a 100644
--- a/docs/source/reference/kubernetes/kubernetes-getting-started.rst
+++ b/docs/source/reference/kubernetes/kubernetes-getting-started.rst
@@ -111,15 +111,15 @@ Once your cluster administrator has :ref:`setup a Kubernetes cluster <kubernetes
Considered resources (1 node):
---------------------------------------------------------------------------------------------------
- CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN
+ INFRA INSTANCE vCPUs Mem(GB) GPUS COST ($) CHOSEN
---------------------------------------------------------------------------------------------------
- Kubernetes 2CPU--2GB 2 2 - kubernetes 0.00 ✔
- AWS m6i.large 2 8 - us-east-1 0.10
- Azure Standard_D2s_v5 2 8 - eastus 0.10
- GCP n2-standard-2 2 8 - us-central1 0.10
- IBM bx2-8x32 8 32 - us-east 0.38
- Lambda gpu_1x_a10 30 200 A10:1 us-east-1 0.60
- ---------------------------------------------------------------------------------------------------.
+ Kubernetes (kind-skypilot) - 2 2 - 0.00 ✔
+ AWS (us-east-1) m6i.large 2 8 - 0.10
+ Azure (eastus) Standard_D2s_v5 2 8 - 0.10
+ GCP (us-central1-a) n2-standard-2 2 8 - 0.10
+ IBM (us-east) bx2-8x32 8 32 - 0.38
+ Lambda (us-east-1) gpu_1x_a10 30 200 A10:1 0.60
+ ----------------------------------------------------------------------------------------------------
.. note::
@@ -152,28 +152,28 @@ Unlike :code:`sky status` which lists only the SkyPilot resources launched by th
$ sky status --k8s
Kubernetes cluster state (context: mycluster)
SkyPilot clusters
- USER NAME LAUNCHED RESOURCES STATUS
- alice infer-svc-1 23 hrs ago 1x Kubernetes(cpus=1, mem=1, {'L4': 1}) UP
- alice sky-jobs-controller-80b50983 2 days ago 1x Kubernetes(cpus=4, mem=4) UP
- alice sky-serve-controller-80b50983 23 hrs ago 1x Kubernetes(cpus=4, mem=4) UP
- bob dev 1 day ago 1x Kubernetes(cpus=2, mem=8, {'H100': 1}) UP
- bob multinode-dev 1 day ago 2x Kubernetes(cpus=2, mem=2) UP
- bob sky-jobs-controller-2ea485ea 2 days ago 1x Kubernetes(cpus=4, mem=4) UP
+ USER NAME LAUNCHED INFRA RESOURCES STATUS
+ alice infer-svc-1 23 hrs ago Kubernetes 1x(gpus=L4:1, ...) UP
+ alice sky-jobs-controller-80b50983 2 days ago Kubernetes 1x(cpus=4, mem=4, ...) UP
+ alice sky-serve-controller-80b50983 23 hrs ago Kubernetes 1x(cpus=4, mem=4, ...) UP
+ bob dev 1 day ago Kubernetes 1x(gpus=H100:1, ...) UP
+ bob multinode-dev 1 day ago Kubernetes 2x(cpus=2, mem=2, ...) UP
+ bob sky-jobs-controller-2ea485ea 2 days ago Kubernetes 1x(cpus=4, mem=4, ...) UP
Managed jobs
In progress tasks: 1 STARTING
- USER ID TASK NAME RESOURCES SUBMITTED TOT. DURATION JOB DURATION #RECOVERIES STATUS
+ USER ID TASK NAME REQUESTED SUBMITTED TOT. DURATION JOB DURATION #RECOVERIES STATUS
alice 1 - eval 1x[CPU:1+] 2 days ago 49s 8s 0 SUCCEEDED
bob 4 - pretrain 1x[H100:4] 1 day ago 1h 1m 11s 1h 14s 0 SUCCEEDED
bob 3 - bigjob 1x[CPU:16] 1 day ago 1d 21h 11m 4s - 0 STARTING
bob 2 - failjob 1x[CPU:1+] 1 day ago 54s 9s 0 FAILED
bob 1 - shortjob 1x[CPU:1+] 2 days ago 1h 1m 19s 1h 16s 0 SUCCEEDED
-You can also inspect the real-time GPU usage on the cluster with :code:`sky show-gpus --cloud k8s`.
+You can also inspect the real-time GPU usage on the cluster with :code:`sky show-gpus --infra k8s`.
.. code-block:: console
- $ sky show-gpus --cloud k8s
+ $ sky show-gpus --infra k8s
Kubernetes GPUs
GPU REQUESTABLE_QTY_PER_NODE UTILIZATION
L4 1, 2, 4 12 of 12 free
diff --git a/docs/source/reference/kubernetes/kubernetes-priorities.rst b/docs/source/reference/kubernetes/kubernetes-priorities.rst
index ed8cec474ea..f7911ed9a1b 100644
--- a/docs/source/reference/kubernetes/kubernetes-priorities.rst
+++ b/docs/source/reference/kubernetes/kubernetes-priorities.rst
@@ -70,7 +70,7 @@ We use two simple counter jobs in this example:
# high-priority-job.yaml
resources:
- cloud: kubernetes
+ infra: kubernetes
cpus: 4
run: |
@@ -91,7 +91,7 @@ We use two simple counter jobs in this example:
# low-priority-job.yaml
resources:
- cloud: kubernetes
+ infra: kubernetes
cpus: 4
run: |
diff --git a/docs/source/reference/kubernetes/kubernetes-setup.rst b/docs/source/reference/kubernetes/kubernetes-setup.rst
index 2fa3e80f119..e6b9ef101a3 100644
--- a/docs/source/reference/kubernetes/kubernetes-setup.rst
+++ b/docs/source/reference/kubernetes/kubernetes-setup.rst
@@ -217,7 +217,7 @@ You can also check the GPUs available on your nodes by running:
.. code-block:: console
- $ sky show-gpus --cloud k8s
+ $ sky show-gpus --infra k8s
Kubernetes GPUs
GPU REQUESTABLE_QTY_PER_NODE UTILIZATION
L4 1, 2, 4 12 of 12 free
diff --git a/docs/source/reference/kubernetes/kubernetes-troubleshooting.rst b/docs/source/reference/kubernetes/kubernetes-troubleshooting.rst
index 3abb8e42076..41ac13be372 100644
--- a/docs/source/reference/kubernetes/kubernetes-troubleshooting.rst
+++ b/docs/source/reference/kubernetes/kubernetes-troubleshooting.rst
@@ -87,7 +87,7 @@ Next, try running a simple hello world task to verify that SkyPilot can launch t
.. code-block:: bash
- $ sky launch -y -c mycluster --cloud k8s -- "echo hello world"
+ $ sky launch -y -c mycluster --infra k8s -- "echo hello world"
# Task should run and print "hello world" to the console
# Once you have verified that the task runs, you can delete it
@@ -174,7 +174,7 @@ Run :code:`sky check` to verify that SkyPilot can see your GPUs.
# Should show `Kubernetes: Enabled` and should not print any warnings about GPU support.
# List the available GPUs in your cluster
- $ sky show-gpus --cloud k8s
+ $ sky show-gpus --infra k8s
Step B4 - Try launching a dummy GPU task
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -184,7 +184,7 @@ Next, try running a simple GPU task to verify that SkyPilot can launch GPU tasks
.. code-block:: bash
# Replace the GPU type from the sky show-gpus output in the task launch command
- $ sky launch -y -c mygpucluster --cloud k8s --gpu <gpu-type>:1 -- "nvidia-smi"
+ $ sky launch -y -c mygpucluster --infra k8s --gpu <gpu-type>:1 -- "nvidia-smi"
# Task should run and print the nvidia-smi output to the console
@@ -298,7 +298,7 @@ Next, try running a simple task with a service to verify that SkyPilot can launc
.. code-block:: bash
- $ sky launch -y -c myserver --cloud k8s --ports 8080 -- "python -m http.server 8080"
+ $ sky launch -y -c myserver --infra k8s --ports 8080 -- "python -m http.server 8080"
# Obtain the endpoint of the service
$ sky status --endpoint 8080 myserver
diff --git a/docs/source/reference/kubernetes/multi-kubernetes.rst b/docs/source/reference/kubernetes/multi-kubernetes.rst
index 3d70cebb253..acbf2cde6f6 100644
--- a/docs/source/reference/kubernetes/multi-kubernetes.rst
+++ b/docs/source/reference/kubernetes/multi-kubernetes.rst
@@ -96,11 +96,11 @@ To check the enabled Kubernetes clusters, you can run ``sky check k8s``.
├── my-h100-cluster
└── my-tpu-cluster
-To check GPUs available in a Kubernetes cluster, you can run ``sky show-gpus --cloud k8s``.
+To check GPUs available in a Kubernetes cluster, you can run ``sky show-gpus --infra k8s``.
.. code-block:: console
- $ sky show-gpus --cloud k8s
+ $ sky show-gpus --infra k8s
Kubernetes GPUs
GPU UTILIZATION
H100 16 of 16 free
@@ -128,31 +128,33 @@ through the Kubernetes clusters in the same order as they are specified in the f
.. code-block:: console
- $ sky launch --gpus H100 --cloud k8s echo 'Hello World'
+ $ sky launch --gpus H100 --infra k8s echo 'Hello World'
Considered resources (1 node):
- ------------------------------------------------------------------------------------------------------------
- CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN
- ------------------------------------------------------------------------------------------------------------
- Kubernetes 2CPU--8GB--1H100 2 8 H100:1 my-h100-cluster-gke 0.00 ✔
- Kubernetes 2CPU--8GB--1H100 2 8 H100:1 my-h100-cluster-eks 0.00
- ------------------------------------------------------------------------------------------------------------
+ ---------------------------------------------------------------------------------------------------------
+ INFRA INSTANCE vCPUs Mem(GB) GPUS COST ($) CHOSEN
+ ---------------------------------------------------------------------------------------------------------
+ Kubernetes (my-eks-cluster) 2CPU--2GB 2 2 - 0.00 ✔
+ Kubernetes (gke-skypilot) 4CPU--8GB 4 8 - 0.00
+ AWS (us-east-1) m6i.large 2 8 - 0.10
+ GCP (us-central1-a) n2-standard-2 2 8 - 0.10
+ ---------------------------------------------------------------------------------------------------------
Launching in a specific Kubernetes cluster
------------------------------------------
-SkyPilot uses the ``region`` field to denote a Kubernetes context. You can point to a Kubernetes cluster
-by specifying the ``--region`` with the context name for that cluster.
+SkyPilot uses the ``infra`` field to select a Kubernetes context. You can point to a specific Kubernetes cluster
+by specifying ``--infra k8s/<context-name>`` with that cluster's context name.
.. code-block:: console
$ # Launch in a specific Kubernetes cluster.
- $ sky launch --cloud k8s --region my-tpu-cluster echo 'Hello World'
+ $ sky launch --infra k8s/my-tpu-cluster echo 'Hello World'
$ # Check the GPUs available in a Kubernetes cluster
- $ sky show-gpus --cloud k8s --region my-h100-cluster ✭ ✱
+ $ sky show-gpus --infra k8s/my-h100-cluster
Kubernetes GPUs
Context: my-h100-cluster
GPU REQUESTABLE_QTY_PER_NODE UTILIZATION
@@ -163,7 +165,7 @@ by specifying the ``--region`` with the context name for that cluster.
my-h100-cluster gke-skypilotalpha-largecpu-05dae726-1usy H100 8 of 8 free
my-h100-cluster gke-skypilotalpha-largecpu-05dae726-4rxa H100 8 of 8 free
-When launching a SkyPilot cluster or task, you can also specify the context name with ``--region`` to launch the cluster or task in.
+When launching a SkyPilot cluster or task, you can also specify the context name with ``--infra`` to launch the cluster or task in.
Dynamically updating clusters to use
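The context pinning shown above can also be expressed through the Python API. A minimal sketch, assuming the ``infra`` argument used in the example scripts later in this change; ``my-tpu-cluster`` is the same placeholder context name as above, written in the ``kubernetes/<context-name>`` form from the YAML spec:

.. code-block:: python

   import sky

   # Sketch: pin a task to one Kubernetes context via an infra string,
   # mirroring `sky launch --infra k8s/my-tpu-cluster` from the docs above.
   task = sky.Task(run='echo Hello World')
   task.set_resources(sky.Resources(infra='kubernetes/my-tpu-cluster'))
   # sky.launch(task)  # commented out: would create pods in that context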
diff --git a/docs/source/reference/yaml-spec.rst b/docs/source/reference/yaml-spec.rst
index 9c49cec3915..b800227aca8 100644
--- a/docs/source/reference/yaml-spec.rst
+++ b/docs/source/reference/yaml-spec.rst
@@ -26,10 +26,9 @@ Below is the configuration syntax and some example values. See details under ea
:ref:`num_nodes <yaml-spec-num-nodes>`: 4
:ref:`resources <yaml-spec-resources>`:
- # Location.
- :ref:`cloud <yaml-spec-resources-cloud>`: aws
- :ref:`region <yaml-spec-resources-region>`: us-east-1
- :ref:`zone <yaml-spec-resources-zone>`: us-east-1a
+ # Infra to use. For example: ``aws``, ``aws/us-east-1``, ``kubernetes``,
+ # or, ``kubernetes/my-h100-cluster-context``.
+ :ref:`infra <yaml-spec-resources-infra>`: aws
# Hardware.
:ref:`accelerators <yaml-spec-resources-accelerators>`: H100:8
@@ -49,17 +48,14 @@ Below is the configuration syntax and some example values. See details under ea
my-label: my-value
:ref:`any_of <yaml-spec-resources-any-of>`:
- - cloud: aws
- region: us-west-2
+ - infra: aws/us-west-2
accelerators: H100
- - cloud: gcp
+ - infra: gcp/us-central1
accelerators: H100
:ref:`ordered <yaml-spec-resources-ordered>`:
- - cloud: aws
- region: us-east-1
- - cloud: aws
- region: us-west-2
+ - infra: aws/us-east-1
+ - infra: aws/us-west-2
:ref:`job_recovery <yaml-spec-resources-job-recovery>`: none
@@ -151,58 +147,50 @@ Per-node resource requirements (optional).
.. code-block:: yaml
resources:
- cloud: aws
+ infra: aws
instance_type: p3.8xlarge
-.. _yaml-spec-resources-cloud:
+.. _yaml-spec-resources-infra:
-``resources.cloud``
+``resources.infra``
~~~~~~~~~~~~~~~~~~~
-The cloud to use (optional).
-.. code-block:: yaml
-
- resources:
- cloud: aws
+Infrastructure to use (optional). Format: ``<cloud>``, ``<cloud>/<region>``, ``<cloud>/<region>/<zone>``, ``kubernetes/<context-name>``.
-OR
+Examples: ``aws``, ``aws/us-east-1``, ``aws/us-east-1/us-east-1a``, ``aws/*/us-east-1a``, ``kubernetes/my-cluster-context``.
.. code-block:: yaml
resources:
- cloud: gcp
+ infra: aws
-.. _yaml-spec-resources-region:
-
-``resources.region``
-~~~~~~~~~~~~~~~~~~~~
+.. code-block:: yaml
-The region to use (optional).
+ resources:
+ infra: kubernetes
-Auto-failover will be disabled if this is specified.
+You can also specify a particular region, zone, or Kubernetes context.
.. code-block:: yaml
resources:
- region: us-east-1
+ infra: aws/us-east-1
-.. _yaml-spec-resources-zone:
-
-``resources.zone``
-~~~~~~~~~~~~~~~~~~
+.. code-block:: yaml
-The zone to use (optional).
+ resources:
+ infra: aws/us-east-1/us-east-1a
-Auto-failover will be disabled if this is specified.
.. code-block:: yaml
resources:
- zone: us-east-1a
+ infra: kubernetes/my-h100-cluster-context
+
.. _yaml-spec-resources-accelerators:
@@ -658,12 +646,10 @@ Example:
.. code-block:: yaml
resources:
+ accelerators: H100
any_of:
- - cloud: aws
- region: us-west-2
- accelerators: H100
- - cloud: gcp
- accelerators: H100
+ - infra: aws/us-west-2
+ - infra: gcp/us-central1
.. _yaml-spec-resources-ordered:
@@ -683,10 +669,8 @@ Example:
resources:
ordered:
- - cloud: aws
- region: us-east-1
- - cloud: aws
- region: us-west-2
+ - infra: aws/us-east-1
+ - infra: aws/us-west-2
.. _yaml-spec-resources-job-recovery:
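The ``any_of`` example above has a Python-API counterpart: passing a set of ``Resources`` lets the optimizer pick any one of them. A minimal sketch, assuming the ``infra`` argument from the example scripts in this change; the regions are the same placeholders as in the spec:

.. code-block:: python

   import sky

   # Sketch of the `any_of` example: either infra may be chosen for an H100 job.
   task = sky.Task(run='nvidia-smi')
   task.set_resources({
       sky.Resources(infra='aws/us-west-2', accelerators='H100'),
       sky.Resources(infra='gcp/us-central1', accelerators='H100'),
   })
   # sky.launch(task)  # commented out: would provision real GPUs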
diff --git a/docs/source/reservations/existing-machines.rst b/docs/source/reservations/existing-machines.rst
index 1a6c14db730..e2a44b45310 100644
--- a/docs/source/reservations/existing-machines.rst
+++ b/docs/source/reservations/existing-machines.rst
@@ -106,7 +106,7 @@ Deploying SkyPilot
✔ Remote k3s is running.
✔ Nvidia GPU Operator installed successfully.
Cluster deployment done. You can now run tasks on this cluster.
- E.g., run a task with: sky launch --cloud kubernetes -- echo hello world.
+ E.g., run a task with: sky launch --infra kubernetes -- echo hello world.
🎉 Remote cluster deployed successfully.
@@ -120,7 +120,7 @@ Deploying SkyPilot
.. code-block:: console
- $ sky show-gpus --cloud k8s
+ $ sky show-gpus --infra k8s
Kubernetes GPUs
GPU REQUESTABLE_QTY_PER_NODE UTILIZATION
L4 1, 2, 4 12 of 12
@@ -135,7 +135,7 @@ Deploying SkyPilot
my-cluster-4 H100 8 of 8
my-cluster-5 H100 8 of 8
- $ sky launch --cloud k8s --gpus H100:1 -- nvidia-smi
+ $ sky launch --infra k8s --gpus H100:1 -- nvidia-smi
.. tip::
@@ -194,27 +194,27 @@ You can then configure SkyPilot to use :ref:`multiple Kubernetes clusters <multi
# ~/.sky/config.yaml
allowed_contexts:
- - cluster1
- - cluster2
+ - cluster1-ctx
+ - cluster2-ctx
.. code-block:: bash
# Run on cluster1
- sky launch --cloud k8s --region cluster1 -- echo "Running on cluster 1"
+ sky launch --infra k8s/cluster1-ctx -- echo "Running on cluster 1"
# Run on cluster2
- sky launch --cloud k8s --region cluster2 -- echo "Running on cluster 2"
+ sky launch --infra k8s/cluster2-ctx -- echo "Running on cluster 2"
# Let SkyPilot automatically select the cluster with available resources
- sky launch --cloud k8s -- echo "Running on SkyPilot selected cluster"
+ sky launch --infra k8s -- echo "Running on SkyPilot selected cluster"
You can view the available clusters and GPUs using:
.. code-block:: bash
# List GPUs on cluster1
- sky show-gpus --cloud k8s --region cluster1
+ sky show-gpus --infra k8s/cluster1-ctx
# List GPUs on cluster2
- sky show-gpus --cloud k8s --region cluster2
+ sky show-gpus --infra k8s/cluster2-ctx
diff --git a/docs/source/reservations/reservations.rst b/docs/source/reservations/reservations.rst
index a846f80e2f8..9dbf37a022d 100644
--- a/docs/source/reservations/reservations.rst
+++ b/docs/source/reservations/reservations.rst
@@ -77,7 +77,7 @@ For example, if you are launching a cluster with the following SkyPilot YAML:
.. code-block:: yaml
resources:
- cloud: aws
+ infra: aws
accelerators: A100:8
num_nodes: 2
@@ -95,7 +95,7 @@ SkyPilot will utilize the capacity reservation/block as follows:
.. hint::
- If you have a capacity block with a starting time in the future, you can run ``sky jobs launch --region us-east-1 --gpus H100:8 task.yaml`` to let SkyPilot automatically wait until the starting time is reached. Namely, you don't have to wake up at 4:30am PDT to launch your job on a newly available capacity block.
+ If you have a capacity block with a starting time in the future, you can run ``sky jobs launch --infra aws/us-east-1 --gpus H100:8 task.yaml`` to let SkyPilot automatically wait until the starting time is reached. Namely, you don't have to wake up at 4:30am PDT to launch your job on a newly available capacity block.
GCP reservations
@@ -163,7 +163,7 @@ In case you want to specify the DWS configuration for each job/cluster, you can
provision_timeout: 900
resources:
- cloud: gcp
+ infra: gcp
accelerators: A100:8
num_nodes: 4
@@ -188,7 +188,7 @@ To launch a SkyPilot cluster or job on GKE with DWS, you can specify the DWS con
provision_timeout: 900
resources:
- cloud: kubernetes
+ infra: kubernetes
accelerators: A100:8
labels:
kueue.x-k8s.io/queue-name: dws-local-queue
diff --git a/docs/source/running-jobs/many-jobs.rst b/docs/source/running-jobs/many-jobs.rst
index 60e122b5f91..15e66ccd32c 100644
--- a/docs/source/running-jobs/many-jobs.rst
+++ b/docs/source/running-jobs/many-jobs.rst
@@ -301,10 +301,10 @@ Job statuses can be checked via ``sky jobs queue``:
Fetching managed jobs...
Managed jobs
In progress tasks: 10 RUNNING
- ID TASK NAME RESOURCES SUBMITTED TOT. DURATION JOB DURATION #RECOVERIES STATUS
- 10 - train-job10 1x[V100:4] 5 mins ago 5m 5s 1m 12s 0 RUNNING
- 9 - train-job9 1x[V100:4] 6 mins ago 6m 11s 2m 23s 0 RUNNING
- 8 - train-job8 1x[V100:4] 7 mins ago 7m 15s 3m 31s 0 RUNNING
+ ID TASK NAME REQUESTED SUBMITTED TOT. DURATION JOB DURATION #RECOVERIES STATUS
+ 10 - train-job10 1x[V100:4] 5 mins ago 5m 5s 1m 12s 0 RUNNING
+ 9 - train-job9 1x[V100:4] 6 mins ago 6m 11s 2m 23s 0 RUNNING
+ 8 - train-job8 1x[V100:4] 7 mins ago 7m 15s 3m 31s 0 RUNNING
...
diff --git a/docs/source/serving/sky-serve.rst b/docs/source/serving/sky-serve.rst
index 15bf4232b52..046031f6420 100644
--- a/docs/source/serving/sky-serve.rst
+++ b/docs/source/serving/sky-serve.rst
@@ -527,8 +527,7 @@ To achieve the above, you can specify custom configs in :code:`~/.sky/config.yam
resources:
# All configs below are optional.
# Specify the location of the SkyServe controller.
- cloud: gcp
- region: us-central1
+ infra: gcp/us-central1
# Specify the maximum number of services that can be run concurrently.
cpus: 2+ # number of vCPUs, max concurrent services = min(4 * cpus, memory in GiB)
# Specify the disk_size in GB of the SkyServe controller.
diff --git a/docs/source/serving/spot-policy.rst b/docs/source/serving/spot-policy.rst
index 02af9a79f26..86b11dd7fdc 100644
--- a/docs/source/serving/spot-policy.rst
+++ b/docs/source/serving/spot-policy.rst
@@ -88,11 +88,11 @@ When the service is up, we can check the status of the service and the replicas
http-server 1 1m 17s NO_REPLICA 0/4 54.227.229.217:30001
Service Replicas
- SERVICE_NAME ID VERSION ENDPOINT LAUNCHED RESOURCES STATUS REGION
- http-server 1 1 - 1 min ago 1x GCP([Spot]vCPU=2) PROVISIONING us-east1
- http-server 2 1 - 1 min ago 1x GCP([Spot]vCPU=2) PROVISIONING us-central1
- http-server 3 1 - 1 mins ago 1x GCP(vCPU=2) PROVISIONING us-east1
- http-server 4 1 - 1 min ago 1x GCP(vCPU=2) PROVISIONING us-central1
+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS
+ http-server 1 1 - 1 min ago GCP (us-east1) 1x[spot](cpus=2, mem=8, n2-standard-2, ...) PROVISIONING
+ http-server 2 1 - 1 min ago GCP (us-central1) 1x[spot](cpus=2, mem=8, n2-standard-2, ...) PROVISIONING
+ http-server 3 1 - 1 mins ago GCP (us-east1) 1x(cpus=2, mem=8, n2-standard-2, ...) PROVISIONING
+ http-server 4 1 - 1 min ago GCP (us-central1) 1x(cpus=2, mem=8, n2-standard-2, ...) PROVISIONING
When the required number of spot replicas is not available, SkyServe will provision on-demand replicas to meet the target number of replicas. For example, when the target number is 2 and no spot replicas are ready, SkyServe will provision 2 on-demand replicas.
@@ -105,11 +105,11 @@ When the required number of spot replicas are not available, SkyServe will provi
http-server 1 1m 17s READY 2/4 54.227.229.217:30001
Service Replicas
- SERVICE_NAME ID VERSION ENDPOINT LAUNCHED RESOURCES STATUS REGION
- http-server 1 1 http://34.23.22.160:8081 3 min ago 1x GCP([Spot]vCPU=2) READY us-east1
- http-server 2 1 http://34.68.226.193:8081 3 min ago 1x GCP([Spot]vCPU=2) READY us-central1
- http-server 3 1 - 3 mins ago 1x GCP(vCPU=2) SHUTTING_DOWN us-east1
- http-server 4 1 - 3 min ago 1x GCP(vCPU=2) SHUTTING_DOWN us-central1
+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS
+ http-server 1 1 http://34.23.22.160:8081 3 min ago GCP (us-east1) 1x[spot](cpus=2, mem=8, n2-standard-2, ...) READY
+ http-server 2 1 http://34.68.226.193:8081 3 min ago GCP (us-central1) 1x[spot](cpus=2, mem=8, n2-standard-2, ...) READY
+ http-server 3 1 - 3 mins ago GCP (us-east1) 1x(cpus=2, mem=8, n2-standard-2, ...) SHUTTING_DOWN
+ http-server 4 1 - 3 min ago GCP (us-central1) 1x(cpus=2, mem=8, n2-standard-2, ...) SHUTTING_DOWN
When the spot replicas are ready, SkyServe will automatically scale down on-demand replicas to maximize cost savings.
@@ -122,9 +122,9 @@ When the spot replicas are ready, SkyServe will automatically scale down on-dema
http-server 1 3m 59s READY 2/2 54.227.229.217:30001
Service Replicas
- SERVICE_NAME ID VERSION ENDPOINT LAUNCHED RESOURCES STATUS REGION
- http-server 1 1 http://34.23.22.160:8081 4 mins ago 1x GCP([Spot]vCPU=2) READY us-east1
- http-server 2 1 http://34.68.226.193:8081 4 mins ago 1x GCP([Spot]vCPU=2) READY us-central1
+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS
+ http-server 1 1 http://34.23.22.160:8081 4 mins ago GCP (us-east1) 1x[spot](cpus=2, mem=8, n2-standard-2, ...) READY
+ http-server 2 1 http://34.68.226.193:8081 4 mins ago GCP (us-central1) 1x[spot](cpus=2, mem=8, n2-standard-2, ...) READY
In the event of spot instance interruptions (e.g. replica 1), SkyServe will automatically fall back to on-demand replicas (e.g. launch one on-demand replica) to meet the required replica capacity. SkyServe will keep trying to provision one spot replica in case spot availability comes back. Note that SkyServe will try different regions and clouds to maximize the chance of successfully provisioning spot instances.
@@ -137,10 +137,10 @@ In the event of spot instance interruptions (e.g. replica 1), SkyServe will auto
http-server 1 7m 2s READY 1/3 54.227.229.217:30001
Service Replicas
- SERVICE_NAME ID VERSION ENDPOINT LAUNCHED RESOURCES STATUS REGION
- http-server 2 1 http://34.68.226.193:8081 7 mins ago 1x GCP([Spot]vCPU=2) READY us-central1
- http-server 5 1 - 13 secs ago 1x GCP([Spot]vCPU=2) PROVISIONING us-central1
- http-server 6 1 - 13 secs ago 1x GCP(vCPU=2) PROVISIONING us-central1
+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS
+ http-server 2 1 http://34.68.226.193:8081 7 mins ago GCP (us-central1) 1x[spot](cpus=2, mem=8, n2-standard-2, ...) READY
+ http-server 5 1 - 13 secs ago GCP (us-central1) 1x[spot](cpus=2, mem=8, n2-standard-2, ...) PROVISIONING
+ http-server 6 1 - 13 secs ago GCP (us-central1) 1x(cpus=2, mem=8, n2-standard-2, ...) PROVISIONING
Eventually, when the spot availability is back, SkyServe will automatically scale down on-demand replicas.
@@ -153,6 +153,6 @@ Eventually, when the spot availability is back, SkyServe will automatically scal
http-server 1 10m 5s READY 2/3 54.227.229.217:30001
Service Replicas
- SERVICE_NAME ID VERSION ENDPOINT LAUNCHED RESOURCES STATUS REGION
- http-server 2 1 http://34.68.226.193:8081 10 mins ago 1x GCP([Spot]vCPU=2) READY us-central1
- http-server 5 1 http://34.121.49.94:8081 1 min ago 1x GCP([Spot]vCPU=2) READY us-central1
+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS
+ http-server 2 1 http://34.68.226.193:8081 10 mins ago GCP (us-central1) 1x[spot](cpus=2, mem=8, n2-standard-2, ...) READY
+ http-server 5 1 http://34.121.49.94:8081 1 min ago GCP (us-central1) 1x[spot](cpus=2, mem=8, n2-standard-2, ...) READY
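Each ``[spot]`` row in the tables above corresponds to a spot resource request. A minimal sketch of such a request in the Python API, assuming the ``infra`` and ``use_spot`` arguments seen in the example scripts below; the region and CPU count are illustrative:

.. code-block:: python

   import sky

   # Sketch: the resource request behind a [spot] replica row above --
   # a 2+ vCPU spot instance pinned to GCP us-central1.
   spot_replica = sky.Resources(infra='gcp/us-central1', cpus='2+', use_spot=True)
   print(spot_replica)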
diff --git a/docs/source/serving/update.rst b/docs/source/serving/update.rst
index ca4f5ddb0ba..72c96c30b20 100644
--- a/docs/source/serving/update.rst
+++ b/docs/source/serving/update.rst
@@ -57,9 +57,9 @@ We can use :code:`sky serve status http-server` to check the status of the servi
http-server 1 1m 41s READY 2/2 44.206.240.249:30002
Service Replicas
- SERVICE_NAME ID VERSION IP LAUNCHED RESOURCES STATUS REGION
- http-server 1 1 54.173.203.169 2 mins ago 1x AWS(vCPU=2) READY us-east-1
- http-server 2 1 52.87.241.103 2 mins ago 1x AWS(vCPU=2) READY us-east-1
+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS
+ http-server 1 1 http://54.173.203.169:8081 2 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) READY
+ http-server 2 1 http://52.87.241.103:8081 2 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) READY
Service ``http-server`` has an initial version of 1.
@@ -102,12 +102,12 @@ SkyServe will trigger launching three new replicas.
http-server 2 6m 15s READY 2/5 44.206.240.249:30002
Service Replicas
- SERVICE_NAME ID VERSION IP LAUNCHED RESOURCES STATUS REGION
- http-server 1 1 54.173.203.169 6 mins ago 1x AWS(vCPU=2) READY us-east-1
- http-server 2 1 52.87.241.103 6 mins ago 1x AWS(vCPU=2) READY us-east-1
- http-server 3 2 - 21 secs ago 1x AWS(vCPU=2) PROVISIONING us-east-1
- http-server 4 2 - 21 secs ago 1x AWS(vCPU=2) PROVISIONING us-east-1
- http-server 5 2 - 21 secs ago 1x AWS(vCPU=2) PROVISIONING us-east-1
+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS
+ http-server 1 1 http://54.173.203.169:8081 6 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) READY
+ http-server 2 1 http://52.87.241.103:8081 6 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) READY
+ http-server 3 2 - 21 secs ago AWS (us-east-1b) 1x(cpus=2, mem=8, m5.large, ...) PROVISIONING
+ http-server 4 2 - 21 secs ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) PROVISIONING
+ http-server 5 2 - 21 secs ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) PROVISIONING
Whenever a new replica is ready, the traffic will be redirected to both old and new replicas.
@@ -121,12 +121,12 @@ Whenever a new replica is ready, the traffic will be redirected to both old and
http-server 1,2 10m 4s READY 3/5 44.206.240.249:30002
Service Replicas
- SERVICE_NAME ID VERSION IP LAUNCHED RESOURCES STATUS REGION
- http-server 1 1 54.173.203.169 10 mins ago 1x AWS(vCPU=2) READY us-east-1
- http-server 2 1 52.87.241.103 10 mins ago 1x AWS(vCPU=2) READY us-east-1
- http-server 3 2 3.93.241.163 1 min ago 1x AWS(vCPU=2) READY us-east-1
- http-server 4 2 - 1 min ago 1x AWS(vCPU=2) PROVISIONING us-east-1
- http-server 5 2 - 1 min ago 1x AWS(vCPU=2) PROVISIONING us-east-1
+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS
+ http-server 1 1 http://54.173.203.169:8081 10 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) READY
+ http-server 2 1 http://52.87.241.103:8081 10 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) READY
+ http-server 3 2 http://3.93.241.163:8081 1 min ago AWS (us-east-1b) 1x(cpus=2, mem=8, m5.large, ...) READY
+ http-server 4 2 - 1 min ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) PROVISIONING
+ http-server 5 2 - 1 min ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) PROVISIONING
Once the total number of both old and new replicas exceeds the requested number, old replicas will be scaled down.
@@ -140,12 +140,13 @@ Once the total number of both old and new replicas exceeds the requested number,
http-server 1,2 10m 4s READY 3/5 44.206.240.249:30002
Service Replicas
- SERVICE_NAME ID VERSION IP LAUNCHED RESOURCES STATUS REGION
- http-server 1 1 54.173.203.169 10 mins ago 1x AWS(vCPU=2) SHUTTING_DOWN us-east-1
- http-server 2 1 52.87.241.103 10 mins ago 1x AWS(vCPU=2) READY us-east-1
- http-server 3 2 3.93.241.163 1 min ago 1x AWS(vCPU=2) READY us-east-1
- http-server 4 2 18.206.226.82 1 min ago 1x AWS(vCPU=2) READY us-east-1
- http-server 5 2 - 1 min ago 1x AWS(vCPU=2) PROVISIONING us-east-1
+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS
+ http-server 1 1 http://54.173.203.169:8081 10 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) READY
+ http-server 2 1 http://52.87.241.103:8081 10 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) READY
+ http-server 3 2 http://3.93.241.163:8081 1 min ago AWS (us-east-1b) 1x(cpus=2, mem=8, m5.large, ...) READY
+ http-server 4 2 - 1 min ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) PROVISIONING
+ http-server 5 2 - 1 min ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) PROVISIONING
+
Eventually, we will only have new replicas ready to serve user requests.
@@ -158,10 +159,10 @@ Eventually, we will only have new replicas ready to serve user requests.
http-server 2 11m 42s READY 3/3 44.206.240.249:30002
Service Replicas
- SERVICE_NAME ID VERSION IP LAUNCHED RESOURCES STATUS REGION
- http-server 3 2 3.93.241.163 3 mins ago 1x AWS(vCPU=2) READY us-east-1
- http-server 4 2 18.206.226.82 3 mins ago 1x AWS(vCPU=2) READY us-east-1
- http-server 5 2 3.26.232.31 1 min ago 1x AWS(vCPU=2) READY us-east-1
+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS
+ http-server 3 2 http://3.93.241.163:8081 3 mins ago AWS (us-east-1b) 1x(cpus=2, mem=8, m5.large, ...) READY
+ http-server 4 2 http://18.206.226.82:8081 3 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) READY
+ http-server 5 2 http://3.26.232.31:8081 1 min ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) READY
@@ -210,12 +211,12 @@ SkyServe will trigger launching three new replicas.
http-server 2 6m 15s READY 2/5 44.206.240.249:30002
Service Replicas
- SERVICE_NAME ID VERSION IP LAUNCHED RESOURCES STATUS REGION
- http-server 1 1 54.173.203.169 6 mins ago 1x AWS(vCPU=2) READY us-east-1
- http-server 2 1 52.87.241.103 6 mins ago 1x AWS(vCPU=2) READY us-east-1
- http-server 3 2 - 21 secs ago 1x AWS(vCPU=2) PROVISIONING us-east-1
- http-server 4 2 - 21 secs ago 1x AWS(vCPU=2) PROVISIONING us-east-1
- http-server 5 2 - 21 secs ago 1x AWS(vCPU=2) PROVISIONING us-east-1
+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS
+ http-server 1 1 http://54.173.203.169:8081 6 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) READY
+ http-server 2 1 http://52.87.241.103:8081 6 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) READY
+ http-server 3 2 - 21 secs ago AWS (us-east-1b) 1x(cpus=2, mem=8, m5.large, ...) PROVISIONING
+ http-server 4 2 - 21 secs ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) PROVISIONING
+ http-server 5 2 - 21 secs ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) PROVISIONING
When a new replica is ready, the traffic will still be redirected to old replicas.
@@ -229,12 +230,12 @@ When a new replica is ready, the traffic will still be redirected to old replica
http-server 1 10m 4s READY 3/5 44.206.240.249:30002
Service Replicas
- SERVICE_NAME ID VERSION IP LAUNCHED RESOURCES STATUS REGION
- http-server 1 1 54.173.203.169 10 mins ago 1x AWS(vCPU=2) READY us-east-1
- http-server 2 1 52.87.241.103 10 mins ago 1x AWS(vCPU=2) READY us-east-1
- http-server 3 2 3.93.241.163 1 min ago 1x AWS(vCPU=4) READY us-east-1
- http-server 4 2 - 1 min ago 1x AWS(vCPU=4) PROVISIONING us-east-1
- http-server 5 2 - 1 min ago 1x AWS(vCPU=4) PROVISIONING us-east-1
+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS
+ http-server 1 1 http://54.173.203.169:8081 10 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) SHUTTING_DOWN
+ http-server 2 1 http://52.87.241.103:8081 10 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) READY
+ http-server 3 2 http://3.93.241.163:8081 1 min ago AWS (us-east-1b) 1x(cpus=4, mem=16, m5.xlarge, ...) READY
+ http-server 4 2 http://18.206.226.82:8081 1 min ago AWS (us-east-1a) 1x(cpus=4, mem=16, m5.xlarge, ...) READY
+ http-server 5 2 - 1 min ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) PROVISIONING
Once the total number of new replicas satisfies the requirements, traffic will be redirected to new replicas and old replicas will be scaled down.
@@ -248,12 +249,12 @@ Once the total number of new replicas satisfies the requirements, traffics will
http-server 2 10m 4s READY 3/5 44.206.240.249:30002
Service Replicas
- SERVICE_NAME ID VERSION IP LAUNCHED RESOURCES STATUS REGION
- http-server 1 1 54.173.203.169 10 mins ago 1x AWS(vCPU=2) SHUTTING_DOWN us-east-1
- http-server 2 1 52.87.241.103 10 mins ago 1x AWS(vCPU=2) SHUTTING_DOWN us-east-1
- http-server 3 2 3.93.241.163 1 min ago 1x AWS(vCPU=4) READY us-east-1
- http-server 4 2 18.206.226.82 1 min ago 1x AWS(vCPU=4) READY us-east-1
- http-server 5 2 3.26.232.31 1 min ago 1x AWS(vCPU=4) READY us-east-1
+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS
+ http-server 1 1 http://54.173.203.169:8081 10 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) SHUTTING_DOWN
+ http-server 2 1 http://52.87.241.103:8081 10 mins ago AWS (us-east-1a) 1x(cpus=2, mem=8, m5.large, ...) SHUTTING_DOWN
+ http-server 3 2 http://3.93.241.163:8081 1 min ago AWS (us-east-1b) 1x(cpus=4, mem=16, m5.xlarge, ...) READY
+ http-server 4 2 http://18.206.226.82:8081 1 min ago AWS (us-east-1a) 1x(cpus=4, mem=16, m5.xlarge, ...) READY
+ http-server 5 2 http://3.26.232.31:8081 1 min ago AWS (us-east-1a) 1x(cpus=4, mem=16, m5.xlarge, ...) READY
Eventually, same as the rolling update, we will only have new replicas ready to serve user requests.
@@ -266,7 +267,7 @@ Eventually, same as the rolling update, we will only have new replicas ready to
http-server 2 11m 42s READY 3/3 44.206.240.249:30002
Service Replicas
- SERVICE_NAME ID VERSION IP LAUNCHED RESOURCES STATUS REGION
- http-server 3 2 3.93.241.163 3 mins ago 1x AWS(vCPU=4) READY us-east-1
- http-server 4 2 18.206.226.82 3 mins ago 1x AWS(vCPU=4) READY us-east-1
- http-server 5 2 3.26.232.31 1 min ago 1x AWS(vCPU=4) READY us-east-1
+ SERVICE_NAME ID VERSION ENDPOINT LAUNCHED INFRA RESOURCES STATUS
+ http-server 3 2 http://3.93.241.163:8081 3 mins ago AWS (us-east-1b) 1x(cpus=4, mem=16, m5.xlarge, ...) READY
+ http-server 4 2 http://18.206.226.82:8081 3 mins ago AWS (us-east-1a) 1x(cpus=4, mem=16, m5.xlarge, ...) READY
+ http-server 5 2 http://3.26.232.31:8081 1 min ago AWS (us-east-1a) 1x(cpus=4, mem=16, m5.xlarge, ...) READY
diff --git a/examples/admin_policy/task.yaml b/examples/admin_policy/task.yaml
index 065b4cbfb11..d3d4789c7ee 100644
--- a/examples/admin_policy/task.yaml
+++ b/examples/admin_policy/task.yaml
@@ -1,5 +1,5 @@
resources:
- cloud: aws
+ infra: aws
cpus: 2
labels:
other_labels: test
diff --git a/examples/autogluon.yaml b/examples/autogluon.yaml
index 00e5804f809..66093004b5e 100644
--- a/examples/autogluon.yaml
+++ b/examples/autogluon.yaml
@@ -1,5 +1,5 @@
resources:
- cloud: gcp
+ infra: gcp
setup: |
git clone https://github.com/autogluon/autogluon.git
diff --git a/examples/aws_efa/nccl_efa.yaml b/examples/aws_efa/nccl_efa.yaml
index de6212a1c52..c73f1c01beb 100644
--- a/examples/aws_efa/nccl_efa.yaml
+++ b/examples/aws_efa/nccl_efa.yaml
@@ -1,7 +1,7 @@
name: nccl-test-efa
resources:
- cloud: kubernetes
+ infra: kubernetes
accelerators: A100:8
cpus: 90+
image_id: docker:public.ecr.aws/hpc-cloud/nccl-tests:latest
diff --git a/examples/azure_start_stop.yaml b/examples/azure_start_stop.yaml
index f6337267c1c..33dcbdfd187 100644
--- a/examples/azure_start_stop.yaml
+++ b/examples/azure_start_stop.yaml
@@ -2,7 +2,7 @@
name: azure-start-stop
resources:
- cloud: azure
+ infra: azure
# Optimizing for smoke tests
# 2 nodes: smoke tests ~37 mins
diff --git a/examples/containerized_app.py b/examples/containerized_app.py
index be58de3152b..3188145a5f9 100644
--- a/examples/containerized_app.py
+++ b/examples/containerized_app.py
@@ -22,6 +22,6 @@
with sky.Dag() as dag:
t = sky.Task(run=run_command, setup=setup_cmd)
- t.set_resources(sky.Resources(sky.AWS(), accelerators='V100'))
+ t.set_resources(sky.Resources(infra='aws', accelerators='V100'))
sky.launch(dag)
diff --git a/examples/custom_image.yaml b/examples/custom_image.yaml
index 535b91bfa4e..602985aa955 100644
--- a/examples/custom_image.yaml
+++ b/examples/custom_image.yaml
@@ -1,6 +1,5 @@
resources:
- cloud: aws
- region: us-east-2
+ infra: aws/us-east-2
# Nvidia image from
# https://aws.amazon.com/marketplace/pp/prodview-rf7na2b2ttvdg
image_id: ami-062ddd90fb6f8267a
diff --git a/examples/disk_size.yaml b/examples/disk_size.yaml
index 7384533b17c..eb97a978bc3 100644
--- a/examples/disk_size.yaml
+++ b/examples/disk_size.yaml
@@ -9,7 +9,7 @@
name: minimal
resources:
- cloud: azure
+ infra: azure
disk_size: 512
setup: |
diff --git a/examples/dvc/dvc_pipeline.yaml b/examples/dvc/dvc_pipeline.yaml
index e3ff3bce8bb..1a377e55e7a 100644
--- a/examples/dvc/dvc_pipeline.yaml
+++ b/examples/dvc/dvc_pipeline.yaml
@@ -2,8 +2,8 @@
name: dvc-pipeline
resources:
accelerators: T4:1
- cloud: aws
- region: us-east-2
+ infra: aws/us-east-2
+
workdir: .
file_mounts:
~/.ssh/id_rsa: ~/.ssh/id_rsa
@@ -18,4 +18,4 @@ run: |
# run DVC pipeline as an experiment
dvc exp run --pull --allow-missing
# push experiment results to DVC remote
- dvc exp push origin
\ No newline at end of file
+ dvc exp push origin
diff --git a/examples/example_app.py b/examples/example_app.py
index 82162d11ac3..c86c123c13c 100644
--- a/examples/example_app.py
+++ b/examples/example_app.py
@@ -40,10 +40,12 @@ def make_application():
train_op.set_outputs('CLOUD://my-model', estimated_size_gigabytes=0.1)
train_op.set_resources({
- sky.Resources(sky.AWS(), 'p3.2xlarge'), # 1 V100, EC2.
- sky.Resources(sky.AWS(), 'p3.8xlarge'), # 4 V100s, EC2.
+ sky.Resources(infra='aws',
+ instance_type='p3.2xlarge'), # 1 V100, EC2.
+ sky.Resources(infra='aws',
+ instance_type='p3.8xlarge'), # 4 V100s, EC2.
# Tuples mean all resources are required.
- sky.Resources(sky.GCP(), accelerators='tpu-v3-8'),
+ sky.Resources(infra='gcp', accelerators='tpu-v3-8'),
})
train_op.set_time_estimator(time_estimators.resnet50_estimate_runtime)
@@ -58,10 +60,14 @@ def make_application():
estimated_size_gigabytes=0.1)
infer_op.set_resources({
- sky.Resources(sky.AWS(), 'inf1.2xlarge'),
- sky.Resources(sky.AWS(), 'p3.2xlarge'),
- sky.Resources(sky.GCP(), 'n1-standard-4', accelerators='T4'),
- sky.Resources(sky.GCP(), 'n1-standard-8', accelerators='T4'),
+ sky.Resources(infra='aws', instance_type='inf1.2xlarge'),
+ sky.Resources(infra='aws', instance_type='p3.2xlarge'),
+ sky.Resources(infra='gcp',
+ instance_type='n1-standard-4',
+ accelerators='T4'),
+ sky.Resources(infra='gcp',
+ instance_type='n1-standard-8',
+ accelerators='T4'),
})
infer_op.set_time_estimator(
diff --git a/examples/gcp_gpu_direct_tcpx/gpu_direct_tcpx.yaml b/examples/gcp_gpu_direct_tcpx/gpu_direct_tcpx.yaml
index 23a205cf810..f74c5af7017 100644
--- a/examples/gcp_gpu_direct_tcpx/gpu_direct_tcpx.yaml
+++ b/examples/gcp_gpu_direct_tcpx/gpu_direct_tcpx.yaml
@@ -1,7 +1,7 @@
name: nccl-gpu-direct-tcpx
resources:
- cloud: gcp
+ infra: gcp
instance_type: a3-highgpu-8g
image_id: docker:us-docker.pkg.dev/gce-ai-infra/gpudirect-tcpx/nccl-plugin-gpudirecttcpx
diff --git a/examples/gcp_start_stop.yaml b/examples/gcp_start_stop.yaml
index 507e75eb0ac..cdd833addbf 100644
--- a/examples/gcp_start_stop.yaml
+++ b/examples/gcp_start_stop.yaml
@@ -2,7 +2,7 @@
name: gcp-start-stop
resources:
- cloud: gcp
+ infra: gcp
num_nodes: 2
diff --git a/examples/horovod_distributed_tf_app.py b/examples/horovod_distributed_tf_app.py
index 273f653a710..9dd23485b3a 100644
--- a/examples/horovod_distributed_tf_app.py
+++ b/examples/horovod_distributed_tf_app.py
@@ -55,7 +55,7 @@ def run_fn(ip_list: List[IPAddr]) -> Dict[IPAddr, str]:
estimated_size_gigabytes=70)
train.set_outputs('resnet-model-dir', estimated_size_gigabytes=0.1)
train.set_resources({
- sky.Resources(sky.AWS(), 'p3.2xlarge'),
+ sky.Resources(infra='aws', instance_type='p3.2xlarge'),
})
dag = sky.Optimizer.optimize(dag)
diff --git a/examples/huggingface_glue_imdb_grid_search_app.py b/examples/huggingface_glue_imdb_grid_search_app.py
index 89965f62fa2..4fc7b04dd9e 100644
--- a/examples/huggingface_glue_imdb_grid_search_app.py
+++ b/examples/huggingface_glue_imdb_grid_search_app.py
@@ -1,7 +1,7 @@
"""Grid search version of huggingface_glue_imdb_app.py."""
import sky
-resources_to_launch = sky.Resources(sky.AWS(), accelerators={'V100': 4})
+resources_to_launch = sky.Resources(infra='aws', accelerators={'V100': 4})
with sky.Dag() as dag:
# Setup command, run once (pip, download dataset).
common_setup = """\
diff --git a/examples/image_with_tag.yaml b/examples/image_with_tag.yaml
index 480cec69e99..7b18406bc38 100644
--- a/examples/image_with_tag.yaml
+++ b/examples/image_with_tag.yaml
@@ -1,5 +1,5 @@
resources:
- cloud: aws
+ infra: aws
image_id: skypilot:gpu-ubuntu-1804
diff --git a/examples/k8s_cloud_deploy/README.md b/examples/k8s_cloud_deploy/README.md
index efe2586a6ee..15fbe6182c0 100644
--- a/examples/k8s_cloud_deploy/README.md
+++ b/examples/k8s_cloud_deploy/README.md
@@ -21,7 +21,7 @@ pip install "skypilot-nightly[lambda,kubernetes]"
1. Edit `cloud_k8s.yaml` to set the desired number of workers and GPUs per node. If using GCP, AWS or Azure, uncomment the ports line to allow inbound connections to the Kubernetes API server.
```yaml
resources:
- cloud: lambda
+ infra: lambda
accelerators: A10:1
# ports: 6443
diff --git a/examples/k8s_cloud_deploy/cloud_k8s.yaml b/examples/k8s_cloud_deploy/cloud_k8s.yaml
index 2db46fb502b..dd8aeffe2f9 100644
--- a/examples/k8s_cloud_deploy/cloud_k8s.yaml
+++ b/examples/k8s_cloud_deploy/cloud_k8s.yaml
@@ -1,5 +1,5 @@
resources:
- cloud: lambda
+ infra: lambda
accelerators: A10:1
# Uncomment the following line to expose ports on a different cloud
# ports: 6443
diff --git a/examples/managed_job_with_storage.yaml b/examples/managed_job_with_storage.yaml
index 77b69485269..41d3648e074 100644
--- a/examples/managed_job_with_storage.yaml
+++ b/examples/managed_job_with_storage.yaml
@@ -7,7 +7,7 @@
# sky down spot-storage
resources:
- cloud: aws
+ infra: aws
use_spot: true
job_recovery: failover
diff --git a/examples/many_gpu_vms.yaml b/examples/many_gpu_vms.yaml
index 453392cdeb1..6a2789242a9 100644
--- a/examples/many_gpu_vms.yaml
+++ b/examples/many_gpu_vms.yaml
@@ -1,7 +1,7 @@
name: many_gpu_vms
resources:
- cloud: aws
+ infra: aws
accelerators: V100:8
# use_spot: true
diff --git a/examples/minimal.yaml b/examples/minimal.yaml
index e76182c114a..89357a86112 100644
--- a/examples/minimal.yaml
+++ b/examples/minimal.yaml
@@ -9,7 +9,7 @@
name: minimal
resources:
- cloud: aws
+ infra: aws
setup: |
echo "running setup"
diff --git a/examples/mpirun.yaml b/examples/mpirun.yaml
index 4ec7ce0107c..d002b63e985 100644
--- a/examples/mpirun.yaml
+++ b/examples/mpirun.yaml
@@ -1,7 +1,7 @@
workdir: .
resources:
- cloud: aws
+ infra: aws
num_nodes: 2 # Total number of nodes (1 head + 1 worker)
diff --git a/examples/multi_echo.py b/examples/multi_echo.py
index 2512fc3a437..1bab8bce523 100644
--- a/examples/multi_echo.py
+++ b/examples/multi_echo.py
@@ -9,7 +9,7 @@
def run(cluster: Optional[str] = None,
- cloud: Optional[str] = None,
+ infra: Optional[str] = None,
use_spot: bool = True):
if cluster is None:
# (username, last 4 chars of hash of hostname): for uniquefying users on
@@ -19,14 +19,13 @@ def run(cluster: Optional[str] = None,
_user_and_host = f'{getpass.getuser()}-{hostname_hash}'
cluster = f'test-multi-echo-{_user_and_host}'
- if cloud is None:
- cloud = 'gcp'
- cloud = sky.CLOUD_REGISTRY.from_str(cloud)
+ if infra is None:
+ infra = 'gcp'
# Create the cluster.
with sky.Dag() as dag:
cluster_resources = sky.Resources(
- cloud,
+ infra=infra,
# We need to set CPUs to 5+ so that the total number of RUNNING jobs
# is not limited by the number of CPU cores (5 x 2 x 2 = 20).
cpus='5+',
@@ -56,13 +55,13 @@ def _exec(i):
if __name__ == '__main__':
cluster = None
- cloud = None
+ infra = None
use_spot = True
if len(sys.argv) > 1:
# For smoke test passing in a cluster name.
cluster = sys.argv[1]
if len(sys.argv) > 2:
- cloud = sys.argv[2]
+ infra = sys.argv[2]
if len(sys.argv) > 3:
use_spot = sys.argv[3] == '1'
- run(cluster, cloud, use_spot)
+ run(cluster, infra, use_spot)
diff --git a/examples/multi_hostname.py b/examples/multi_hostname.py
index 2c03a46fa19..e44a60e6d29 100644
--- a/examples/multi_hostname.py
+++ b/examples/multi_hostname.py
@@ -6,6 +6,6 @@
# My hostname: <host1>
# My hostname: <host2>
sky.Task(run='echo My hostname: $(hostname)',
- num_nodes=2).set_resources(sky.Resources(sky.AWS()))
+ num_nodes=2).set_resources(sky.Resources(infra='aws'))
sky.launch(dag)
diff --git a/examples/multi_resources.yaml b/examples/multi_resources.yaml
index 56656b7cd1b..11f7c3eb23c 100644
--- a/examples/multi_resources.yaml
+++ b/examples/multi_resources.yaml
@@ -2,16 +2,16 @@ name: multi-resources
resources:
ordered:
- - cloud: AWS
+ - infra: aws
accelerators: A10g
- - cloud: GCP
+ - infra: gcp
accelerators: L4
# resources:
# any_of:
- # - cloud: AWS
+ # - infra: aws
# accelerators: A10g
- # - cloud: GCP
+ # - infra: gcp
# accelerators: L4
run: |
diff --git a/examples/oci/dataset-mount.yaml b/examples/oci/dataset-mount.yaml
index 1f62360a5a3..96a34c72af5 100644
--- a/examples/oci/dataset-mount.yaml
+++ b/examples/oci/dataset-mount.yaml
@@ -1,8 +1,7 @@
name: cpu-task1
resources:
- cloud: oci
- region: us-sanjose-1
+ infra: oci/us-sanjose-1
cpus: 2
disk_size: 256
disk_tier: medium
diff --git a/examples/oci/dataset-upload-and-mount.yaml b/examples/oci/dataset-upload-and-mount.yaml
index 13ddc4d2b35..b28e754c126 100644
--- a/examples/oci/dataset-upload-and-mount.yaml
+++ b/examples/oci/dataset-upload-and-mount.yaml
@@ -1,8 +1,7 @@
name: cpu-task1
resources:
- cloud: oci
- region: us-sanjose-1
+ infra: oci/us-sanjose-1
cpus: 2
disk_size: 256
disk_tier: medium
diff --git a/examples/oci/gpu-oraclelinux9.yaml b/examples/oci/gpu-oraclelinux9.yaml
index cc7b05ea0fc..4d24d6c9526 100644
--- a/examples/oci/gpu-oraclelinux9.yaml
+++ b/examples/oci/gpu-oraclelinux9.yaml
@@ -2,7 +2,7 @@ name: gpu-task
resources:
# Optional; if left out, automatically pick the cheapest cloud.
- cloud: oci
+ infra: oci
accelerators: A10:1
diff --git a/examples/oci/gpu-ubuntu-2204.yaml b/examples/oci/gpu-ubuntu-2204.yaml
index e0012a31a1a..b9fb1b35986 100644
--- a/examples/oci/gpu-ubuntu-2204.yaml
+++ b/examples/oci/gpu-ubuntu-2204.yaml
@@ -2,7 +2,7 @@ name: gpu-task
resources:
# Optional; if left out, automatically pick the cheapest cloud.
- cloud: oci
+ infra: oci
accelerators: A10:1
diff --git a/examples/oci/oci-mounts.yaml b/examples/oci/oci-mounts.yaml
index 6fd2aaf16eb..0d675fb3fe2 100644
--- a/examples/oci/oci-mounts.yaml
+++ b/examples/oci/oci-mounts.yaml
@@ -1,5 +1,5 @@
resources:
- cloud: oci
+ infra: oci
file_mounts:
~/tmpfile: ~/tmpfile
diff --git a/examples/oci/oci_cpu-sky-preemptible.yaml b/examples/oci/oci_cpu-sky-preemptible.yaml
index fb1c6e5f838..0d504a30ec4 100644
--- a/examples/oci/oci_cpu-sky-preemptible.yaml
+++ b/examples/oci/oci_cpu-sky-preemptible.yaml
@@ -2,12 +2,8 @@ name: cpu-task2
resources:
# Optional; if left out, automatically pick the cheapest cloud.
- cloud: oci
+ infra: oci/ap-seoul-1
- region: ap-seoul-1
-
- # zone: AP-SEOUL-1-AD-1
-
instance_type: VM.Standard.E4.Flex$_2_16
cpus: 2
diff --git a/examples/oci/oci_cpu-sky.yaml b/examples/oci/oci_cpu-sky.yaml
index 41367a0700b..5a14f130ad8 100644
--- a/examples/oci/oci_cpu-sky.yaml
+++ b/examples/oci/oci_cpu-sky.yaml
@@ -2,12 +2,8 @@ name: cpu-task1
resources:
# Optional; if left out, automatically pick the cheapest cloud.
- cloud: oci
+ infra: oci/ap-seoul-1
- region: ap-seoul-1
-
- # zone: AP-SEOUL-1-AD-1
-
instance_type: VM.Standard.E4.Flex$_2_16
cpus: 2
diff --git a/examples/oci/oci_gpu-sky.yaml b/examples/oci/oci_gpu-sky.yaml
index a3592145c89..ca05beb26a9 100644
--- a/examples/oci/oci_gpu-sky.yaml
+++ b/examples/oci/oci_gpu-sky.yaml
@@ -2,14 +2,10 @@ name: gpu-task1
resources:
# Optional; if left out, automatically pick the cheapest cloud.
- cloud: oci
+ infra: oci/ap-seoul-1
accelerators: A10:1 # 1x NVIDIA A10 GPU
- region: ap-seoul-1
-
- # zone: AP-SEOUL-1-AD-1
-
# instance_type: VM.GPU.A10.1
# image_id: skypilot:gpu-ubuntu-2004
diff --git a/examples/oci/serve-http-cpu.yaml b/examples/oci/serve-http-cpu.yaml
index 68e3d18c9e5..011b58ff10f 100644
--- a/examples/oci/serve-http-cpu.yaml
+++ b/examples/oci/serve-http-cpu.yaml
@@ -3,8 +3,7 @@ service:
replicas: 2
resources:
- cloud: oci
- region: us-sanjose-1
+ infra: oci/us-sanjose-1
ports: 8080
cpus: 2+
diff --git a/examples/oci/serve-qwen-7b.yaml b/examples/oci/serve-qwen-7b.yaml
index 004e912b088..d0a5d1f014d 100644
--- a/examples/oci/serve-qwen-7b.yaml
+++ b/examples/oci/serve-qwen-7b.yaml
@@ -5,8 +5,7 @@ service:
# Fields below describe each replica.
resources:
- cloud: oci
- region: us-sanjose-1
+ infra: oci/us-sanjose-1
ports: 8080
accelerators: {A10:1}
diff --git a/examples/per_region_images.yaml b/examples/per_region_images.yaml
index 99bc6e4f0c5..4e0e470969f 100644
--- a/examples/per_region_images.yaml
+++ b/examples/per_region_images.yaml
@@ -1,5 +1,5 @@
resources:
- cloud: aws
+ infra: aws
instance_type: g4dn.xlarge
image_id:
us-west-2: skypilot:gpu-ubuntu-1804
diff --git a/examples/perf/storage_rawperf.yaml b/examples/perf/storage_rawperf.yaml
index 982a1e7c43a..cc6263c712d 100644
--- a/examples/perf/storage_rawperf.yaml
+++ b/examples/perf/storage_rawperf.yaml
@@ -17,7 +17,7 @@
name: storage-demo
resources:
- cloud: aws
+ infra: aws
instance_type: m5.8xlarge
file_mounts:
diff --git a/examples/playground/min_fail.yaml b/examples/playground/min_fail.yaml
index 215f3268855..bd64ba3b0fc 100644
--- a/examples/playground/min_fail.yaml
+++ b/examples/playground/min_fail.yaml
@@ -1,5 +1,5 @@
resources:
- cloud: aws
+ infra: aws
setup: |
echo "running setup"
diff --git a/examples/playground/min_progress_bar.yaml b/examples/playground/min_progress_bar.yaml
index 06ba3b027e0..43499f0bf4c 100644
--- a/examples/playground/min_progress_bar.yaml
+++ b/examples/playground/min_progress_bar.yaml
@@ -1,5 +1,5 @@
resources:
- cloud: aws
+ infra: aws
setup: |
echo "running setup"
diff --git a/examples/playground/symlink_playground.yaml b/examples/playground/symlink_playground.yaml
index 398373af85c..d53753f3efa 100644
--- a/examples/playground/symlink_playground.yaml
+++ b/examples/playground/symlink_playground.yaml
@@ -4,7 +4,7 @@
name: symlink-playground
resources:
- cloud: aws
+ infra: aws
instance_type: m5.2xlarge
# Symlink: ln -s [data_path] ~/Downloads/temp1
diff --git a/examples/ray_tune_app.py b/examples/ray_tune_app.py
index 1993eb7e7d4..b08756ede0b 100644
--- a/examples/ray_tune_app.py
+++ b/examples/ray_tune_app.py
@@ -30,7 +30,7 @@ def run_fn(node_rank: int, ip_list: List[str]) -> Optional[str]:
)
train.set_resources({
- sky.Resources(sky.AWS(), 'p3.2xlarge'),
+ sky.Resources(infra='aws', instance_type='p3.2xlarge'),
})
sky.launch(dag)
diff --git a/examples/ray_tune_app.yaml b/examples/ray_tune_app.yaml
index 96146b1ee2d..9c7bae9b099 100644
--- a/examples/ray_tune_app.yaml
+++ b/examples/ray_tune_app.yaml
@@ -1,5 +1,5 @@
resources:
- cloud: aws
+ infra: aws
accelerators: V100
num_nodes: 2
diff --git a/examples/resnet_app.py b/examples/resnet_app.py
index 17ebf9fa5d6..c7f43744ca3 100644
--- a/examples/resnet_app.py
+++ b/examples/resnet_app.py
@@ -68,10 +68,10 @@
task.set_outputs('resnet-model-dir', estimated_size_gigabytes=0.1)
task.set_resources({
##### Fully specified
- # sky.Resources(sky.AWS(), 'p3.2xlarge'),
- # sky.Resources(sky.GCP(), 'n1-standard-16'),
+ # sky.Resources(infra='aws', instance_type='p3.2xlarge'),
+ # sky.Resources(infra='gcp', instance_type='n1-standard-16'),
# sky.Resources(
- # sky.GCP(),
+ # infra='gcp',
# 'n1-standard-8',
# # Options: 'V100', {'V100': <num>}.
# 'V100',
@@ -79,16 +79,16 @@
##### Partially specified
# sky.Resources(accelerators='T4'),
# sky.Resources(accelerators={'T4': 8}, use_spot=True),
- # sky.Resources(sky.AWS(), accelerators={'T4': 8}, use_spot=True),
- # sky.Resources(sky.AWS(), accelerators='K80'),
- # sky.Resources(sky.AWS(), accelerators='K80', use_spot=True),
+ # sky.Resources(infra='aws', accelerators={'T4': 8}, use_spot=True),
+ # sky.Resources(infra='aws', accelerators='K80'),
+ # sky.Resources(infra='aws', accelerators='K80', use_spot=True),
# sky.Resources(accelerators='tpu-v3-8'),
# sky.Resources(accelerators='V100', use_spot=True),
# sky.Resources(accelerators={'T4': 4}),
- sky.Resources(sky.AWS(), accelerators='V100'),
- # sky.Resources(sky.GCP(), accelerators={'V100': 4}),
- # sky.Resources(sky.AWS(), accelerators='V100', use_spot=True),
- # sky.Resources(sky.AWS(), accelerators={'V100': 8}),
+ sky.Resources(infra='aws', accelerators='V100'),
+ # sky.Resources(infra='gcp', accelerators={'V100': 4}),
+ # sky.Resources(infra='aws', accelerators='V100', use_spot=True),
+ # sky.Resources(infra='aws', accelerators={'V100': 8}),
})
# Optionally, specify a time estimator: Resources -> time in seconds.
diff --git a/examples/resnet_app.yaml b/examples/resnet_app.yaml
index 4a37d332415..473dcea173c 100644
--- a/examples/resnet_app.yaml
+++ b/examples/resnet_app.yaml
@@ -1,7 +1,7 @@
name: resnet-app
resources:
- cloud: aws
+ infra: aws
accelerators:
V100: 1
diff --git a/examples/resnet_app_storage.py b/examples/resnet_app_storage.py
index 9d8063ea6ab..707acecf11f 100644
--- a/examples/resnet_app_storage.py
+++ b/examples/resnet_app_storage.py
@@ -71,7 +71,7 @@
train.set_inputs('s3://imagenet-bucket', estimated_size_gigabytes=150)
train.set_outputs('resnet-model-dir', estimated_size_gigabytes=0.1)
train.set_resources({
- sky.Resources(sky.AWS(), 'p3.2xlarge'),
+ sky.Resources(infra='aws', instance_type='p3.2xlarge'),
})
sky.launch(dag)
diff --git a/examples/resnet_app_storage.yaml b/examples/resnet_app_storage.yaml
index 7a3ddd81b57..b6747ab5614 100644
--- a/examples/resnet_app_storage.yaml
+++ b/examples/resnet_app_storage.yaml
@@ -2,7 +2,7 @@ name: resnet-app-storage
workdir: ~/Downloads/tpu
resources:
- cloud: aws
+ infra: aws
instance_type: p3.2xlarge
inputs: {
diff --git a/examples/resnet_app_storage_spot.yaml b/examples/resnet_app_storage_spot.yaml
index 0d4a3fec840..27ed558b4fc 100644
--- a/examples/resnet_app_storage_spot.yaml
+++ b/examples/resnet_app_storage_spot.yaml
@@ -1,7 +1,7 @@
name: resnet-app-storage
resources:
- cloud: aws
+ infra: aws
accelerators: V100
use_spot: true
spot_recovery: failover
diff --git a/examples/resnet_distributed_tf_app.py b/examples/resnet_distributed_tf_app.py
index 62befbbb313..2df1705e386 100644
--- a/examples/resnet_distributed_tf_app.py
+++ b/examples/resnet_distributed_tf_app.py
@@ -7,7 +7,7 @@
import sky
-def run(cluster: Optional[str] = None, cloud: Optional[str] = None):
+def run(cluster: Optional[str] = None, infra: Optional[str] = None):
if cluster is None:
# (username, last 4 chars of hash of hostname): for uniquefying users on
# shared-account cloud providers.
@@ -75,19 +75,17 @@ def run_fn(node_rank: int, ip_list: List[str]) -> Optional[str]:
train.set_inputs('gs://cloud-tpu-test-datasets/fake_imagenet',
estimated_size_gigabytes=70)
train.set_outputs('resnet-model-dir', estimated_size_gigabytes=0.1)
- train.set_resources(
- sky.Resources(sky.CLOUD_REGISTRY.from_str(cloud),
- accelerators='V100'))
+ train.set_resources(sky.Resources(infra=infra, accelerators='V100'))
sky.launch(dag, cluster_name=cluster, retry_until_up=True)
if __name__ == '__main__':
cluster = None
- cloud = None
+ infra = None
if len(sys.argv) > 1:
# For smoke test passing in a cluster name.
cluster = sys.argv[1]
if len(sys.argv) > 2:
- cloud = sys.argv[2]
- run(cluster, cloud)
+ infra = sys.argv[2]
+ run(cluster, infra)
diff --git a/examples/resnet_distributed_torch_app.py b/examples/resnet_distributed_torch_app.py
index 1bc38886536..9b31419cf85 100644
--- a/examples/resnet_distributed_torch_app.py
+++ b/examples/resnet_distributed_torch_app.py
@@ -35,19 +35,19 @@ def run_fn(node_rank: int, ip_list: List[str]) -> Optional[str]:
train.set_resources({
##### Fully specified
- sky.Resources(sky.AWS(), 'p3.2xlarge'),
- # sky.Resources(sky.GCP(), 'n1-standard-16'),
+ sky.Resources(infra='aws', instance_type='p3.2xlarge'),
+ # sky.Resources(infra='gcp', instance_type='n1-standard-16'),
#sky.Resources(
- # sky.GCP(),
- # 'n1-standard-8',
+ # infra='gcp',
+ # instance_type='n1-standard-8',
# Options: 'V100', {'V100': <num>}.
- # 'V100',
+ # accelerators='V100',
#),
##### Partially specified
#sky.Resources(accelerators='V100'),
# sky.Resources(accelerators='tpu-v3-8'),
- # sky.Resources(sky.AWS(), accelerators={'V100': 4}),
- # sky.Resources(sky.AWS(), accelerators='V100'),
+ # sky.Resources(infra='aws', accelerators={'V100': 4}),
+ # sky.Resources(infra='aws', accelerators='V100'),
})
sky.launch(train, cluster_name='dth')
diff --git a/examples/resnet_distributed_torch_with_script.yaml b/examples/resnet_distributed_torch_with_script.yaml
index a492e4878b3..b3a07b227d9 100644
--- a/examples/resnet_distributed_torch_with_script.yaml
+++ b/examples/resnet_distributed_torch_with_script.yaml
@@ -2,7 +2,7 @@ name: resnet-distributed-app
resources:
- cloud: aws
+ infra: aws
accelerators: V100
num_nodes: 2
diff --git a/examples/serve/spot_policy/dynamic_on_demand_fallback.yaml b/examples/serve/spot_policy/dynamic_on_demand_fallback.yaml
index faa5e9f08ff..d5dd35a189a 100644
--- a/examples/serve/spot_policy/dynamic_on_demand_fallback.yaml
+++ b/examples/serve/spot_policy/dynamic_on_demand_fallback.yaml
@@ -11,8 +11,8 @@ service:
resources:
any_of:
- - zone: us-central1-a
- - region: us-east1
+ - infra: gcp/*/us-central1-a
+ - infra: gcp/us-east1
ports: 8081
cpus: 2+
# use_spot is needed for ondemand fallback
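
For reference, a minimal sketch of how the two `infra` strings above are expected to decompose, using the `InfraInfo.from_str` helper introduced later in this patch; the wildcard-region behavior is an assumption inferred from the new CLI help text:

```python
from sky.utils import infra_utils

# 'gcp/*/us-central1-a': pin the zone while leaving the region unconstrained.
pinned_zone = infra_utils.InfraInfo.from_str('gcp/*/us-central1-a')
# Assumed result: cloud='gcp', region left open (wildcard), zone='us-central1-a'.

# 'gcp/us-east1': pin only the region.
pinned_region = infra_utils.InfraInfo.from_str('gcp/us-east1')
# Assumed result: cloud='gcp', region='us-east1', zone=None.
```
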
diff --git a/examples/spot/lightning_cifar10.yaml b/examples/spot/lightning_cifar10.yaml
index 2a9aa0c761c..b8fbb11bb7e 100644
--- a/examples/spot/lightning_cifar10.yaml
+++ b/examples/spot/lightning_cifar10.yaml
@@ -2,7 +2,7 @@ name: lit
resources:
accelerators: V100:1
- cloud: aws
+ infra: aws
use_spot: true
spot_recovery: FAILOVER
diff --git a/examples/spot/resnet.yaml b/examples/spot/resnet.yaml
index 54c13489f1a..7b439fff848 100644
--- a/examples/spot/resnet.yaml
+++ b/examples/spot/resnet.yaml
@@ -10,7 +10,7 @@ name: resnet
resources:
accelerators: V100
- cloud: aws
+ infra: aws
use_spot: true
spot_recovery: FAILOVER
diff --git a/examples/storage/checkpointed_training.yaml b/examples/storage/checkpointed_training.yaml
index 7d96e9634ca..c0fc8c6b2bd 100644
--- a/examples/storage/checkpointed_training.yaml
+++ b/examples/storage/checkpointed_training.yaml
@@ -20,7 +20,7 @@ name: resnet-distributed-app
resources:
accelerators: V100
- cloud: aws
+ infra: aws
num_nodes: 1
diff --git a/examples/storage/hostname_echo_demo.yaml b/examples/storage/hostname_echo_demo.yaml
index d90edbc9ebd..1769593f601 100644
--- a/examples/storage/hostname_echo_demo.yaml
+++ b/examples/storage/hostname_echo_demo.yaml
@@ -10,7 +10,7 @@
name: hostecho-demo
resources:
- cloud: aws
+ infra: aws
instance_type: m5.2xlarge
num_nodes: 2
diff --git a/examples/storage/pingpong.yaml b/examples/storage/pingpong.yaml
index fae72ab7f6a..345ade16162 100644
--- a/examples/storage/pingpong.yaml
+++ b/examples/storage/pingpong.yaml
@@ -14,7 +14,7 @@ name: pingpong
num_nodes: 2
resources:
- cloud: gcp
+ infra: gcp
file_mounts:
/sharedfs:
diff --git a/examples/tensorboard_app.py b/examples/tensorboard_app.py
index a5432ee7a6f..e181c8dac7f 100644
--- a/examples/tensorboard_app.py
+++ b/examples/tensorboard_app.py
@@ -19,7 +19,7 @@
cd models && pip install -e .)'
task = sky.Task('setup', workdir=workdir, setup=setup)
- task.set_resources(sky.Resources(sky.AWS(), accelerators={'V100': 1}))
+ task.set_resources(sky.Resources(infra='aws', accelerators={'V100': 1}))
sky.stream_and_get(sky.launch(dag, cluster_name='tb'))
# Run the training task.
diff --git a/examples/tensorflow_distributed/tf_distributed.yaml b/examples/tensorflow_distributed/tf_distributed.yaml
index beb6ad4b96e..0d59a538c30 100644
--- a/examples/tensorflow_distributed/tf_distributed.yaml
+++ b/examples/tensorflow_distributed/tf_distributed.yaml
@@ -7,7 +7,7 @@
# sky down myclus
resources:
- cloud: gcp
+ infra: gcp
accelerators: V100:1 # Provision 1 V100 GPU per node
# Provision 2 nodes, giving us a total of 2 GPUs in the cluster
diff --git a/examples/timm_app.py b/examples/timm_app.py
index 72ac53509c3..d3e9e4dd147 100644
--- a/examples/timm_app.py
+++ b/examples/timm_app.py
@@ -49,6 +49,6 @@ def clone_project():
# Download from GCS.
'/tmp/fake_imagenet': 'gs://cloud-tpu-test-datasets/fake_imagenet',
})
- train.set_resources({sky.Resources(sky.AWS(), accelerators='V100')})
+ train.set_resources({sky.Resources(infra='aws', accelerators='V100')})
sky.launch(dag)
diff --git a/examples/torch_ddp_benchmark/torch_ddp_benchmark.yaml b/examples/torch_ddp_benchmark/torch_ddp_benchmark.yaml
index 36278961006..6c4c627aa4b 100644
--- a/examples/torch_ddp_benchmark/torch_ddp_benchmark.yaml
+++ b/examples/torch_ddp_benchmark/torch_ddp_benchmark.yaml
@@ -30,7 +30,7 @@ num_nodes: 2
resources:
accelerators: A100:8 # Make sure you use 8 GPU instances
use_spot: True
- cloud: gcp
+ infra: gcp
file_mounts:
./torch_ddp_benchmark.py: ./examples/torch_ddp_benchmark/torch_ddp_benchmark.py
diff --git a/examples/using_file_mounts.yaml b/examples/using_file_mounts.yaml
index fb7110ac705..5b6783efc9f 100644
--- a/examples/using_file_mounts.yaml
+++ b/examples/using_file_mounts.yaml
@@ -10,7 +10,7 @@
# commands may require flags to follow symlinks (e.g., ls -H; du -L).
resources:
- cloud: aws
+ infra: aws
cpus: 2+
workdir: .
diff --git a/examples/using_file_mounts_with_env_vars.yaml b/examples/using_file_mounts_with_env_vars.yaml
index 100aa3d15c9..11fcd99bb32 100644
--- a/examples/using_file_mounts_with_env_vars.yaml
+++ b/examples/using_file_mounts_with_env_vars.yaml
@@ -8,7 +8,7 @@ envs:
MODEL_SIZE: 13b
resources:
- cloud: gcp
+ infra: gcp
# You can use env vars in
# - the destination: source paths
diff --git a/llm/axolotl/axolotl-docker.yaml b/llm/axolotl/axolotl-docker.yaml
index b883ebdde46..25caf8ae408 100644
--- a/llm/axolotl/axolotl-docker.yaml
+++ b/llm/axolotl/axolotl-docker.yaml
@@ -5,7 +5,7 @@ name: axolotl
resources:
accelerators: L4:1
- cloud: gcp # optional
+ infra: gcp # optional
workdir: mistral
diff --git a/llm/axolotl/axolotl-spot.yaml b/llm/axolotl/axolotl-spot.yaml
index 0e04ba11992..e6c04f1bca7 100644
--- a/llm/axolotl/axolotl-spot.yaml
+++ b/llm/axolotl/axolotl-spot.yaml
@@ -10,7 +10,7 @@ name: axolotl
resources:
accelerators: A100:1
- cloud: gcp # optional
+ infra: gcp # optional
use_spot: True
image_id: docker:winglian/axolotl:main-py3.10-cu118-2.0.1
diff --git a/llm/batch_inference/compute_text_vectors.yaml b/llm/batch_inference/compute_text_vectors.yaml
index 259bd685294..df197fcba80 100644
--- a/llm/batch_inference/compute_text_vectors.yaml
+++ b/llm/batch_inference/compute_text_vectors.yaml
@@ -6,7 +6,7 @@ resources:
cpus: 4
accelerators:
L4: 1
- cloud: aws
+ infra: aws
any_of:
- use_spot: true
- use_spot: false
@@ -83,4 +83,4 @@ run: |
# Clean up vLLM service
pkill -f "python -m vllm.entrypoints.openai.api_server"
- echo "vLLM service has been stopped"
\ No newline at end of file
+ echo "vLLM service has been stopped"
diff --git a/llm/batch_inference/monitor_progress.yaml b/llm/batch_inference/monitor_progress.yaml
index 8f59b43325b..623d0df1dad 100644
--- a/llm/batch_inference/monitor_progress.yaml
+++ b/llm/batch_inference/monitor_progress.yaml
@@ -5,7 +5,7 @@ workdir: .
resources:
cpus: 2
memory: 8+
- cloud: aws
+ infra: aws
ports:
- 8000
@@ -26,4 +26,4 @@ setup: |
pip install pandas pyarrow plotly
run: |
- python scripts/monitor_progress.py --metrics-dir /output/metrics
\ No newline at end of file
+ python scripts/monitor_progress.py --metrics-dir /output/metrics
diff --git a/llm/gpt-2/gpt2-pipeline.yaml b/llm/gpt-2/gpt2-pipeline.yaml
index e5ea05f7948..5d9b9d34164 100644
--- a/llm/gpt-2/gpt2-pipeline.yaml
+++ b/llm/gpt-2/gpt2-pipeline.yaml
@@ -46,13 +46,13 @@ resources:
any_of:
# Avoid using docker image for lambda due to the docker is not supported on
# Lambda yet, but the base image works.
- - cloud: lambda
+ - infra: lambda
image_id: null
- - cloud: aws
- - cloud: gcp
- - cloud: azure
- - cloud: fluidstack
- - cloud: kubernetes
+ - infra: aws
+ - infra: gcp
+ - infra: azure
+ - infra: fluidstack
+ - infra: kubernetes
file_mounts:
~/.cache/huggingface:
diff --git a/llm/gpt-2/gpt2-train.yaml b/llm/gpt-2/gpt2-train.yaml
index 3a4e8c28d14..b3d48a67bd0 100644
--- a/llm/gpt-2/gpt2-train.yaml
+++ b/llm/gpt-2/gpt2-train.yaml
@@ -11,13 +11,13 @@ resources:
any_of:
# Avoid using docker image for lambda due to the docker is not supported on
# Lambda yet, but the base image works.
- - cloud: lambda
+ - infra: lambda
image_id: null
- - cloud: aws
- - cloud: gcp
- - cloud: azure
- - cloud: fluidstack
- - cloud: kubernetes
+ - infra: aws
+ - infra: gcp
+ - infra: azure
+ - infra: fluidstack
+ - infra: kubernetes
file_mounts:
~/.cache/huggingface:
diff --git a/llm/gpt-2/gpt2.yaml b/llm/gpt-2/gpt2.yaml
index 8e203772128..6ede787d178 100644
--- a/llm/gpt-2/gpt2.yaml
+++ b/llm/gpt-2/gpt2.yaml
@@ -7,13 +7,13 @@ resources:
any_of:
# Avoid using docker image for lambda due to the docker is not supported on
# Lambda yet, but the base image works.
- - cloud: lambda
+ - infra: lambda
image_id: null
- - cloud: aws
- - cloud: gcp
- - cloud: azure
- - cloud: fluidstack
- - cloud: kubernetes
+ - infra: aws
+ - infra: gcp
+ - infra: azure
+ - infra: fluidstack
+ - infra: kubernetes
setup: |
diff --git a/llm/rag/build_rag.yaml b/llm/rag/build_rag.yaml
index 9323f661903..ffd64911de0 100644
--- a/llm/rag/build_rag.yaml
+++ b/llm/rag/build_rag.yaml
@@ -4,7 +4,7 @@ workdir: .
resources:
memory: 32+ # Need more memory for merging embeddings
- cloud: aws
+ infra: aws
envs:
EMBEDDINGS_BUCKET_NAME: sky-rag-embeddings
diff --git a/sky/backends/backend_utils.py b/sky/backends/backend_utils.py
index 3a1333e30ab..c33ffa80a88 100644
--- a/sky/backends/backend_utils.py
+++ b/sky/backends/backend_utils.py
@@ -2570,7 +2570,10 @@ def _update_record_with_credentials_and_resources_str(
if handle is None:
return
record['resources_str'] = resources_utils.get_readable_resources_repr(
- handle)
+ handle, simplify=True)
+ record[
+ 'resources_str_full'] = resources_utils.get_readable_resources_repr(
+ handle, simplify=False)
credentials = ssh_credential_from_yaml(handle.cluster_yaml,
handle.docker_user,
handle.ssh_user)
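
As a note on the resulting record shape (the key names are from the diff above; the surrounding record layout is an assumption):

```python
# Hedged sketch: each cluster record now carries both resource strings.
record = {}  # as filled in by _update_record_with_credentials_and_resources_str
short_repr = record.get('resources_str')      # simplify=True, used in table rows
full_repr = record.get('resources_str_full')  # simplify=False, used for details/tooltips
```
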
diff --git a/sky/backends/cloud_vm_ray_backend.py b/sky/backends/cloud_vm_ray_backend.py
index 48eb2ac7e0d..33dd56029c9 100644
--- a/sky/backends/cloud_vm_ray_backend.py
+++ b/sky/backends/cloud_vm_ray_backend.py
@@ -8,7 +8,6 @@
import pathlib
import re
import shlex
-import shutil
import signal
import subprocess
import sys
@@ -2157,11 +2156,18 @@ def provision_with_retries(
# possible resources or the requested resources is too
# restrictive. If we reach here, our failover logic finally
# ends here.
- table = log_utils.create_table(['Resource', 'Reason'])
+ table = log_utils.create_table(['INFRA', 'RESOURCES', 'REASON'])
for (resource, exception) in resource_exceptions.items():
- table.add_row(
- [resources_utils.format_resource(resource), exception])
- table.max_table_width = shutil.get_terminal_size().columns
+ table.add_row([
+ resource.infra.formatted_str(),
+ resources_utils.format_resource(resource,
+ simplify=True),
+ exception
+ ])
+ # Set the max width of REASON column to 80 to avoid the table
+                    # being wrapped in an unreadable way.
+ # pylint: disable=protected-access
+ table._max_width = {'REASON': 80}
raise exceptions.ResourcesUnavailableError(
_RESOURCES_UNAVAILABLE_LOG + '\n' + table.get_string(),
failover_history=failover_history)
diff --git a/sky/check.py b/sky/check.py
index 65a3f92366e..6663e508748 100644
--- a/sky/check.py
+++ b/sky/check.py
@@ -34,7 +34,7 @@ def check_capabilities(
echo = (lambda *_args, **_kwargs: None
) if quiet else lambda *args, **kwargs: click.echo(
*args, **kwargs, color=True)
- echo('Checking credentials to enable clouds for SkyPilot.')
+ echo('Checking credentials to enable infra for SkyPilot.')
if capabilities is None:
capabilities = sky_cloud.ALL_CAPABILITIES
assert capabilities is not None
@@ -189,7 +189,7 @@ def get_all_clouds():
key=lambda item: item[0])
])
echo(f'\n{colorama.Fore.GREEN}{PARTY_POPPER_EMOJI} '
- f'Enabled clouds {PARTY_POPPER_EMOJI}'
+ f'Enabled infra {PARTY_POPPER_EMOJI}'
f'{colorama.Style.RESET_ALL}{enabled_clouds_str}')
return enabled_clouds
diff --git a/sky/cli.py b/sky/cli.py
index bd067d95461..88e39fd398b 100644
--- a/sky/cli.py
+++ b/sky/cli.py
@@ -78,6 +78,7 @@
from sky.utils import controller_utils
from sky.utils import dag_utils
from sky.utils import env_options
+from sky.utils import infra_utils
from sky.utils import log_utils
from sky.utils import registry
from sky.utils import resources_utils
@@ -345,24 +346,39 @@ def return_option_decorator(func):
'where the task will be invoked. '
'Overrides the "workdir" config in the YAML if both are supplied.'
)),
+ click.option(
+ '--infra',
+ required=False,
+ type=str,
+ help='Infrastructure to use. '
+ 'Format: cloud, cloud/region, cloud/region/zone, '
+ 'or kubernetes/context-name. '
+ 'Examples: aws, aws/us-east-1, aws/us-east-1/us-east-1a, '
+        # TODO(zhwu): we have to use `\*` so that the docs build does not
+        # complain about the `*`, but this will cause `--help`
+ # to show `\*` instead of `*`.
+ 'aws/\\*/us-east-1a, kubernetes/my-cluster-context.'),
click.option(
'--cloud',
required=False,
type=str,
help=('The cloud to use. If specified, overrides the "resources.cloud" '
- 'config. Passing "none" resets the config.')),
+ 'config. Passing "none" resets the config.'),
+ hidden=True),
click.option(
'--region',
required=False,
type=str,
help=('The region to use. If specified, overrides the '
- '"resources.region" config. Passing "none" resets the config.')),
+ '"resources.region" config. Passing "none" resets the config.'),
+ hidden=True),
click.option(
'--zone',
required=False,
type=str,
help=('The zone to use. If specified, overrides the '
- '"resources.zone" config. Passing "none" resets the config.')),
+ '"resources.zone" config. Passing "none" resets the config.'),
+ hidden=True),
click.option(
'--num-nodes',
required=False,
@@ -1063,6 +1079,33 @@ def cli():
pass
+def _handle_infra_cloud_region_zone_options(infra: Optional[str],
+ cloud: Optional[str],
+ region: Optional[str],
+ zone: Optional[str]):
+ """Handle the backward compatibility for --infra and --cloud/region/zone.
+
+ Returns:
+ cloud, region, zone
+ """
+ if cloud is not None or region is not None or zone is not None:
+ click.secho(
+ 'The --cloud, --region, and --zone options are deprecated. '
+ 'Use --infra instead.',
+ fg='yellow')
+ if infra is not None:
+ with ux_utils.print_exception_no_traceback():
+ raise ValueError('Cannot specify both --infra and '
+ '--cloud, --region, or --zone.')
+
+ if infra is not None:
+ infra_info = infra_utils.InfraInfo.from_str(infra)
+ cloud = infra_info.cloud
+ region = infra_info.region
+ zone = infra_info.zone
+ return cloud, region, zone
+
+
@cli.command(cls=_DocumentedCodeCommand)
@config_option(expose_value=True)
@click.argument('entrypoint',
@@ -1172,6 +1215,7 @@ def launch(
backend_name: Optional[str],
name: Optional[str],
workdir: Optional[str],
+ infra: Optional[str],
cloud: Optional[str],
region: Optional[str],
zone: Optional[str],
@@ -1219,6 +1263,9 @@ def launch(
if backend_name is None:
backend_name = backends.CloudVmRayBackend.NAME
+ cloud, region, zone = _handle_infra_cloud_region_zone_options(
+ infra, cloud, region, zone)
+
task_or_dag = _make_task_or_dag_from_entrypoint_with_overrides(
entrypoint=entrypoint,
name=name,
@@ -1336,6 +1383,7 @@ def exec(cluster: Optional[str],
entrypoint: Tuple[str, ...],
detach_run: bool,
name: Optional[str],
+ infra: Optional[str],
cloud: Optional[str],
region: Optional[str],
zone: Optional[str],
@@ -1427,6 +1475,9 @@ def exec(cluster: Optional[str],
controller_utils.check_cluster_name_not_controller(
cluster, operation_str='Executing task on it')
+ cloud, region, zone = _handle_infra_cloud_region_zone_options(
+ infra, cloud, region, zone)
+
task_or_dag = _make_task_or_dag_from_entrypoint_with_overrides(
entrypoint=entrypoint,
name=name,
@@ -3265,7 +3316,7 @@ def _down_or_stop(name: str):
@cli.command(cls=_DocumentedCodeCommand)
@config_option(expose_value=False)
[email protected]('clouds', required=False, type=str, nargs=-1)
[email protected]('infra_list', required=False, type=str, nargs=-1)
@click.option('--verbose',
'-v',
is_flag=True,
@@ -3273,7 +3324,7 @@ def _down_or_stop(name: str):
help='Show the activated account for each cloud.')
@usage_lib.entrypoint
# pylint: disable=redefined-outer-name
-def check(clouds: Tuple[str], verbose: bool):
+def check(infra_list: Tuple[str], verbose: bool):
"""Check which clouds are available to use.
This checks access credentials for all clouds supported by SkyPilot. If a
@@ -3295,8 +3346,8 @@ def check(clouds: Tuple[str], verbose: bool):
# Check only specific clouds - AWS and GCP.
sky check aws gcp
"""
- clouds_arg = clouds if len(clouds) > 0 else None
- request_id = sdk.check(clouds=clouds_arg, verbose=verbose)
+ infra_arg = infra_list if len(infra_list) > 0 else None
+ request_id = sdk.check(infra_list=infra_arg, verbose=verbose)
sdk.stream_and_get(request_id)
api_server_url = server_common.get_server_url()
click.echo()
@@ -3312,10 +3363,15 @@ def check(clouds: Tuple[str], verbose: bool):
is_flag=True,
default=False,
help='Show details of all GPU/TPU/accelerator offerings.')
[email protected]('--infra',
+ default=None,
+ type=str,
+ help='Infrastructure to query. Examples: "aws", "aws/us-east-1"')
@click.option('--cloud',
default=None,
type=str,
- help='Cloud provider to query.')
+ help='Cloud provider to query.',
+ hidden=True)
@click.option(
'--region',
required=False,
@@ -3323,6 +3379,7 @@ def check(clouds: Tuple[str], verbose: bool):
help=
('The region to use. If not specified, shows accelerators from all regions.'
),
+ hidden=True,
)
@click.option(
'--all-regions',
@@ -3335,6 +3392,7 @@ def check(clouds: Tuple[str], verbose: bool):
def show_gpus(
accelerator_str: Optional[str],
all: bool, # pylint: disable=redefined-builtin
+ infra: Optional[str],
cloud: Optional[str],
region: Optional[str],
all_regions: Optional[bool]):
@@ -3376,6 +3434,11 @@ def show_gpus(
* ``UTILIZATION`` (Kubernetes only): Total number of GPUs free / available
in the Kubernetes cluster.
"""
+ cloud, region, _ = _handle_infra_cloud_region_zone_options(infra,
+ cloud,
+ region,
+ zone=None)
+
# validation for the --region flag
if region is not None and cloud is None:
raise click.UsageError(
@@ -3991,6 +4054,7 @@ def jobs_launch(
name: Optional[str],
cluster: Optional[str],
workdir: Optional[str],
+ infra: Optional[str],
cloud: Optional[str],
region: Optional[str],
zone: Optional[str],
@@ -4032,6 +4096,8 @@ def jobs_launch(
'Use one of the flags as they are alias.')
name = cluster
env = _merge_env_vars(env_file, env)
+ cloud, region, zone = _handle_infra_cloud_region_zone_options(
+ infra, cloud, region, zone)
task_or_dag = _make_task_or_dag_from_entrypoint_with_overrides(
entrypoint,
name=name,
@@ -4509,6 +4575,7 @@ def serve_up(
service_yaml: Tuple[str, ...],
service_name: Optional[str],
workdir: Optional[str],
+ infra: Optional[str],
cloud: Optional[str],
region: Optional[str],
zone: Optional[str],
@@ -4555,6 +4622,8 @@ def serve_up(
sky serve up service.yaml
"""
+ cloud, region, zone = _handle_infra_cloud_region_zone_options(
+ infra, cloud, region, zone)
if service_name is None:
service_name = serve_lib.generate_service_name()
@@ -4621,13 +4690,13 @@ def serve_up(
@timeline.event
@usage_lib.entrypoint
def serve_update(service_name: str, service_yaml: Tuple[str, ...],
- workdir: Optional[str], cloud: Optional[str],
- region: Optional[str], zone: Optional[str],
- num_nodes: Optional[int], use_spot: Optional[bool],
- image_id: Optional[str], env_file: Optional[Dict[str, str]],
- env: List[Tuple[str, str]], gpus: Optional[str],
- instance_type: Optional[str], ports: Tuple[str],
- cpus: Optional[str], memory: Optional[str],
+ workdir: Optional[str], infra: Optional[str],
+ cloud: Optional[str], region: Optional[str],
+ zone: Optional[str], num_nodes: Optional[int],
+ use_spot: Optional[bool], image_id: Optional[str],
+ env_file: Optional[Dict[str, str]], env: List[Tuple[str, str]],
+ gpus: Optional[str], instance_type: Optional[str],
+ ports: Tuple[str], cpus: Optional[str], memory: Optional[str],
disk_size: Optional[int], disk_tier: Optional[str], mode: str,
yes: bool, async_call: bool):
"""Update a SkyServe service.
@@ -4659,6 +4728,8 @@ def serve_update(service_name: str, service_yaml: Tuple[str, ...],
sky serve update --mode blue_green sky-service-16aa new_service.yaml
"""
+ cloud, region, zone = _handle_infra_cloud_region_zone_options(
+ infra, cloud, region, zone)
task = _generate_task_with_service(
service_name=service_name,
service_yaml_args=service_yaml,
@@ -5173,6 +5244,7 @@ def benchmark_launch(
benchmark: str,
name: Optional[str],
workdir: Optional[str],
+ infra: Optional[str],
cloud: Optional[str],
region: Optional[str],
zone: Optional[str],
@@ -5206,7 +5278,6 @@ def benchmark_launch(
raise click.BadParameter(f'Benchmark {benchmark} already exists. '
'To delete the previous benchmark result, '
f'run `sky bench delete {benchmark}`.')
-
entrypoint = ' '.join(entrypoint)
if not entrypoint:
raise click.BadParameter('Please specify a task yaml to benchmark.')
@@ -5217,6 +5288,8 @@ def benchmark_launch(
'Sky Benchmark does not support command line tasks. '
'Please provide a YAML file.')
assert config is not None, (is_yaml, config)
+ cloud, region, zone = _handle_infra_cloud_region_zone_options(
+ infra, cloud, region, zone)
click.secho('Benchmarking a task from YAML: ', fg='cyan', nl=False)
click.secho(entrypoint, bold=True)
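
As a rough illustration of the backward-compatibility shim above, a hedged sketch of what `_handle_infra_cloud_region_zone_options` returns (the function name and return order come from the diff; the concrete values are assumptions):

```python
from sky import cli

# New style: a single --infra value is split back into the legacy triple.
cloud, region, zone = cli._handle_infra_cloud_region_zone_options(
    infra='aws/us-east-1/us-east-1a', cloud=None, region=None, zone=None)
# Assumed result: ('aws', 'us-east-1', 'us-east-1a').

# Old style: --cloud/--region/--zone still work but emit a deprecation
# warning; combining them with --infra raises ValueError.
cloud, region, zone = cli._handle_infra_cloud_region_zone_options(
    infra=None, cloud='gcp', region='us-central1', zone=None)
```
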
diff --git a/sky/client/sdk.py b/sky/client/sdk.py
index e5c38550aa8..a21cf70e811 100644
--- a/sky/client/sdk.py
+++ b/sky/client/sdk.py
@@ -42,6 +42,7 @@
from sky.utils import common_utils
from sky.utils import dag_utils
from sky.utils import env_options
+from sky.utils import infra_utils
from sky.utils import rich_utils
from sky.utils import status_lib
from sky.utils import subprocess_utils
@@ -87,12 +88,12 @@ def stream_response(request_id: Optional[str],
@usage_lib.entrypoint
@server_common.check_server_healthy_or_start
@annotations.client_api
-def check(clouds: Optional[Tuple[str]],
+def check(infra_list: Optional[Tuple[str, ...]],
verbose: bool) -> server_common.RequestId:
"""Checks the credentials to enable clouds.
Args:
- clouds: The clouds to check.
+        infra_list: The infra to check.
verbose: Whether to show verbose output.
Returns:
@@ -101,6 +102,22 @@ def check(clouds: Optional[Tuple[str]],
Request Returns:
None
"""
+ if infra_list is None:
+ clouds = None
+ else:
+ specified_clouds = []
+ for infra_str in infra_list:
+ infra = infra_utils.InfraInfo.from_str(infra_str)
+ if infra.cloud is None:
+ with ux_utils.print_exception_no_traceback():
+ raise ValueError(f'Invalid infra to check: {infra_str}')
+ if infra.region is not None or infra.zone is not None:
+ region_zone = infra_str.partition('/')[-1]
+ logger.warning(f'Infra {infra_str} is specified, but `check` '
+ f'only supports checking {infra.cloud}, '
+ f'ignoring {region_zone}')
+ specified_clouds.append(infra.cloud)
+ clouds = tuple(specified_clouds)
body = payloads.CheckBody(clouds=clouds, verbose=verbose)
response = requests.post(f'{server_common.get_server_url()}/check',
json=json.loads(body.model_dump_json()),
@@ -344,7 +361,7 @@ def launch(
import sky
task = sky.Task(run='echo hello SkyPilot')
task.set_resources(
- sky.Resources(cloud=sky.AWS(), accelerators='V100:4'))
+ sky.Resources(infra='aws', accelerators='V100:4'))
sky.launch(task, cluster_name='my-cluster')
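
A hedged usage sketch for the updated `sdk.check()` above; per the new validation, region/zone parts are accepted but ignored with a warning, and only the cloud component is forwarded:

```python
from sky.client import sdk

# Checks credentials for AWS and GCP; the 'us-central1' suffix only
# triggers the "ignoring ..." warning added above.
request_id = sdk.check(infra_list=('aws', 'gcp/us-central1'), verbose=False)
sdk.stream_and_get(request_id)
```
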
diff --git a/sky/dashboard/src/components/clusters.jsx b/sky/dashboard/src/components/clusters.jsx
index e3b88e2cf46..804cdcba57e 100755
--- a/sky/dashboard/src/components/clusters.jsx
+++ b/sky/dashboard/src/components/clusters.jsx
@@ -7,7 +7,10 @@
import React, { useState, useEffect } from 'react';
import { CircularProgress } from '@mui/material';
-import { CustomTooltip as Tooltip } from '@/components/utils';
+import {
+ CustomTooltip as Tooltip,
+ NonCapitalizedTooltip,
+} from '@/components/utils';
import Link from 'next/link';
import { Button } from '@/components/ui/button';
import { Card } from '@/components/ui/card';
@@ -228,15 +231,15 @@ export function ClusterTable({
</TableHead>
<TableHead
className="sortable whitespace-nowrap"
- onClick={() => requestSort('resources_str')}
+ onClick={() => requestSort('infra')}
>
- Resources{getSortDirection('resources_str')}
+ Infra{getSortDirection('infra')}
</TableHead>
<TableHead
className="sortable whitespace-nowrap"
- onClick={() => requestSort('region')}
+ onClick={() => requestSort('resources_str')}
>
- Region{getSortDirection('region')}
+ Resources{getSortDirection('resources_str')}
</TableHead>
<TableHead
className="sortable whitespace-nowrap"
@@ -277,8 +280,22 @@ export function ClusterTable({
</Link>
</TableCell>
<TableCell>{item.user}</TableCell>
- <TableCell>{item.resources_str}</TableCell>
- <TableCell>{item.region}</TableCell>
+ <TableCell>
+ <NonCapitalizedTooltip
+ content={item.full_infra || item.infra}
+ className="text-sm text-muted-foreground"
+ >
+ <span>{item.infra}</span>
+ </NonCapitalizedTooltip>
+ </TableCell>
+ <TableCell>
+ <NonCapitalizedTooltip
+ content={item.resources_str_full || item.resources_str}
+ className="text-sm text-muted-foreground"
+ >
+ <span>{item.resources_str}</span>
+ </NonCapitalizedTooltip>
+ </TableCell>
<TableCell>{relativeTime(item.time)}</TableCell>
<TableCell className="text-left">
<Status2Actions
diff --git a/sky/dashboard/src/components/jobs.jsx b/sky/dashboard/src/components/jobs.jsx
index 98b44e277dd..4ae9854f743 100755
--- a/sky/dashboard/src/components/jobs.jsx
+++ b/sky/dashboard/src/components/jobs.jsx
@@ -21,7 +21,11 @@ import { formatDuration } from '@/components/utils';
import { getManagedJobs } from '@/data/connectors/jobs';
import { getClusters } from '@/data/connectors/clusters';
import { Layout } from '@/components/elements/layout';
-import { CustomTooltip as Tooltip, relativeTime } from '@/components/utils';
+import {
+ CustomTooltip as Tooltip,
+ NonCapitalizedTooltip,
+ relativeTime,
+} from '@/components/utils';
import {
FileSearchIcon,
RotateCwIcon,
@@ -490,22 +494,23 @@ export function ManagedJobsTable({
</TableHead>
<TableHead
className="sortable whitespace-nowrap"
- onClick={() => requestSort('resources')}
+ onClick={() => requestSort('resources_str')}
>
- Resources{getSortDirection('resources')}
+ Requested{getSortDirection('resources_str')}
</TableHead>
<TableHead
className="sortable whitespace-nowrap"
- onClick={() => requestSort('cluster')}
+ onClick={() => requestSort('infra')}
>
- Cluster{getSortDirection('cluster')}
+ Infra{getSortDirection('infra')}
</TableHead>
<TableHead
className="sortable whitespace-nowrap"
- onClick={() => requestSort('region')}
+ onClick={() => requestSort('cluster')}
>
- Region{getSortDirection('region')}
+ Resources{getSortDirection('cluster')}
</TableHead>
+
<TableHead
className="sortable whitespace-nowrap"
onClick={() => requestSort('recoveries')}
@@ -520,7 +525,7 @@ export function ManagedJobsTable({
{loading && isInitialLoad ? (
<TableRow>
<TableCell
- colSpan={12}
+ colSpan={11}
className="text-center py-6 text-gray-500"
>
<div className="flex justify-center items-center">
@@ -556,9 +561,25 @@ export function ManagedJobsTable({
<TableCell>
<StatusBadge status={item.status} />
</TableCell>
- <TableCell>{item.resources}</TableCell>
- <TableCell>{item.cluster}</TableCell>
- <TableCell>{item.region}</TableCell>
+ <TableCell>{item.requested_resources}</TableCell>
+ <TableCell>
+ <NonCapitalizedTooltip
+ content={item.full_infra || item.infra}
+ className="text-sm text-muted-foreground"
+ >
+ <span>{item.infra}</span>
+ </NonCapitalizedTooltip>
+ </TableCell>
+ <TableCell>
+ <NonCapitalizedTooltip
+ content={
+ item.resources_str_full || item.resources_str
+ }
+ className="text-sm text-muted-foreground"
+ >
+ <span>{item.resources_str}</span>
+ </NonCapitalizedTooltip>
+ </TableCell>
<TableCell>{item.recoveries}</TableCell>
<TableCell>
{item.details ? (
@@ -583,7 +604,7 @@ export function ManagedJobsTable({
{expandedRowId === item.id && (
<ExpandedDetailsRow
text={item.details}
- colSpan={12}
+ colSpan={11}
innerRef={expandedRowRef}
/>
)}
@@ -592,7 +613,7 @@ export function ManagedJobsTable({
</>
) : (
<TableRow>
- <TableCell colSpan={12} className="text-center py-6">
+ <TableCell colSpan={11} className="text-center py-6">
<div className="flex flex-col items-center space-y-4">
{controllerLaunching && (
<div className="flex flex-col items-center space-y-2">
diff --git a/sky/dashboard/src/components/utils.jsx b/sky/dashboard/src/components/utils.jsx
index 8044abac9df..e4fe8a2b2e2 100644
--- a/sky/dashboard/src/components/utils.jsx
+++ b/sky/dashboard/src/components/utils.jsx
@@ -87,6 +87,24 @@ export const CustomTooltip = ({ children, ...props }) => {
);
};
+export const NonCapitalizedTooltip = ({ children, ...props }) => {
+ const content = props.content;
+ props.content = undefined;
+ return (
+ <Tooltip
+ {...DEFAULT_TOOLTIP_PROPS}
+ {...props}
+ content={
+        <span className="left-full w-max px-2 py-1 text-sm text-gray-100 bg-gray-500 rounded">
+ {content}
+ </span>
+ }
+ >
+ {children}
+ </Tooltip>
+ );
+};
+
// Format duration from seconds to a readable format
export function formatDuration(durationInSeconds) {
if (!durationInSeconds && durationInSeconds !== 0) return '-';
diff --git a/sky/dashboard/src/data/connectors/clusters.jsx b/sky/dashboard/src/data/connectors/clusters.jsx
index 71944182602..8fb5052be52 100644
--- a/sky/dashboard/src/data/connectors/clusters.jsx
+++ b/sky/dashboard/src/data/connectors/clusters.jsx
@@ -4,6 +4,38 @@ import { useState, useEffect, useCallback } from 'react';
import { showToast } from '@/data/connectors/toast';
import { ENDPOINT } from '@/data/connectors/constants';
+/**
+ * Truncates a string in the middle, preserving parts from beginning and end.
+ * @param {string} str - The string to truncate
+ * @param {number} maxLength - Maximum length of the truncated string
+ * @return {string} - Truncated string
+ */
+function truncateMiddle(str, maxLength = 15) {
+ if (!str || str.length <= maxLength) return str;
+
+ // Reserve 3 characters for '...'
+ if (maxLength <= 3) return '...';
+
+ // Calculate how many characters to keep from beginning and end
+ const halfLength = Math.floor((maxLength - 3) / 2);
+ const remainder = (maxLength - 3) % 2;
+
+ // Keep one more character at the beginning if maxLength - 3 is odd
+ const startLength = halfLength + remainder;
+ const endLength = halfLength;
+
+ // When endLength is 0, just show the start part and '...'
+ if (endLength === 0) {
+ return str.substring(0, startLength) + '...';
+ }
+
+ return (
+ str.substring(0, startLength) +
+ '...' +
+ str.substring(str.length - endLength)
+ );
+}
+
const clusterStatusMap = {
UP: 'RUNNING',
STOPPED: 'STOPPED',
@@ -31,16 +63,33 @@ export async function getClusters({ clusterNames = null } = {}) {
const data = await fetchedData.json();
const clusters = data.return_value ? JSON.parse(data.return_value) : [];
const clusterData = clusters.map((cluster) => {
+ let region_or_zone = '';
+ if (cluster.zone) {
+ region_or_zone = cluster.zone;
+ } else {
+ region_or_zone = cluster.region;
+ }
+ // Store the full value before truncation
+ const full_region_or_zone = region_or_zone;
+ // Truncate region_or_zone in the middle if it's too long
+ if (region_or_zone && region_or_zone.length > 25) {
+ region_or_zone = truncateMiddle(region_or_zone, 25);
+ }
return {
status: clusterStatusMap[cluster.status],
cluster: cluster.name,
user: cluster.user_name,
- infra: cluster.cloud,
- region: cluster.region,
+ infra: region_or_zone
+ ? cluster.cloud + ' (' + region_or_zone + ')'
+ : cluster.cloud,
+ full_infra: full_region_or_zone
+ ? `${cluster.cloud} (${full_region_or_zone})`
+ : cluster.cloud,
cpus: cluster.cpus,
mem: cluster.memory,
gpus: cluster.accelerators,
resources_str: cluster.resources_str,
+ resources_str_full: cluster.resources_str_full,
time: new Date(cluster.launched_at * 1000),
num_nodes: cluster.nodes,
jobs: [],
@@ -169,7 +218,7 @@ export function useClusterDetails({ cluster, job = null }) {
if (cluster) {
try {
setLoadingClusterJobData(true);
- const data = await getClusterJobs({ clusterName: cluster, job: job });
+ const data = await getClusterJobs({ clusterName: cluster });
setClusterJobData(data);
} catch (error) {
console.error('Error fetching cluster job data:', error);
@@ -177,7 +226,7 @@ export function useClusterDetails({ cluster, job = null }) {
setLoadingClusterJobData(false);
}
}
- }, [cluster, job]);
+ }, [cluster]);
const refreshData = useCallback(async () => {
await Promise.all([fetchClusterData(), fetchClusterJobData()]);
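
The `truncateMiddle` helper above keeps both ends of long region/zone names. For reference, a Python transcription of the same logic (the dashboard itself uses the JavaScript version above):

```python
def truncate_middle(s: str, max_length: int = 15) -> str:
    """Keep the start and end of `s`, replacing the middle with '...'."""
    if not s or len(s) <= max_length:
        return s
    if max_length <= 3:
        return '...'
    half = (max_length - 3) // 2
    remainder = (max_length - 3) % 2
    start, end = half + remainder, half
    if end == 0:
        return s[:start] + '...'
    return s[:start] + '...' + s[-end:]

print(truncate_middle('us-central1-docdb-cluster-context', 25))
# -> 'us-central1...ter-context' (11 leading + '...' + 11 trailing chars)
```
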
diff --git a/sky/dashboard/src/data/connectors/jobs.jsx b/sky/dashboard/src/data/connectors/jobs.jsx
index 55cf10cf0f9..3f3728496fb 100644
--- a/sky/dashboard/src/data/connectors/jobs.jsx
+++ b/sky/dashboard/src/data/connectors/jobs.jsx
@@ -82,6 +82,49 @@ export async function getManagedJobs({ allUsers = true } = {}) {
let endTime = job.end_at ? job.end_at : Date.now() / 1000;
const total_duration = endTime - job.submitted_at;
+ // Extract cloud name if not available (backward compatibility)
+ // TODO(zhwu): remove this after 0.12.0
+ let cloud = job.cloud;
+ let cluster_resources = job.cluster_resources;
+ if (!cloud) {
+ // Backward compatibility for old jobs controller without cloud info
+ // Similar to the logic in sky/jobs/utils.py
+ if (job.cluster_resources && job.cluster_resources !== '-') {
+ try {
+ cloud = job.cluster_resources.split('(')[0].split('x').pop().trim();
+ cluster_resources = job.cluster_resources
+ .replace(`${cloud}(`, '(')
+ .replace('x ', 'x');
+ } catch (error) {
+ // If parsing fails, set a default value
+ cloud = 'Unknown';
+ }
+ } else {
+ cloud = 'Unknown';
+ }
+ }
+
+ let region_or_zone = '';
+ if (job.zone) {
+ region_or_zone = job.zone;
+ } else {
+ region_or_zone = job.region;
+ }
+
+ const full_region_or_zone = region_or_zone;
+ if (region_or_zone && region_or_zone.length > 15) {
+ region_or_zone = region_or_zone.substring(0, 15) + '...';
+ }
+
+ let infra = cloud + ' (' + region_or_zone + ')';
+ if (region_or_zone === '-') {
+ infra = cloud;
+ }
+ let full_infra = cloud + ' (' + full_region_or_zone + ')';
+ if (full_region_or_zone === '-') {
+ full_infra = cloud;
+ }
+
return {
id: job.job_id,
task: job.task_name,
@@ -89,9 +132,11 @@ export async function getManagedJobs({ allUsers = true } = {}) {
job_duration: job.job_duration,
total_duration: total_duration,
status: job.status,
- resources: job.resources,
- cluster: job.cluster_resources,
- region: job.region,
+ requested_resources: job.resources,
+ resources_str: cluster_resources,
+ resources_str_full: job.cluster_resources_full || cluster_resources,
+ infra: infra,
+ full_infra: full_infra,
recoveries: job.recovery_count,
details: job.failure_reason,
user: job.user_name,
diff --git a/sky/dashboard/src/pages/clusters/[cluster].js b/sky/dashboard/src/pages/clusters/[cluster].js
index 60549cbe30c..4d15ca6f71e 100644
--- a/sky/dashboard/src/pages/clusters/[cluster].js
+++ b/sky/dashboard/src/pages/clusters/[cluster].js
@@ -147,6 +147,14 @@ function ActiveTab({ clusterData, clusterJobData, loading }) {
</div>
<div className="p-4">
<div className="grid grid-cols-2 gap-6">
+ <div>
+ <div className="text-gray-600 font-medium text-base">
+ Status
+ </div>
+ <div className="text-base mt-1">
+ <StatusBadge status={clusterData.status} />
+ </div>
+ </div>
<div>
<div className="text-gray-600 font-medium text-base">
Cluster
@@ -158,11 +166,9 @@ function ActiveTab({ clusterData, clusterJobData, loading }) {
<div className="text-base mt-1">{clusterData.user}</div>
</div>
<div>
- <div className="text-gray-600 font-medium text-base">
- Status
- </div>
+ <div className="text-gray-600 font-medium text-base">Infra</div>
<div className="text-base mt-1">
- <StatusBadge status={clusterData.status} />
+ {clusterData.full_infra || clusterData.infra || 'N/A'}
</div>
</div>
<div>
@@ -170,15 +176,19 @@ function ActiveTab({ clusterData, clusterJobData, loading }) {
Resources
</div>
<div className="text-base mt-1">
- {clusterData.resources_str || 'N/A'}
+ {clusterData.resources_str_full ||
+ clusterData.resources_str ||
+ 'N/A'}
</div>
</div>
<div>
<div className="text-gray-600 font-medium text-base">
- Region
+ Started
</div>
<div className="text-base mt-1">
- {clusterData.region || 'N/A'}
+ {clusterData.time
+ ? new Date(clusterData.time).toLocaleString()
+ : 'N/A'}
</div>
</div>
</div>
diff --git a/sky/dashboard/src/pages/clusters/[cluster]/[job].js b/sky/dashboard/src/pages/clusters/[cluster]/[job].js
index 54ea6f31b66..1d67b8c344a 100755
--- a/sky/dashboard/src/pages/clusters/[cluster]/[job].js
+++ b/sky/dashboard/src/pages/clusters/[cluster]/[job].js
@@ -228,7 +228,7 @@ export function JobDetailPage() {
{jobData.resources && (
<div>
<div className="text-gray-600 font-medium text-base">
- Resources
+ Requested Resources
</div>
<div className="text-base mt-1">
{jobData.resources || 'N/A'}
diff --git a/sky/dashboard/src/pages/jobs/[job].js b/sky/dashboard/src/pages/jobs/[job].js
index 4b9b0c44cb2..5a079444a4c 100755
--- a/sky/dashboard/src/pages/jobs/[job].js
+++ b/sky/dashboard/src/pages/jobs/[job].js
@@ -450,12 +450,10 @@ function JobDetailsContent({
return (
<div className="grid grid-cols-2 gap-6">
<div>
- <div className="text-gray-600 font-medium text-base">Job ID</div>
- <div className="text-base mt-1">{jobData.id}</div>
- </div>
- <div>
- <div className="text-gray-600 font-medium text-base">Job Name</div>
- <div className="text-base mt-1">{jobData.name}</div>
+ <div className="text-gray-600 font-medium text-base">Job ID (Name)</div>
+ <div className="text-base mt-1">
+ {jobData.id} {jobData.name ? `(${jobData.name})` : ''}
+ </div>
</div>
<div>
<div className="text-gray-600 font-medium text-base">Status</div>
@@ -468,12 +466,22 @@ function JobDetailsContent({
<div className="text-base mt-1">{jobData.user}</div>
</div>
<div>
- <div className="text-gray-600 font-medium text-base">Resources</div>
- <div className="text-base mt-1">{jobData.resources || 'N/A'}</div>
+ <div className="text-gray-600 font-medium text-base">
+ Requested Resources
+ </div>
+ <div className="text-base mt-1">
+ {jobData.requested_resources || 'N/A'}
+ </div>
+ </div>
+ <div>
+ <div className="text-gray-600 font-medium text-base">Infra</div>
+ <div className="text-base mt-1">{jobData.infra || '-'}</div>
</div>
<div>
- <div className="text-gray-600 font-medium text-base">Cluster</div>
- <div className="text-base mt-1">{jobData.cluster || '-'}</div>
+ <div className="text-gray-600 font-medium text-base">Resources</div>
+ <div className="text-base mt-1">
+ {jobData.resources_str_full || jobData.resources_str || '-'}
+ </div>
</div>
</div>
);
diff --git a/sky/execution.py b/sky/execution.py
index 9d42ac11689..b173cc8b407 100644
--- a/sky/execution.py
+++ b/sky/execution.py
@@ -465,7 +465,7 @@ def launch(
import sky
task = sky.Task(run='echo hello SkyPilot')
task.set_resources(
- sky.Resources(cloud=sky.AWS(), accelerators='V100:4'))
+ sky.Resources(infra='aws', accelerators='V100:4'))
sky.launch(task, cluster_name='my-cluster')
diff --git a/sky/jobs/server/core.py b/sky/jobs/server/core.py
index 09080c8a012..e64befc7488 100644
--- a/sky/jobs/server/core.py
+++ b/sky/jobs/server/core.py
@@ -395,7 +395,7 @@ def queue(refresh: bool,
if returncode != 0:
logger.error(job_table_payload + stderr)
raise RuntimeError('Failed to fetch managed jobs with returncode: '
- f'{returncode}')
+ f'{returncode}.\n{job_table_payload + stderr}')
jobs = managed_job_utils.load_managed_job_queue(job_table_payload)
diff --git a/sky/jobs/utils.py b/sky/jobs/utils.py
index 73d96185c9e..c0eee370881 100644
--- a/sky/jobs/utils.py
+++ b/sky/jobs/utils.py
@@ -33,8 +33,10 @@
from sky.skylet import log_lib
from sky.usage import usage_lib
from sky.utils import common_utils
+from sky.utils import infra_utils
from sky.utils import log_utils
from sky.utils import message_utils
+from sky.utils import resources_utils
from sky.utils import rich_utils
from sky.utils import subprocess_utils
from sky.utils import ux_utils
@@ -911,15 +913,23 @@ def dump_managed_job_queue() -> str:
cluster_name = generate_managed_job_cluster_name(
job['task_name'], job['job_id'])
handle = global_user_state.get_handle_from_cluster_name(cluster_name)
- if handle is not None:
- assert isinstance(handle, backends.CloudVmRayResourceHandle)
- job['cluster_resources'] = (
- f'{handle.launched_nodes}x {handle.launched_resources}')
+ if isinstance(handle, backends.CloudVmRayResourceHandle):
+ resources_str = resources_utils.get_readable_resources_repr(
+ handle, simplify=True)
+ resources_str_full = resources_utils.get_readable_resources_repr(
+ handle, simplify=False)
+ job['cluster_resources'] = resources_str
+ job['cluster_resources_full'] = resources_str_full
+ job['cloud'] = str(handle.launched_resources.cloud)
job['region'] = handle.launched_resources.region
+ job['zone'] = handle.launched_resources.zone
else:
# FIXME(zongheng): display the last cached values for these.
job['cluster_resources'] = '-'
+ job['cluster_resources_full'] = '-'
+ job['cloud'] = '-'
job['region'] = '-'
+ job['zone'] = '-'
return message_utils.encode_payload(jobs)
@@ -1026,7 +1036,7 @@ def get_hash(task):
'TASK',
'NAME',
*user_cols,
- 'RESOURCES',
+ 'REQUESTED',
'SUBMITTED',
'TOT. DURATION',
'JOB DURATION',
@@ -1035,7 +1045,7 @@ def get_hash(task):
]
if show_all:
# TODO: move SCHED. STATE to a separate flag (e.g. --debug)
- columns += ['STARTED', 'CLUSTER', 'REGION', 'SCHED. STATE', 'DETAILS']
+ columns += ['STARTED', 'INFRA', 'RESOURCES', 'SCHED. STATE', 'DETAILS']
if tasks_have_k8s_user:
columns.insert(0, 'USER')
job_table = log_utils.create_table(columns)
@@ -1174,11 +1184,32 @@ def get_user_column_values(task: Dict[str, Any]) -> List[str]:
# more than one task, only display on the aggregated row.
schedule_state = (task['schedule_state']
if len(job_tasks) == 1 else '-')
+ cloud = task.get('cloud')
+ if cloud is None:
+ # Backward compatibility for old jobs controller without
+ # cloud info returned, we parse it from the cluster
+ # resources
+ # TODO(zhwu): remove this after 0.12.0
+ cloud = task['cluster_resources'].split('(')[0].split(
+ 'x')[-1]
+ task['cluster_resources'] = task[
+ 'cluster_resources'].replace(f'{cloud}(',
+ '(').replace('x ', 'x')
+ region = task['region']
+ zone = task.get('zone')
+ if cloud == '-':
+ cloud = None
+ if region == '-':
+ region = None
+ if zone == '-':
+ zone = None
+
+ infra = infra_utils.InfraInfo(cloud, region, zone)
values.extend([
# STARTED
log_utils.readable_time_duration(task['start_at']),
+ infra.formatted_str(),
task['cluster_resources'],
- task['region'],
schedule_state,
generate_details(task['failure_reason']),
])
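
The backward-compatibility branch above re-derives the cloud from the legacy `'<N>x <CLOUD>(...)'` resource string. A small worked example of those string operations, assuming that old format (note that this Python path, unlike the dashboard's JavaScript counterpart, does not strip the leading space from the parsed cloud):

```python
# Assumed pre-0.12.0 format: '<launched_nodes>x <launched_resources>'.
cluster_resources = '1x AWS(m5.2xlarge)'
cloud = cluster_resources.split('(')[0].split('x')[-1]
# cloud == ' AWS'
cluster_resources = cluster_resources.replace(f'{cloud}(', '(').replace('x ', 'x')
# cluster_resources == '1x(m5.2xlarge)'
```
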
diff --git a/sky/optimizer.py b/sky/optimizer.py
index f4a9fa03553..453afb0b633 100644
--- a/sky/optimizer.py
+++ b/sky/optimizer.py
@@ -167,7 +167,7 @@ def _add_dummy_source_sink_nodes(dag: 'dag_lib.Dag'):
def make_dummy(name):
dummy = task_lib.Task(name)
- dummy.set_resources({DummyResources(DummyCloud(), None)})
+ dummy.set_resources({DummyResources(cloud=DummyCloud())})
dummy.set_time_estimator(lambda _: 0)
return dummy
@@ -321,10 +321,10 @@ def get_reservations_available_resources(
estimated_runtime = 1 * 3600
else:
# We assume the time estimator takes in a partial resource
- # Resources('V100')
+ # Resources(accelerators='V100')
# and treats their launchable versions
- # Resources(AWS, 'p3.2xlarge'),
- # Resources(GCP, '...', 'V100'),
+ # Resources(infra='aws', instance_type='p3.2xlarge'),
+ # Resources(infra='gcp', accelerators='V100'),
# ...
# as having the same run time.
# FIXME(zongheng): take 'num_nodes' as an arg/into
@@ -772,6 +772,15 @@ def print_optimized_plan(
f'{colorama.Style.BRIGHT}Estimated total cost: '
f'{colorama.Style.RESET_ALL}${total_cost:.1f}\n')
+ def _instance_type_str(resources: 'resources_lib.Resources') -> str:
+ instance_type = resources.instance_type
+ assert instance_type is not None, 'Instance type must be specified'
+ if isinstance(resources.cloud, clouds.Kubernetes):
+ instance_type = '-'
+ if resources.use_spot:
+ instance_type = ''
+ return instance_type
+
def _get_resources_element_list(
resources: 'resources_lib.Resources') -> List[str]:
accelerators = resources.get_accelerators_str()
@@ -794,22 +803,20 @@ def format_number(x: Optional[float]) -> str:
vcpus = format_number(vcpus_)
mem = format_number(mem_)
- if resources.zone is None:
- region_or_zone = resources.region
- else:
- region_or_zone = resources.zone
+ # Format infra as CLOUD (REGION/ZONE)
+ infra = resources.infra.formatted_str()
+
return [
- str(cloud),
- resources.instance_type + spot,
+ infra,
+ _instance_type_str(resources) + spot,
vcpus,
mem,
str(accelerators),
- str(region_or_zone),
]
Row = collections.namedtuple('Row', [
- 'cloud', 'instance', 'vcpus', 'mem', 'accelerators',
- 'region_or_zone', 'cost_str', 'chosen_str'
+ 'infra', 'instance', 'vcpus', 'mem', 'accelerators', 'cost_str',
+ 'chosen_str'
])
def _get_resources_named_tuple(resources: 'resources_lib.Resources',
@@ -833,18 +840,15 @@ def format_number(x: Optional[float]) -> str:
vcpus = format_number(vcpus_)
mem = format_number(mem_)
- if resources.zone is None:
- region_or_zone = resources.region
- else:
- region_or_zone = resources.zone
+ infra = resources.infra.formatted_str()
chosen_str = ''
if chosen:
chosen_str = (colorama.Fore.GREEN + ' ' + '\u2714' +
colorama.Style.RESET_ALL)
- row = Row(cloud, resources.instance_type + spot, vcpus, mem,
- str(accelerators), str(region_or_zone), cost_str,
- chosen_str)
+ row = Row(infra,
+ _instance_type_str(resources) + spot, vcpus, mem,
+ str(accelerators), cost_str, chosen_str)
return row
@@ -862,10 +866,7 @@ def _get_resource_group_hash(resources: 'resources_lib.Resources'):
return json.dumps(resource_key_dict, sort_keys=True)
# Print the list of resouces that the optimizer considered.
- resource_fields = [
- 'CLOUD', 'INSTANCE', 'vCPUs', 'Mem(GB)', 'ACCELERATORS',
- 'REGION/ZONE'
- ]
+ resource_fields = ['INFRA', 'INSTANCE', 'vCPUs', 'Mem(GB)', 'GPUS']
if len(ordered_best_plan) > 1:
best_plan_rows = []
for t, r in ordered_best_plan.items():
@@ -993,13 +994,19 @@ def _print_candidates(node_to_candidate_map: _TaskToPerCloudCandidates):
if len(candidate_list) > 1:
is_multi_instances = True
instance_list = [
- res.instance_type for res in candidate_list
+ res.instance_type
+ for res in candidate_list
+ if res.instance_type is not None
]
+ candidate_str = resources_utils.format_resource(
+ candidate_list[0], simplify=True)
+
logger.info(
- f'Multiple {cloud} instances satisfy '
- f'{acc_name}:{int(acc_count)}. '
- f'The cheapest {candidate_list[0]!r} is considered '
- f'among:\n{instance_list}.')
+ f'{colorama.Style.DIM}🔍 Multiple {cloud} instances '
+ f'satisfy {acc_name}:{int(acc_count)}. '
+ f'The cheapest {candidate_str} is considered '
+ f'among: {", ".join(instance_list)}.'
+ f'{colorama.Style.RESET_ALL}')
if is_multi_instances:
logger.info(
f'To list more details, run: sky show-gpus {acc_name}\n')
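
The optimizer table above collapses the old CLOUD and REGION/ZONE columns into a single INFRA column via `resources.infra.formatted_str()`. A hedged sketch of how that cell is expected to render (the `CLOUD (REGION/ZONE)` shape comes from the comment in the diff; the concrete output is an assumption):

```python
import sky

r = sky.Resources(infra='aws/us-east-1', instance_type='p3.2xlarge')
# Assumed: the cloud name followed by its region/zone in parentheses,
# e.g. something like 'AWS (us-east-1)'.
print(r.infra.formatted_str())
```
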
diff --git a/sky/resources.py b/sky/resources.py
index f988c000548..4729482fb4d 100644
--- a/sky/resources.py
+++ b/sky/resources.py
@@ -6,6 +6,7 @@
import colorama
+import sky
from sky import check as sky_check
from sky import clouds
from sky import exceptions
@@ -20,6 +21,7 @@
from sky.utils import annotations
from sky.utils import common_utils
from sky.utils import config_utils
+from sky.utils import infra_utils
from sky.utils import log_utils
from sky.utils import registry
from sky.utils import resources_utils
@@ -106,6 +108,7 @@ def __init__(
memory: Union[None, int, float, str] = None,
accelerators: Union[None, str, Dict[str, Union[int, float]]] = None,
accelerator_args: Optional[Dict[str, str]] = None,
+ infra: Optional[str] = None,
use_spot: Optional[bool] = None,
job_recovery: Optional[Union[Dict[str, Optional[Union[str, int]]],
str]] = None,
@@ -134,9 +137,9 @@ def __init__(
.. code-block:: python
# Fully specified cloud and instance type (is_launchable() is True).
- sky.Resources(clouds.AWS(), 'p3.2xlarge')
- sky.Resources(clouds.GCP(), 'n1-standard-16')
- sky.Resources(clouds.GCP(), 'n1-standard-8', 'V100')
+ sky.Resources(infra='aws', instance_type='p3.2xlarge')
+ sky.Resources(infra='k8s/my-cluster-ctx', accelerators='V100')
+ sky.Resources(infra='gcp/us-central1', accelerators='V100')
# Specifying required resources; the system decides the
# cloud/instance type. The below are equivalent:
@@ -145,8 +148,9 @@ def __init__(
sky.Resources(accelerators={'V100': 1})
sky.Resources(cpus='2+', memory='16+', accelerators='V100')
+
Args:
- cloud: the cloud to use.
+ cloud: the cloud to use. Deprecated. Use `infra` instead.
instance_type: the instance type to use.
cpus: the number of CPUs required for the task.
If a str, must be a string of the form ``'2'`` or ``'2+'``, where
@@ -160,6 +164,11 @@ def __init__(
dict of the form ``{'V100': 2}`` or ``{'tpu-v2-8': 1}``.
accelerator_args: accelerator-specific arguments. For example,
``{'tpu_vm': True, 'runtime_version': 'tpu-vm-base'}`` for TPUs.
+ infra: a string specifying the infrastructure to use, in the format
+            of "cloud", "cloud/region", or "cloud/region/zone". For example,
+            `aws/us-east-1` or `k8s/my-cluster-ctx`. This is an alternative to
+            specifying cloud, region, and zone separately. Specifying `infra`
+            together with any of cloud, region, or zone raises a ValueError.
use_spot: whether to use spot instances. If None, defaults to
False.
job_recovery: the job recovery strategy to use for the managed
@@ -172,8 +181,8 @@ def __init__(
- max_restarts_on_errors: the max number of restarts on user code
errors.
- region: the region to use.
- zone: the zone to use.
+ region: the region to use. Deprecated. Use `infra` instead.
+ zone: the zone to use. Deprecated. Use `infra` instead.
image_id: the image ID to use. If a str, must be a string
of the image id from the cloud, such as AWS:
``'ami-1234567890abcdef0'``, GCP:
@@ -218,6 +227,25 @@ def __init__(
exceptions.NoCloudAccessError: if no public cloud is enabled.
"""
self._version = self._VERSION
+
+ if infra is not None and (cloud is not None or region is not None or
+ zone is not None):
+ with ux_utils.print_exception_no_traceback():
+ raise ValueError('Cannot specify both `infra` and `cloud`, '
+ '`region`, or `zone` parameters. '
+ f'Got: infra={infra}, cloud={cloud}, '
+ f'region={region}, zone={zone}')
+
+ # Infra is user facing, and cloud, region, zone in parameters are for
+ # backward compatibility. Internally, we keep using cloud, region, zone
+ # for simplicity.
+ if infra is not None:
+ infra_info = infra_utils.InfraInfo.from_str(infra)
+ # Infra takes precedence over individually specified parameters
+ cloud = sky.CLOUD_REGISTRY.from_str(infra_info.cloud)
+ region = infra_info.region
+ zone = infra_info.zone
+
self._cloud = cloud
self._region: Optional[str] = region
self._zone: Optional[str] = zone
@@ -431,6 +459,11 @@ def repr_with_region_zone(self) -> str:
repr_str += f'{region_str}{zone_str}'
return repr_str
+ @property
+ def infra(self) -> infra_utils.InfraInfo:
+ cloud = str(self.cloud) if self.cloud is not None else None
+ return infra_utils.InfraInfo(cloud, self.region, self.zone)
+
@property
def cloud(self) -> Optional[clouds.Cloud]:
return self._cloud
@@ -486,9 +519,9 @@ def memory(self) -> Optional[str]:
def accelerators(self) -> Optional[Dict[str, Union[int, float]]]:
"""Returns the accelerators field directly or by inferring.
- For example, Resources(AWS, 'p3.2xlarge') has its accelerators field
- set to None, but this function will infer {'V100': 1} from the instance
- type.
+ For example, Resources(infra='aws', instance_type='p3.2xlarge') has its
+ accelerators field set to None, but this function will infer {'V100': 1}
+ from the instance type.
"""
if self._accelerators is not None:
return self._accelerators
@@ -1450,6 +1483,7 @@ def copy(self, **override) -> 'Resources':
ports=override.pop('ports', self.ports),
labels=override.pop('labels', self.labels),
autostop=override.pop('autostop', current_autostop_config),
+ infra=override.pop('infra', None),
_docker_login_config=override.pop('_docker_login_config',
self._docker_login_config),
_docker_username_for_runpod=override.pop(
@@ -1621,9 +1655,21 @@ def _override_resources(
@classmethod
def _from_yaml_config_single(cls, config: Dict[str, str]) -> 'Resources':
- resources_fields = {}
+ resources_fields: Dict[str, Any] = {}
+
+ # Extract infra field if present
+ infra = config.pop('infra', None)
+ resources_fields['infra'] = infra
+
+ # Keep backward compatibility with cloud, region, zone
+ # Note: if both `infra` and any of `cloud`, `region`, `zone` are
+ # specified, it will raise an error during the Resources.__init__
+ # validation.
resources_fields['cloud'] = registry.CLOUD_REGISTRY.from_str(
config.pop('cloud', None))
+ resources_fields['region'] = config.pop('region', None)
+ resources_fields['zone'] = config.pop('zone', None)
+
resources_fields['instance_type'] = config.pop('instance_type', None)
resources_fields['cpus'] = config.pop('cpus', None)
resources_fields['memory'] = config.pop('memory', None)
@@ -1641,8 +1687,6 @@ def _from_yaml_config_single(cls, config: Dict[str, str]) -> 'Resources':
# exclusive by the schema validation.
resources_fields['job_recovery'] = config.pop('job_recovery', None)
resources_fields['disk_size'] = config.pop('disk_size', None)
- resources_fields['region'] = config.pop('region', None)
- resources_fields['zone'] = config.pop('zone', None)
resources_fields['image_id'] = config.pop('image_id', None)
resources_fields['disk_tier'] = config.pop('disk_tier', None)
resources_fields['ports'] = config.pop('ports', None)
@@ -1679,7 +1723,10 @@ def add_if_not_none(key, value):
if value is not None and value != 'None':
config[key] = value
- add_if_not_none('cloud', str(self.cloud))
+ # Construct infra field if cloud is set
+ infra = self.infra.to_str()
+ add_if_not_none('infra', infra)
+
add_if_not_none('instance_type', self.instance_type)
add_if_not_none('cpus', self._cpus)
add_if_not_none('memory', self.memory)
@@ -1690,8 +1737,6 @@ def add_if_not_none(key, value):
add_if_not_none('use_spot', self.use_spot)
add_if_not_none('job_recovery', self.job_recovery)
add_if_not_none('disk_size', self.disk_size)
- add_if_not_none('region', self.region)
- add_if_not_none('zone', self.zone)
add_if_not_none('image_id', self.image_id)
if self.disk_tier is not None:
config['disk_tier'] = self.disk_tier.value
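
Putting the `Resources` changes above together, a hedged end-to-end sketch of the new `infra` argument, the `infra` property, and the YAML round-trip (the printed values and the `to_yaml_config()` method name are assumptions not shown in this hunk):

```python
import sky

# `infra` replaces separate cloud/region/zone arguments; passing both at
# once raises ValueError per the validation added above.
r = sky.Resources(infra='gcp/us-central1', accelerators='V100')

info = r.infra                     # an infra_utils.InfraInfo
print(info.cloud, info.region)     # assumed: 'GCP' 'us-central1'

# Serialization now emits a single 'infra' key instead of cloud/region/zone,
# while _from_yaml_config_single() still accepts the legacy keys.
print(r.to_yaml_config().get('infra'))   # assumed: 'gcp/us-central1'
```
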
diff --git a/sky/serve/serve_utils.py b/sky/serve/serve_utils.py
index a1b2b4a2b37..d1d510ff0d7 100644
--- a/sky/serve/serve_utils.py
+++ b/sky/serve/serve_utils.py
@@ -1027,11 +1027,9 @@ def _format_replica_table(replica_records: List[Dict[str, Any]],
return 'No existing replicas.'
replica_columns = [
- 'SERVICE_NAME', 'ID', 'VERSION', 'ENDPOINT', 'LAUNCHED', 'RESOURCES',
- 'STATUS', 'REGION'
+ 'SERVICE_NAME', 'ID', 'VERSION', 'ENDPOINT', 'LAUNCHED', 'INFRA',
+ 'RESOURCES', 'STATUS'
]
- if show_all:
- replica_columns.append('ZONE')
replica_table = log_utils.create_table(replica_columns)
truncate_hint = ''
@@ -1047,21 +1045,17 @@ def _format_replica_table(replica_records: List[Dict[str, Any]],
version = (record['version'] if 'version' in record else '-')
replica_endpoint = endpoint if endpoint else '-'
launched_at = log_utils.readable_time_duration(record['launched_at'])
+ infra = '-'
resources_str = '-'
replica_status = record['status']
status_str = replica_status.colored_str()
- region = '-'
- zone = '-'
replica_handle: Optional['backends.CloudVmRayResourceHandle'] = record[
'handle']
if replica_handle is not None:
+ infra = replica_handle.launched_resources.infra.formatted_str()
resources_str = resources_utils.get_readable_resources_repr(
replica_handle, simplify=not show_all)
- if replica_handle.launched_resources.region is not None:
- region = replica_handle.launched_resources.region
- if replica_handle.launched_resources.zone is not None:
- zone = replica_handle.launched_resources.zone
replica_values = [
service_name,
@@ -1069,12 +1063,10 @@ def _format_replica_table(replica_records: List[Dict[str, Any]],
version,
replica_endpoint,
launched_at,
+ infra,
resources_str,
status_str,
- region,
]
- if show_all:
- replica_values.append(zone)
replica_table.add_row(replica_values)
return f'{replica_table}{truncate_hint}'
diff --git a/sky/utils/cli_utils/status_utils.py b/sky/utils/cli_utils/status_utils.py
index 6e267770b87..246d3e19785 100644
--- a/sky/utils/cli_utils/status_utils.py
+++ b/sky/utils/cli_utils/status_utils.py
@@ -33,17 +33,15 @@ class StatusColumn:
def __init__(self,
name: str,
calc_func: Callable,
- trunc_length: int = 0,
+ truncate: bool = True,
show_by_default: bool = True):
self.name = name
self.calc_func = calc_func
- self.trunc_length = trunc_length
+ self.truncate: bool = truncate
self.show_by_default = show_by_default
def calc(self, record):
- val = self.calc_func(record)
- if self.trunc_length != 0:
- val = common_utils.truncate_long_string(str(val), self.trunc_length)
+ val = self.calc_func(record, self.truncate)
return val
@@ -68,19 +66,20 @@ def show_status_table(cluster_records: List[_ClusterRecord],
StatusColumn('USER_ID', _get_user_hash, show_by_default=False))
status_columns += [
- StatusColumn('LAUNCHED', _get_launched),
- StatusColumn('RESOURCES',
- _get_resources,
- trunc_length=70 if not show_all else 0),
- StatusColumn('REGION', _get_region, show_by_default=False),
- StatusColumn('ZONE', _get_zone, show_by_default=False),
+ StatusColumn('INFRA', _get_infra, truncate=not show_all),
+ StatusColumn('RESOURCES', _get_resources, truncate=not show_all),
StatusColumn('STATUS', _get_status_colored),
StatusColumn('AUTOSTOP', _get_autostop),
- StatusColumn('HEAD_IP', _get_head_ip, show_by_default=False),
- StatusColumn('COMMAND',
- _get_command,
- trunc_length=COMMAND_TRUNC_LENGTH if not show_all else 0),
+ StatusColumn('LAUNCHED', _get_launched),
]
+ if show_all:
+ status_columns += [
+ StatusColumn('HEAD_IP', _get_head_ip, show_by_default=False),
+ StatusColumn('COMMAND',
+ _get_command,
+ truncate=not show_all,
+ show_by_default=False),
+ ]
columns = []
for status_column in status_columns:
@@ -160,10 +159,10 @@ def show_cost_report_table(cluster_records: List[_ClusterCostReportRecord],
status_columns = [
StatusColumn('NAME', _get_name),
StatusColumn('LAUNCHED', _get_launched),
- StatusColumn('DURATION', _get_duration, trunc_length=20),
+ StatusColumn('DURATION', _get_duration, truncate=False),
StatusColumn('RESOURCES',
_get_resources_for_cost_report,
- trunc_length=70 if not show_all else 0),
+ truncate=False),
StatusColumn('STATUS',
_get_status_for_cost_report,
show_by_default=True),
@@ -221,47 +220,68 @@ def show_cost_report_table(cluster_records: List[_ClusterCostReportRecord],
# Some of these lambdas are invoked on both _ClusterRecord and
# _ClusterCostReportRecord, which is okay as we guarantee the queried fields
# exist in those cases.
-_get_name = (lambda cluster_record: cluster_record['name'])
-_get_user_hash = (lambda cluster_record: cluster_record['user_hash'])
-_get_user_name = (lambda cluster_record: cluster_record.get('user_name', '-'))
-_get_launched = (lambda cluster_record: log_utils.readable_time_duration(
+_get_name = (lambda cluster_record, _: cluster_record['name'])
+_get_user_hash = (lambda cluster_record, _: cluster_record['user_hash'])
+_get_user_name = (
+ lambda cluster_record, _: cluster_record.get('user_name', '-'))
+_get_launched = (lambda cluster_record, _: log_utils.readable_time_duration(
cluster_record['launched_at']))
-_get_region = (
- lambda clusters_status: clusters_status['handle'].launched_resources.region)
-_get_command = (lambda cluster_record: cluster_record['last_use'])
-_get_duration = (lambda cluster_record: log_utils.readable_time_duration(
+_get_duration = (lambda cluster_record, _: log_utils.readable_time_duration(
0, cluster_record['duration'], absolute=True))
-def _get_status(cluster_record: _ClusterRecord) -> status_lib.ClusterStatus:
- return cluster_record['status']
-
+def _get_command(cluster_record: _ClusterRecord, truncate: bool = True) -> str:
+ command = cluster_record.get('last_use', '-')
+ if truncate:
+ return common_utils.truncate_long_string(command, COMMAND_TRUNC_LENGTH)
+ return command
-def _get_status_colored(cluster_record: _ClusterRecord) -> str:
- return _get_status(cluster_record).colored_str()
+def _get_status(cluster_record: _ClusterRecord,
+ truncate: bool = True) -> status_lib.ClusterStatus:
+ del truncate
+ return cluster_record['status']
-def _get_resources(cluster_record: _ClusterRecord) -> str:
- if 'resources_str' in cluster_record:
- return cluster_record['resources_str']
- handle = cluster_record['handle']
- if isinstance(handle, backends.LocalDockerResourceHandle):
- resources_str = 'docker'
- elif isinstance(handle, backends.CloudVmRayResourceHandle):
- resources_str = resources_utils.get_readable_resources_repr(handle)
- else:
- raise ValueError(f'Unknown handle type {type(handle)} encountered.')
- return resources_str
+def _get_status_colored(cluster_record: _ClusterRecord,
+ truncate: bool = True) -> str:
+ del truncate
+ return _get_status(cluster_record).colored_str()
-def _get_zone(cluster_record: _ClusterRecord) -> str:
- zone_str = cluster_record['handle'].launched_resources.zone
- if zone_str is None:
- zone_str = '-'
- return zone_str
+def _get_resources(cluster_record: _ClusterRecord,
+ truncate: bool = True) -> str:
+ """Get the resources information for a cluster.
-def _get_autostop(cluster_record: _ClusterRecord) -> str:
+ Returns:
+ A string in one of the following formats:
+ - For cloud VMs: "Nx instance_type" (e.g., "1x m6i.2xlarge")
+ - For K8S/SSH: "Nx (...)"
+ - "-" if no resource information is available
+ """
+ handle = cluster_record['handle']
+ if isinstance(handle, backends.CloudVmRayResourceHandle):
+ launched_resources = handle.launched_resources
+ if launched_resources is None:
+ return '-'
+
+ # For cloud VMs, show instance type directly
+ # For K8S/SSH, show (...) as the resource type
+ resources_str = cluster_record.get('resources_str', None)
+ if not truncate:
+ resources_str_full = cluster_record.get('resources_str_full', None)
+ if resources_str_full is not None:
+ resources_str = resources_str_full
+ if resources_str is None:
+ resources_str = resources_utils.get_readable_resources_repr(
+ handle, simplify=truncate)
+
+ return resources_str
+ return '-'
+
+
+def _get_autostop(cluster_record: _ClusterRecord, truncate: bool = True) -> str:
+ del truncate
autostop_str = ''
separation = ''
if cluster_record['autostop'] >= 0:
@@ -276,7 +296,8 @@ def _get_autostop(cluster_record: _ClusterRecord) -> str:
return autostop_str
-def _get_head_ip(cluster_record: _ClusterRecord) -> str:
+def _get_head_ip(cluster_record: _ClusterRecord, truncate: bool = True) -> str:
+ del truncate # Unused
handle = cluster_record['handle']
if not isinstance(handle, backends.CloudVmRayResourceHandle):
return '-'
@@ -291,6 +312,25 @@ def _is_pending_autostop(cluster_record: _ClusterRecord) -> bool:
cluster_record) != status_lib.ClusterStatus.STOPPED
+def _get_infra(cluster_record: _ClusterRecord, truncate: bool = True) -> str:
+ """Get the infrastructure information for a cluster.
+
+ Returns:
+ A string in one of the following formats:
+        - Cloud with region or zone (e.g., "AWS (us-east-1)")
+        - Kubernetes with context (e.g., "Kubernetes (my-ctx)")
+        - SSH with hostname (e.g., "SSH (my-tobi-box)")
+ - "-" if no infrastructure information is available
+ """
+ handle = cluster_record['handle']
+ if isinstance(handle, backends.CloudVmRayResourceHandle):
+ if handle.launched_resources is None:
+ # If launched_resources is None, try to get infra from the record
+ return cluster_record.get('infra', '-')
+ return handle.launched_resources.infra.formatted_str(truncate)
+ return '-'
+
+
# ---- 'sky cost-report' helper functions below ----
@@ -347,14 +387,13 @@ def show_kubernetes_cluster_status_table(
show_all: bool) -> None:
"""Compute cluster table values and display for Kubernetes clusters."""
status_columns = [
- StatusColumn('USER', lambda c: c.user),
- StatusColumn('NAME', lambda c: c.cluster_name),
- StatusColumn('LAUNCHED',
- lambda c: log_utils.readable_time_duration(c.launched_at)),
- StatusColumn('RESOURCES',
- lambda c: c.resources_str,
- trunc_length=70 if not show_all else 0),
- StatusColumn('STATUS', lambda c: c.status.colored_str()),
+ StatusColumn('USER', lambda c, _: c.user),
+ StatusColumn('NAME', lambda c, _: c.cluster_name),
+ StatusColumn('RESOURCES', lambda c, _: c.resources_str, truncate=False),
+ StatusColumn('STATUS', lambda c, _: c.status.colored_str()),
+ StatusColumn(
+ 'LAUNCHED',
+ lambda c, _: log_utils.readable_time_duration(c.launched_at)),
# TODO(romilb): We should consider adding POD_NAME field here when --all
# is passed to help users fetch pod name programmatically.
]
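A rough sketch (the names `DemoColumn` and `demo_command` are invented for illustration) of the new contract above: truncation is a boolean handed down to each `calc_func`, instead of a fixed `trunc_length` applied by `StatusColumn.calc`.

from typing import Any, Callable, Dict


class DemoColumn:
    """Mimics the reworked StatusColumn: it only forwards the truncate flag."""

    def __init__(self, name: str, calc_func: Callable, truncate: bool = True):
        self.name = name
        self.calc_func = calc_func
        self.truncate = truncate

    def calc(self, record: Dict[str, Any]) -> Any:
        # Truncation (if any) is delegated to the calc_func itself.
        return self.calc_func(record, self.truncate)


def demo_command(record: Dict[str, Any], truncate: bool) -> str:
    command = record.get('last_use', '-')
    if truncate and len(command) > 20:
        return command[:20] + '...'
    return command


col = DemoColumn('COMMAND', demo_command, truncate=True)
print(col.calc({'last_use': 'sky launch -c mycluster --infra aws task.yaml'}))
# -> 'sky launch -c myclus...'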
diff --git a/sky/utils/common_utils.py b/sky/utils/common_utils.py
index 99f205e8e7a..00d9db4c756 100644
--- a/sky/utils/common_utils.py
+++ b/sky/utils/common_utils.py
@@ -723,10 +723,43 @@ def new_func(*args, **kwargs):
return new_func
-def truncate_long_string(s: str, max_length: int = 35) -> str:
- """Truncate a string to a maximum length, preserving whole words."""
+def truncate_long_string(s: str,
+ max_length: int = 35,
+ truncate_middle: bool = False) -> str:
+ """Truncate a string to a maximum length.
+
+ Args:
+ s: String to truncate.
+ max_length: Maximum length of the truncated string.
+ truncate_middle: Whether to truncate in the middle of the string.
+ If True, the middle part of the string is replaced with '...'.
+ If False, truncation happens at the end preserving whole words.
+
+ Returns:
+ Truncated string.
+ """
if len(s) <= max_length:
return s
+
+ if truncate_middle:
+ # Reserve 3 characters for '...'
+ if max_length <= 3:
+ return '...'
+
+ # Calculate how many characters to keep from beginning and end
+ half_length = (max_length - 3) // 2
+ remainder = (max_length - 3) % 2
+
+ # Keep one more character at the beginning if max_length - 3 is odd
+ start_length = half_length + remainder
+ end_length = half_length
+
+ # When end_length is 0, just show the start part and '...'
+ if end_length == 0:
+ return s[:start_length] + '...'
+ return s[:start_length] + '...' + s[-end_length:]
+
+ # Original end-truncation logic
splits = s.split(' ')
if len(splits[0]) > max_length:
return splits[0][:max_length] + '...' # Use '…'?
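A small worked example (stand-alone re-implementation, for illustration only) of how the middle-truncation branch above splits its character budget: with max_length=11, 8 characters remain after the '...', 4 kept at the start (plus the odd remainder, if any) and 4 at the end.

def truncate_middle(s: str, max_length: int) -> str:
    # Mirrors the truncate_middle=True branch above.
    if len(s) <= max_length:
        return s
    if max_length <= 3:
        return '...'
    half = (max_length - 3) // 2
    start = half + (max_length - 3) % 2  # start gets the odd remainder
    end = half
    if end == 0:
        return s[:start] + '...'
    return s[:start] + '...' + s[-end:]


print(truncate_middle('us-central1-docker-region', 11))  # 'us-c...gion'
print(truncate_middle('asia-northeast3-a', 10))          # 'asia...3-a'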
diff --git a/sky/utils/infra_utils.py b/sky/utils/infra_utils.py
new file mode 100644
index 00000000000..278475da51f
--- /dev/null
+++ b/sky/utils/infra_utils.py
@@ -0,0 +1,175 @@
+"""Utility functions for handling infrastructure specifications."""
+import dataclasses
+from typing import Optional
+
+from sky.utils import common_utils
+from sky.utils import ux_utils
+
+_REGION_OR_ZONE_TRUNCATION_LENGTH = 25
+
+
[email protected]
+class InfraInfo:
+ """Infrastructure information parsed from infra string.
+
+ When a field is None, it means the field is not specified.
+ """
+ cloud: Optional[str] = None
+ region: Optional[str] = None
+ zone: Optional[str] = None
+
+ def __init__(self,
+ cloud: Optional[str] = None,
+ region: Optional[str] = None,
+ zone: Optional[str] = None):
+ assert cloud not in ['none', 'None', 'NONE'], 'cloud must be specified'
+ if not cloud or cloud == '*':
+ cloud = None
+ if not region or region == '*':
+ region = None
+ if not zone or zone == '*':
+ zone = None
+
+ self.cloud = cloud
+ self.region = region
+ self.zone = zone
+
+ @staticmethod
+ def from_str(infra: Optional[str]) -> 'InfraInfo':
+ """Parse the infra string into cloud, region, and zone components.
+
+ The format of the infra string is `cloud`, `cloud/region`, or
+ `cloud/region/zone`. Examples: `aws`, `aws/us-east-1`,
+ `aws/us-east-1/us-east-1a`. For any field, you can use `*` to indicate
+ that any value is acceptable.
+
+ If `*` is used for any field, the InfraInfo will have None for that
+ field.
+
+ Args:
+ infra: A string in the format of `cloud`, `cloud/region`, or
+ `cloud/region/zone`. Examples: `aws`, `aws/us-east-1`,
+ `aws/us-east-1/us-east-1a`.
+
+ Returns:
+ An InfraInfo object containing cloud, region, and zone information.
+
+ Raises:
+ ValueError: If the infra string is malformed.
+ """
+ if infra is None or not infra.strip():
+ return InfraInfo()
+
+ infra = infra.strip().strip('/')
+
+ # Split on / to get cloud, region, zone
+ parts = [p.strip() for p in infra.strip().split('/')]
+
+ if '' in parts:
+ with ux_utils.print_exception_no_traceback():
+ raise ValueError(
+ f'Invalid infra format: {infra}. Format should not contain '
+ 'empty parts (e.g., double slashes "//").')
+
+ if not parts or not parts[0]:
+ with ux_utils.print_exception_no_traceback():
+ raise ValueError(
+ f'Invalid infra format: {infra}. Expected format is '
+ '"cloud", "cloud/region", or "cloud/region/zone".')
+
+ cloud_name: Optional[str] = parts[0].lower()
+
+ # Handle Kubernetes contexts specially, as they can contain slashes
+ if cloud_name in ['k8s', 'kubernetes']:
+ # For Kubernetes, the entire string after "k8s/" is the
+ # context name (region)
+ cloud_name = 'kubernetes' # Normalize k8s to kubernetes
+ region = '/'.join(parts[1:]) if len(parts) >= 2 else None
+ zone = None
+ else:
+ # For non-Kubernetes clouds, continue with regular parsing
+ # but be careful to only split into max 3 parts
+ region_zone_parts = parts[1:]
+ region = None
+ zone = None
+ if region_zone_parts:
+ region = region_zone_parts[0]
+ if len(region_zone_parts) > 1:
+ zone = region_zone_parts[1]
+ if len(region_zone_parts) > 2:
+ with ux_utils.print_exception_no_traceback():
+ raise ValueError(
+ f'Invalid infra format: {infra}. Expected format '
+ 'is "cloud", "cloud/region", or '
+ '"cloud/region/zone".')
+
+ if cloud_name == '*':
+ cloud_name = None
+ if region == '*':
+ region = None
+ if zone == '*':
+ zone = None
+ return InfraInfo(cloud=cloud_name, region=region, zone=zone)
+
+ def to_str(self) -> Optional[str]:
+ """Formats cloud, region, and zone into an infra string.
+
+        Returns:
+            A formatted infra string (e.g. 'aws/us-east-1'), or None if
+            none of cloud, region, and zone are specified.
+ """
+ cloud = self.cloud
+ region = self.region
+ zone = self.zone
+
+ if cloud is None:
+ cloud = '*'
+ if region is None:
+ region = '*'
+ if zone is None:
+ zone = '*'
+
+ # Build the parts list and filter out trailing wildcards
+ parts = [cloud.lower(), region, zone]
+ while parts and parts[-1] == '*':
+ parts.pop()
+
+ if not parts:
+ return None
+
+ # Join the parts with '/'
+ return '/'.join(parts)
+
+ def formatted_str(self, truncate: bool = True) -> str:
+        """Formats cloud, region, and zone into a human-readable string.
+
+ Args:
+ truncate: Whether to truncate the region or zone
+
+ Returns:
+            A human-readable string such as 'aws (us-east-1)', or '-' if
+            the cloud is not specified.
+ """
+ if self.cloud is None or self.cloud == '*':
+ return '-'
+
+ region_or_zone = None
+ if self.zone is not None and self.zone != '*':
+ region_or_zone = self.zone
+ elif self.region is not None and self.region != '*':
+ region_or_zone = self.region
+
+ if region_or_zone is not None and truncate:
+ region_or_zone = common_utils.truncate_long_string(
+ region_or_zone,
+ _REGION_OR_ZONE_TRUNCATION_LENGTH,
+ truncate_middle=True)
+
+ formatted_str = f'{self.cloud}'
+ if region_or_zone is not None:
+ formatted_str += f' ({region_or_zone})'
+
+ return formatted_str
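Usage sketch for the new helper (assuming SkyPilot with this patch is installed; the expected values are shown as comments and follow from the implementation above).

from sky.utils.infra_utils import InfraInfo

info = InfraInfo.from_str('gcp/*/us-central1-a')
# info.cloud == 'gcp', info.region is None, info.zone == 'us-central1-a'
print(info.to_str())         # gcp/*/us-central1-a
print(info.formatted_str())  # gcp (us-central1-a)

# Kubernetes context names may themselves contain slashes, so everything
# after 'k8s/' is kept as the context name and stored in `region`.
ctx = InfraInfo.from_str('k8s/arn:aws:eks:us-east-1:123:cluster/eks')
print(ctx.cloud, ctx.region)
# kubernetes arn:aws:eks:us-east-1:123:cluster/eks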
diff --git a/sky/utils/resources_utils.py b/sky/utils/resources_utils.py
index 60556e95b68..bd654f0b37a 100644
--- a/sky/utils/resources_utils.py
+++ b/sky/utils/resources_utils.py
@@ -4,11 +4,11 @@
import itertools
import json
import math
-import re
import typing
from typing import Dict, List, Optional, Set, Union
from sky import skypilot_config
+from sky.utils import common_utils
from sky.utils import registry
from sky.utils import ux_utils
@@ -139,34 +139,54 @@ def simplify_ports(ports: List[str]) -> List[str]:
def format_resource(resource: 'resources_lib.Resources',
simplify: bool = False) -> str:
+ resource = resource.assert_launchable()
+ vcpu, mem = resource.cloud.get_vcpus_mem_from_instance_type(
+ resource.instance_type)
+
+ components = []
+
+ if resource.accelerators is not None:
+ acc, count = list(resource.accelerators.items())[0]
+ components.append(f'gpus={acc}:{count}')
+
+ is_k8s = str(resource.cloud).lower() == 'kubernetes'
+ if (resource.accelerators is None or is_k8s or not simplify):
+ if vcpu is not None:
+ components.append(f'cpus={int(vcpu)}')
+ if mem is not None:
+ components.append(f'mem={int(mem)}')
+
+ instance_type = resource.instance_type
if simplify:
- resource = resource.assert_launchable()
- cloud = resource.cloud
- if resource.accelerators is None:
- vcpu, _ = cloud.get_vcpus_mem_from_instance_type(
- resource.instance_type)
- assert vcpu is not None, 'vCPU must be specified'
- hardware = f'vCPU={int(vcpu)}'
- else:
- hardware = f'{resource.accelerators}'
- spot = '[Spot]' if resource.use_spot else ''
- return f'{cloud}({spot}{hardware})'
+ instance_type = common_utils.truncate_long_string(instance_type, 15)
+ if not is_k8s:
+ components.append(instance_type)
+ if simplify:
+ components.append('...')
else:
- # accelerator_args is way too long.
- # Convert from:
- # GCP(n1-highmem-8, {'tpu-v2-8': 1}, accelerator_args={'runtime_version': '2.12.0'} # pylint: disable=line-too-long
- # to:
- # GCP(n1-highmem-8, {'tpu-v2-8': 1}...)
- pattern = ', accelerator_args={.*}'
- launched_resource_str = re.sub(pattern, '...', str(resource))
- return launched_resource_str
+ image_id = resource.image_id
+ if image_id is not None:
+ if None in image_id:
+ components.append(f'image_id={image_id[None]}')
+ else:
+ components.append(f'image_id={image_id}')
+ components.append(f'disk={resource.disk_size}')
+ disk_tier = resource.disk_tier
+ if disk_tier is not None:
+ components.append(f'disk_tier={disk_tier.value}')
+ ports = resource.ports
+ if ports is not None:
+ components.append(f'ports={ports}')
+
+ spot = '[spot]' if resource.use_spot else ''
+ return f'{spot}({"" if not components else ", ".join(components)})'
def get_readable_resources_repr(handle: 'backends.CloudVmRayResourceHandle',
simplify: bool = False) -> str:
if (handle.launched_nodes is not None and
handle.launched_resources is not None):
- return (f'{handle.launched_nodes}x '
+ return (f'{handle.launched_nodes}x'
f'{format_resource(handle.launched_resources, simplify)}')
return _DEFAULT_MESSAGE_HANDLE_INITIALIZING
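A stand-alone illustration (not calling SkyPilot) of the string the rewritten format_resource() builds for one spot GPU node with simplify=True: the accelerator, the (possibly truncated) instance type, and a trailing '...'. The concrete values below are assumed examples.

components = ['gpus=L4:1', 'g6.2xlarge', '...']
use_spot = True
launched_nodes = 1

spot = '[spot]' if use_spot else ''
resources_str = f'{spot}({", ".join(components)})'
print(f'{launched_nodes}x{resources_str}')
# -> 1x[spot](gpus=L4:1, g6.2xlarge, ...)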
diff --git a/sky/utils/schemas.py b/sky/utils/schemas.py
index 479c23cb7e1..7e22898c753 100644
--- a/sky/utils/schemas.py
+++ b/sky/utils/schemas.py
@@ -69,6 +69,39 @@ def _get_single_resources_schema():
# To avoid circular imports, only import when needed.
# pylint: disable=import-outside-toplevel
from sky.clouds import service_catalog
+
+ # Building the regex pattern for the infra field
+ # Format: cloud[/region[/zone]] or wildcards or kubernetes context
+ # Match any cloud name (case insensitive)
+ all_clouds = list(service_catalog.ALL_CLOUDS)
+ all_clouds.remove('kubernetes')
+ cloud_pattern = f'(?i:({"|".join(all_clouds)}))'
+
+ # Optional /region followed by optional /zone
+ # /[^/]+ matches a slash followed by any characters except slash (region or
+ # zone name)
+ # The outer (?:...)? makes the entire region/zone part optional
+ region_zone_pattern = '(?:/[^/]+(?:/[^/]+)?)?'
+
+ # Wildcard patterns:
+ # 1. * - any cloud
+ # 2. */region - any cloud with specific region
+ # 3. */*/zone - any cloud, any region, specific zone
+ wildcard_cloud = '\\*' # Wildcard for cloud
+ wildcard_with_region = '(?:/[^/]+(?:/[^/]+)?)?'
+
+ # Kubernetes specific pattern - matches:
+ # 1. Just the word "kubernetes" or "k8s" by itself
+ # 2. "k8s/" or "kubernetes/" followed by any context name (which may contain
+ # slashes)
+ kubernetes_pattern = '(?i:kubernetes|k8s)(?:/.+)?'
+
+ # Combine all patterns with alternation (|)
+ # ^ marks start of string, $ marks end of string
+ infra_pattern = (f'^(?:{cloud_pattern}{region_zone_pattern}|'
+ f'{wildcard_cloud}{wildcard_with_region}|'
+ f'{kubernetes_pattern})$')
+
return {
'$schema': 'https://json-schema.org/draft/2020-12/schema',
'type': 'object',
@@ -85,6 +118,21 @@ def _get_single_resources_schema():
'zone': {
'type': 'string',
},
+ 'infra': {
+ 'type': 'string',
+ 'description':
+ ('Infrastructure specification in format: '
+ 'cloud[/region[/zone]]. Use "*" as a wildcard.'),
+ # Pattern validates:
+ # 1. cloud[/region[/zone]] - e.g. "aws", "aws/us-east-1",
+ # "aws/us-east-1/us-east-1a"
+ # 2. Wildcard patterns - e.g. "*", "*/us-east-1",
+ # "*/*/us-east-1a", "aws/*/us-east-1a"
+ # 3. Kubernetes patterns - e.g. "kubernetes/my-context",
+ # "k8s/context-name",
+ # "k8s/aws:eks:us-east-1:123456789012:cluster/my-cluster"
+ 'pattern': infra_pattern,
+ },
'cpus': {
'anyOf': [{
'type': 'string',
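A rough, simplified validation sketch: this is not the exact pattern generated above (which enumerates all known clouds and uses scoped case-insensitive groups), but it accepts and rejects the same kinds of `infra` strings.

import re

simplified_infra = re.compile(
    r'^(?:'
    r'(?:aws|gcp|azure)(?:/[^/]+(?:/[^/]+)?)?'  # cloud[/region[/zone]]
    r'|\*(?:/[^/]+(?:/[^/]+)?)?'                # '*' wildcard cloud
    r'|(?:kubernetes|k8s)(?:/.+)?'              # k8s context, may contain '/'
    r')$',
    re.IGNORECASE)

for s in ('aws', 'aws/us-east-1', '*/*/us-east-1a',
          'k8s/arn:aws:eks:us-east-1:123:cluster/x', 'aws/a/b/c'):
    print(s, bool(simplified_infra.match(s)))
# Only 'aws/a/b/c' is rejected: at most cloud/region/zone is allowed.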
diff --git a/tests/common_test_fixtures.py b/tests/common_test_fixtures.py
index 10de8e89766..6ad96fb2531 100644
--- a/tests/common_test_fixtures.py
+++ b/tests/common_test_fixtures.py
@@ -133,6 +133,59 @@ def mock_get(url, *args, **kwargs):
monkeypatch.setattr(requests, "get", mock_get)
+# Define helper functions at module level for pickleability
+def get_cached_enabled_clouds_mock(enabled_clouds, *_, **__):
+ return enabled_clouds
+
+
+def dummy_function(*_, **__):
+ return None
+
+
+def get_az_mappings(*_, **__):
+ return pd.read_csv('tests/default_aws_az_mappings.csv')
+
+
+def list_empty_reservations(*_, **__):
+ return []
+
+
+def get_kubernetes_label_formatter(*_, **__):
+ return [kubernetes_utils.SkyPilotLabelFormatter, {}]
+
+
+def detect_accelerator_resource_mock(*_, **__):
+ return [True, []]
+
+
+def check_instance_fits_mock(*_, **__):
+ return [True, '']
+
+
+def get_spot_label_mock(*_, **__):
+ return [None, None]
+
+
+def is_kubeconfig_exec_auth_mock(*_, **__):
+ return [False, None]
+
+
+def regions_with_offering_mock(*_, **__):
+ return [sky.clouds.Region('my-k8s-cluster-context')]
+
+
+def check_quota_available_mock(*_, **__):
+ return True
+
+
+def mock_redirect_output(*_, **__):
+ return (None, None)
+
+
+def mock_restore_output(*_, **__):
+ return None
+
+
@pytest.fixture
def enable_all_clouds(monkeypatch, request, mock_client_requests):
"""Create mock context managers for cloud configurations."""
@@ -143,40 +196,43 @@ def enable_all_clouds(monkeypatch, request, mock_client_requests):
config_file = tempfile.NamedTemporaryFile(prefix='tmp_config_default',
delete=False).name
+ # Use a function that takes enabled_clouds as an argument
+ def get_clouds_factory(*args, **kwargs):
+ return get_cached_enabled_clouds_mock(enabled_clouds, *args, **kwargs)
+
# Mock all the functions
monkeypatch.setattr('sky.check.get_cached_enabled_clouds_or_refresh',
- lambda *_, **__: enabled_clouds)
- monkeypatch.setattr('sky.check.check_capability', lambda *_, **__: None)
+ get_clouds_factory)
+ monkeypatch.setattr('sky.check.check_capability', dummy_function)
monkeypatch.setattr(
'sky.clouds.service_catalog.aws_catalog._get_az_mappings',
- lambda *_, **__: pd.read_csv('tests/default_aws_az_mappings.csv'))
+ get_az_mappings)
monkeypatch.setattr('sky.backends.backend_utils.check_owner_identity',
- lambda *_, **__: None)
+ dummy_function)
monkeypatch.setattr(
'sky.clouds.utils.gcp_utils.list_reservations_for_instance_type_in_zone',
- lambda *_, **__: [])
+ list_empty_reservations)
# Kubernetes mocks
- monkeypatch.setattr('sky.adaptors.kubernetes._load_config',
- lambda *_, **__: None)
+ monkeypatch.setattr('sky.adaptors.kubernetes._load_config', dummy_function)
monkeypatch.setattr(
'sky.provision.kubernetes.utils.detect_gpu_label_formatter',
- lambda *_, **__: [kubernetes_utils.SkyPilotLabelFormatter, {}])
+ get_kubernetes_label_formatter)
monkeypatch.setattr(
'sky.provision.kubernetes.utils.detect_accelerator_resource',
- lambda *_, **__: [True, []])
+ detect_accelerator_resource_mock)
monkeypatch.setattr('sky.provision.kubernetes.utils.check_instance_fits',
- lambda *_, **__: [True, ''])
+ check_instance_fits_mock)
monkeypatch.setattr('sky.provision.kubernetes.utils.get_spot_label',
- lambda *_, **__: [None, None])
+ get_spot_label_mock)
monkeypatch.setattr('sky.clouds.kubernetes.kubernetes_utils.get_spot_label',
- lambda *_, **__: [None, None])
+ get_spot_label_mock)
monkeypatch.setattr(
'sky.provision.kubernetes.utils.is_kubeconfig_exec_auth',
- lambda *_, **__: [False, None])
+ is_kubeconfig_exec_auth_mock)
monkeypatch.setattr(
'sky.clouds.kubernetes.Kubernetes.regions_with_offering',
- lambda *_, **__: [sky.clouds.Region('my-k8s-cluster-context')])
+ regions_with_offering_mock)
# VSphere catalog mock
monkeypatch.setattr(vsphere_catalog, '_LOCAL_CATALOG',
@@ -186,7 +242,7 @@ def enable_all_clouds(monkeypatch, request, mock_client_requests):
for cloud in enabled_clouds:
if hasattr(cloud, 'check_quota_available'):
monkeypatch.setattr(cloud, 'check_quota_available',
- lambda *_, **__: True)
+ check_quota_available_mock)
# Environment variables
monkeypatch.setattr(
@@ -326,9 +382,9 @@ def mock_get_queue(schedule_type):
@pytest.fixture
def mock_redirect_log_file(monkeypatch):
monkeypatch.setattr('sky.server.requests.executor._redirect_output',
- lambda *_, **__: (None, None))
+ mock_redirect_output)
monkeypatch.setattr('sky.server.requests.executor._restore_output',
- lambda *_, **__: None)
+ mock_restore_output)
@pytest.fixture
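A quick sketch of why the lambdas in this fixture were promoted to module-level functions: named module-level functions can be pickled (they are resolved by qualified name), while lambdas cannot, which matters if the monkeypatched objects are ever serialized, e.g. when sent to another process (an assumption based on the "pickleability" comment above).

import pickle


def dummy(*_, **__):
    return None


pickle.dumps(dummy)  # OK: module-level functions pickle by qualified name.

try:
    pickle.dumps(lambda *_, **__: None)
except (pickle.PicklingError, AttributeError) as exc:
    print(f'lambda is not picklable: {exc}')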
diff --git a/tests/load_tests/test_distribute_load_on_server.py b/tests/load_tests/test_distribute_load_on_server.py
index 60c2dc26ed5..e2511b4e67c 100644
--- a/tests/load_tests/test_distribute_load_on_server.py
+++ b/tests/load_tests/test_distribute_load_on_server.py
@@ -79,9 +79,7 @@ def stream_log(req_id):
task = sky.Task(setup=setup, run=run)
task.set_file_mounts(file_mounts)
task.set_resources(
- sky.Resources(clouds.Kubernetes(),
- cpus=args.cpus,
- memory=args.memory))
+ sky.Resources(infra='k8s', cpus=args.cpus, memory=args.memory))
# Use launch instead of jobs launch for predictable client parallelism
resps.append(sky.launch(task, f'benchmark-{i}'))
try:
diff --git a/tests/skyserve/auto_restart.yaml b/tests/skyserve/auto_restart.yaml
index 5fd26ea8acd..6369034f08e 100644
--- a/tests/skyserve/auto_restart.yaml
+++ b/tests/skyserve/auto_restart.yaml
@@ -7,7 +7,7 @@ service:
resources:
ports: 8080
- cloud: gcp
+ infra: gcp
workdir: examples/serve/http_server
diff --git a/tests/skyserve/cancel/cancel.yaml b/tests/skyserve/cancel/cancel.yaml
index 2683e7dbb7b..883814fb03d 100644
--- a/tests/skyserve/cancel/cancel.yaml
+++ b/tests/skyserve/cancel/cancel.yaml
@@ -8,7 +8,7 @@ service:
resources:
ports: 9000
- cloud: gcp
+ infra: gcp
workdir: examples/serve/misc/cancel
diff --git a/tests/skyserve/high_availability/config.yaml b/tests/skyserve/high_availability/config.yaml
index 836894e2067..49100f4eff4 100644
--- a/tests/skyserve/high_availability/config.yaml
+++ b/tests/skyserve/high_availability/config.yaml
@@ -1,6 +1,6 @@
serve:
controller:
resources:
- cloud: kubernetes
+ infra: kubernetes
cpus: 2
high_availability: true
diff --git a/tests/skyserve/high_availability/service.yaml b/tests/skyserve/high_availability/service.yaml
index a33761535b6..b9b875fd830 100644
--- a/tests/skyserve/high_availability/service.yaml
+++ b/tests/skyserve/high_availability/service.yaml
@@ -7,7 +7,7 @@ service:
resources:
ports: 8080
- cloud: gcp
+ infra: gcp
cpus: 2+
workdir: examples/serve/http_server
diff --git a/tests/skyserve/http/aws.yaml b/tests/skyserve/http/aws.yaml
index edb562a5273..73b1da6bba9 100644
--- a/tests/skyserve/http/aws.yaml
+++ b/tests/skyserve/http/aws.yaml
@@ -6,7 +6,7 @@ service:
resources:
ports: 8080
- cloud: aws
+ infra: aws
workdir: examples/serve/http_server
diff --git a/tests/skyserve/http/azure.yaml b/tests/skyserve/http/azure.yaml
index 2f111a7d610..b0e869e9b13 100644
--- a/tests/skyserve/http/azure.yaml
+++ b/tests/skyserve/http/azure.yaml
@@ -6,7 +6,7 @@ service:
resources:
ports: 8081
- cloud: azure
+ infra: azure
workdir: examples/serve/http_server
diff --git a/tests/skyserve/http/gcp.yaml b/tests/skyserve/http/gcp.yaml
index b61f0c29fe3..81c2e24eaf4 100644
--- a/tests/skyserve/http/gcp.yaml
+++ b/tests/skyserve/http/gcp.yaml
@@ -6,7 +6,7 @@ service:
resources:
ports: 8080
- cloud: gcp
+ infra: gcp
workdir: examples/serve/http_server
diff --git a/tests/skyserve/http/kubernetes.yaml b/tests/skyserve/http/kubernetes.yaml
index 987304bb2d7..64a9033ead2 100644
--- a/tests/skyserve/http/kubernetes.yaml
+++ b/tests/skyserve/http/kubernetes.yaml
@@ -6,7 +6,7 @@ service:
resources:
ports: 8080
- cloud: kubernetes
+ infra: kubernetes
workdir: examples/serve/http_server
diff --git a/tests/skyserve/http/oci.yaml b/tests/skyserve/http/oci.yaml
index d7d98c18ab4..c9451634438 100644
--- a/tests/skyserve/http/oci.yaml
+++ b/tests/skyserve/http/oci.yaml
@@ -3,8 +3,8 @@ service:
replicas: 2
resources:
- cloud: oci
+ infra: oci
ports: 8080
cpus: 2+
-run: python -m http.server 8080
\ No newline at end of file
+run: python -m http.server 8080
diff --git a/tests/skyserve/llm/service.yaml b/tests/skyserve/llm/service.yaml
index dde5c9313b0..a848889ea9d 100644
--- a/tests/skyserve/llm/service.yaml
+++ b/tests/skyserve/llm/service.yaml
@@ -15,7 +15,7 @@ envs:
resources:
ports: 8087
- cloud: gcp
+ infra: gcp
accelerators: T4
cpus: 7+
memory: 20+
diff --git a/tests/skyserve/spot/dynamic_ondemand_fallback.yaml b/tests/skyserve/spot/dynamic_ondemand_fallback.yaml
index 2e8d692ecbd..00cb905eaa6 100644
--- a/tests/skyserve/spot/dynamic_ondemand_fallback.yaml
+++ b/tests/skyserve/spot/dynamic_ondemand_fallback.yaml
@@ -11,9 +11,8 @@ service:
resources:
ports: 8080
- cloud: gcp
+ infra: gcp/*/us-central1-a
cpus: 2+
- zone: us-central1-a
use_spot: true
workdir: examples/serve/http_server
diff --git a/tests/skyserve/spot/recovery.yaml b/tests/skyserve/spot/recovery.yaml
index 81cae7e1fc7..5efc467c6d6 100644
--- a/tests/skyserve/spot/recovery.yaml
+++ b/tests/skyserve/spot/recovery.yaml
@@ -7,8 +7,7 @@ service:
resources:
ports: 8080
- cloud: gcp
- zone: us-central1-a
+ infra: gcp/*/us-central1-a
use_spot: true
workdir: examples/serve/http_server
diff --git a/tests/skyserve/spot/spot_hedge.yaml b/tests/skyserve/spot/spot_hedge.yaml
index 88bbfeda052..f9dcb5e16c7 100644
--- a/tests/skyserve/spot/spot_hedge.yaml
+++ b/tests/skyserve/spot/spot_hedge.yaml
@@ -21,14 +21,12 @@ envs:
HF_TOKEN: # TODO: Fill with your own huggingface token, or use --env to pass.
resources:
- cloud: aws
+ infra: aws
any_of:
     # Enable all regions in AWS.
- - cloud: aws
+ - infra: aws
# Enable one in GCP.
- - cloud: gcp
- region: asia-northeast3
- zone: asia-northeast3-a
+ - infra: gcp/*/asia-northeast3-a
use_spot: true
accelerators: L4
ports: 9000 # Expose to internet traffic.
diff --git a/tests/skyserve/spot/spot_hedge_T4.yaml b/tests/skyserve/spot/spot_hedge_T4.yaml
index af949ee8904..641fd245428 100644
--- a/tests/skyserve/spot/spot_hedge_T4.yaml
+++ b/tests/skyserve/spot/spot_hedge_T4.yaml
@@ -23,13 +23,11 @@ envs:
resources:
any_of:
     # Enable all regions in AWS.
- - cloud: aws
+ - infra: aws
# region: us-east-1
# zone: us-east-1f
# Enable one zone in GCP.
- - cloud: gcp
- region: europe-west2
- zone: europe-west2-a
+ - infra: gcp/*/europe-west2-a
use_spot: true
accelerators: T4
ports: 9000 # Expose to internet traffic.
diff --git a/tests/skyserve/update/new.yaml b/tests/skyserve/update/new.yaml
index 4317af1b146..15982656d0e 100644
--- a/tests/skyserve/update/new.yaml
+++ b/tests/skyserve/update/new.yaml
@@ -7,7 +7,7 @@ service:
resources:
ports: 8081
- cloud: gcp
+ infra: gcp
workdir: tests/skyserve/update
diff --git a/tests/skyserve/update/num_min_one.yaml b/tests/skyserve/update/num_min_one.yaml
index e168af69af3..9dd84f38091 100644
--- a/tests/skyserve/update/num_min_one.yaml
+++ b/tests/skyserve/update/num_min_one.yaml
@@ -7,7 +7,7 @@ service:
resources:
ports: 8080
- cloud: gcp
+ infra: gcp
workdir: examples/serve/http_server
diff --git a/tests/skyserve/update/num_min_two.yaml b/tests/skyserve/update/num_min_two.yaml
index d4f26fdee8c..457ddd5849e 100644
--- a/tests/skyserve/update/num_min_two.yaml
+++ b/tests/skyserve/update/num_min_two.yaml
@@ -7,7 +7,7 @@ service:
resources:
ports: 8080
- cloud: gcp
+ infra: gcp
workdir: examples/serve/http_server
diff --git a/tests/skyserve/update/old.yaml b/tests/skyserve/update/old.yaml
index 38ef1cdcb60..666f8ff231a 100644
--- a/tests/skyserve/update/old.yaml
+++ b/tests/skyserve/update/old.yaml
@@ -7,7 +7,7 @@ service:
resources:
ports: 8080
- cloud: gcp
+ infra: gcp
workdir: tests/skyserve/update
diff --git a/tests/smoke_tests/smoke_tests_utils.py b/tests/smoke_tests/smoke_tests_utils.py
index 4396a73b4a9..d79026b6d14 100644
--- a/tests/smoke_tests/smoke_tests_utils.py
+++ b/tests/smoke_tests/smoke_tests_utils.py
@@ -37,10 +37,10 @@
# different job id.
test_id = str(uuid.uuid4())[-2:]
-LAMBDA_TYPE = '--cloud lambda --gpus A10'
-FLUIDSTACK_TYPE = '--cloud fluidstack --gpus RTXA4000'
+LAMBDA_TYPE = '--infra lambda --gpus A10'
+FLUIDSTACK_TYPE = '--infra fluidstack --gpus RTXA4000'
-SCP_TYPE = '--cloud scp'
+SCP_TYPE = '--infra scp'
SCP_GPU_V100 = '--gpus V100-32GB'
STORAGE_SETUP_COMMANDS = [
@@ -490,7 +490,7 @@ def get_aws_region_for_quota_failover() -> Optional[str]:
use_spot=True,
region=None,
zone=None)
- original_resources = sky.Resources(cloud=sky.AWS(),
+ original_resources = sky.Resources(infra='aws',
instance_type='p3.16xlarge',
use_spot=True)
@@ -517,7 +517,7 @@ def get_gcp_region_for_quota_failover() -> Optional[str]:
region=None,
zone=None)
- original_resources = sky.Resources(cloud=sky.GCP(),
+ original_resources = sky.Resources(infra='gcp',
instance_type='a2-ultragpu-1g',
accelerators={'A100-80GB': 1},
use_spot=True)
@@ -611,7 +611,7 @@ def launch_cluster_for_cloud_cmd(cloud: str, test_cluster_name: str) -> str:
return 'true'
else:
return (
- f'sky launch -y -c {cluster_name} --cloud {cloud} {LOW_RESOURCE_ARG} --async'
+ f'sky launch -y -c {cluster_name} --infra {cloud} {LOW_RESOURCE_ARG} --async'
)
diff --git a/tests/smoke_tests/test_basic.py b/tests/smoke_tests/test_basic.py
index 21ed1d37940..7cb87c5582a 100644
--- a/tests/smoke_tests/test_basic.py
+++ b/tests/smoke_tests/test_basic.py
@@ -55,12 +55,12 @@ def test_minimal(generic_cloud: str):
test = smoke_tests_utils.Test(
'minimal',
[
- f's=$(SKYPILOT_DEBUG=0 sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml) && {smoke_tests_utils.VALIDATE_LAUNCH_OUTPUT}',
+ f's=$(SKYPILOT_DEBUG=0 sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml) && {smoke_tests_utils.VALIDATE_LAUNCH_OUTPUT}',
# Output validation done.
f'sky logs {name} 1 --status',
f'sky logs {name} --status | grep "Job 1: SUCCEEDED"', # Equivalent.
# Test launch output again on existing cluster
- f's=$(SKYPILOT_DEBUG=0 sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml) && {smoke_tests_utils.VALIDATE_LAUNCH_OUTPUT}',
+ f's=$(SKYPILOT_DEBUG=0 sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml) && {smoke_tests_utils.VALIDATE_LAUNCH_OUTPUT}',
f'sky logs {name} 2 --status',
f'sky logs {name} --status | grep "Job 2: SUCCEEDED"', # Equivalent.
# Check the logs downloading
@@ -103,7 +103,7 @@ def test_launch_fast(generic_cloud: str):
'test_launch_fast',
[
# First launch to create the cluster
- f's=$(SKYPILOT_DEBUG=0 sky launch -y -c {name} --cloud {generic_cloud} --fast {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml) && {smoke_tests_utils.VALIDATE_LAUNCH_OUTPUT}',
+ f's=$(SKYPILOT_DEBUG=0 sky launch -y -c {name} --infra {generic_cloud} --fast {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml) && {smoke_tests_utils.VALIDATE_LAUNCH_OUTPUT}',
f'sky logs {name} 1 --status',
# Second launch to test fast launch - should not reprovision
@@ -138,7 +138,7 @@ def test_launch_fast_with_autostop(generic_cloud: str):
'test_launch_fast_with_autostop',
[
# First launch to create the cluster with a short autostop
- f's=$(SKYPILOT_DEBUG=0 sky launch -y -c {name} --cloud {generic_cloud} --fast -i 1 {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml) && {smoke_tests_utils.VALIDATE_LAUNCH_OUTPUT}',
+ f's=$(SKYPILOT_DEBUG=0 sky launch -y -c {name} --infra {generic_cloud} --fast -i 1 {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml) && {smoke_tests_utils.VALIDATE_LAUNCH_OUTPUT}',
f'sky logs {name} 1 --status',
f'sky status -r {name} | grep UP',
@@ -172,7 +172,7 @@ def test_launch_fast_with_cluster_changes(generic_cloud: str, tmp_path):
'test_launch_fast_with_cluster_changes',
[
# Initial launch
- f's=$(SKYPILOT_DEBUG=0 sky launch -y -c {name} --cloud {generic_cloud} --fast {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml) && {smoke_tests_utils.VALIDATE_LAUNCH_OUTPUT}',
+ f's=$(SKYPILOT_DEBUG=0 sky launch -y -c {name} --infra {generic_cloud} --fast {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml) && {smoke_tests_utils.VALIDATE_LAUNCH_OUTPUT}',
f'sky logs {name} 1 --status',
# Launch again - setup and provisioning should be skipped
@@ -209,14 +209,14 @@ def test_stale_job(generic_cloud: str):
test = smoke_tests_utils.Test(
'stale_job',
[
- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} "echo hi"',
+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} "echo hi"',
f'sky exec {name} -d "echo start; sleep 10000"',
- f'sky stop {name} -y',
+ f'sky stop -y {name}',
smoke_tests_utils.get_cmd_wait_until_cluster_status_contains(
cluster_name=name,
cluster_status=[sky.ClusterStatus.STOPPED],
timeout=100),
- f'sky start {name} -y',
+ f'sky start -y {name}',
f'sky logs {name} 1 --status',
f's=$(sky queue {name}); echo "$s"; echo; echo; echo "$s" | grep FAILED_DRIVER',
],
@@ -236,7 +236,7 @@ def test_aws_stale_job_manual_restart():
'aws_stale_job_manual_restart',
[
smoke_tests_utils.launch_cluster_for_cloud_cmd('aws', name),
- f'sky launch -y -c {name} --cloud aws --region {region} {smoke_tests_utils.LOW_RESOURCE_ARG} "echo hi"',
+ f'sky launch -y -c {name} --infra aws/us-east-2 {smoke_tests_utils.LOW_RESOURCE_ARG} "echo hi"',
f'sky exec {name} -d "echo start; sleep 10000"',
# Stop the cluster manually.
smoke_tests_utils.run_cloud_cmd_on_cluster(
@@ -283,7 +283,7 @@ def test_gcp_stale_job_manual_restart():
'gcp_stale_job_manual_restart',
[
smoke_tests_utils.launch_cluster_for_cloud_cmd('gcp', name),
- f'sky launch -y -c {name} --cloud gcp --zone {zone} {smoke_tests_utils.LOW_RESOURCE_ARG} "echo hi"',
+ f'sky launch -y -c {name} --infra gcp/*/us-central1-a {smoke_tests_utils.LOW_RESOURCE_ARG} "echo hi"',
f'sky exec {name} -d "echo start; sleep 10000"',
# Stop the cluster manually.
smoke_tests_utils.run_cloud_cmd_on_cluster(name, cmd=stop_cmd),
@@ -313,7 +313,7 @@ def test_env_check(generic_cloud: str):
test = smoke_tests_utils.Test(
'env_check',
[
- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} examples/env_check.yaml',
+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} examples/env_check.yaml',
f'sky logs {name} 1 --status', # Ensure the job succeeded.
# Test with only setup.
f'sky launch -y -c {name} tests/test_yamls/test_only_setup.yaml',
@@ -337,7 +337,7 @@ def test_cli_logs(generic_cloud: str):
num_nodes = 1
timestamp = time.time()
test = smoke_tests_utils.Test('cli_logs', [
- f'sky launch -y -c {name} --cloud {generic_cloud} --num-nodes {num_nodes} {smoke_tests_utils.LOW_RESOURCE_ARG} "echo {timestamp} 1"',
+ f'sky launch -y -c {name} --infra {generic_cloud} --num-nodes {num_nodes} {smoke_tests_utils.LOW_RESOURCE_ARG} "echo {timestamp} 1"',
f'sky exec {name} "echo {timestamp} 2"',
f'sky exec {name} "echo {timestamp} 3"',
f'sky exec {name} "echo {timestamp} 4"',
@@ -377,21 +377,23 @@ def test_scp_logs():
# These tests are for testing the return value of the APIs not fully used in CLI.
def test_core_api_sky_launch_exec(generic_cloud: str):
name = smoke_tests_utils.get_cluster_name()
- cloud = sky.CLOUD_REGISTRY.from_str(generic_cloud)
task = sky.Task(run="whoami")
task.set_resources(
- sky.Resources(cloud=cloud, **smoke_tests_utils.LOW_RESOURCE_PARAM))
+ sky.Resources(infra=generic_cloud,
+ **smoke_tests_utils.LOW_RESOURCE_PARAM))
try:
job_id, handle = sky.get(sky.launch(task, cluster_name=name))
assert job_id == 1
assert handle is not None
assert handle.cluster_name == name
- assert handle.launched_resources.cloud.is_same_cloud(cloud)
+ assert str(
+ handle.launched_resources.cloud).lower() == generic_cloud.lower()
job_id_exec, handle_exec = sky.get(sky.exec(task, cluster_name=name))
assert job_id_exec == 2
assert handle_exec is not None
assert handle_exec.cluster_name == name
- assert handle_exec.launched_resources.cloud.is_same_cloud(cloud)
+ assert str(handle_exec.launched_resources.cloud).lower(
+ ) == generic_cloud.lower()
# For dummy task (i.e. task.run is None), the job won't be submitted.
dummy_task = sky.Task()
job_id_dummy, _ = sky.get(sky.exec(dummy_task, cluster_name=name))
@@ -416,10 +418,10 @@ def test_core_api_sky_launch_exec(generic_cloud: str):
@pytest.mark.no_kubernetes
def test_core_api_sky_launch_fast(generic_cloud: str):
name = smoke_tests_utils.get_cluster_name()
- cloud = sky.CLOUD_REGISTRY.from_str(generic_cloud)
try:
task = sky.Task(run="whoami").set_resources(
- sky.Resources(cloud=cloud, **smoke_tests_utils.LOW_RESOURCE_PARAM))
+ sky.Resources(infra=generic_cloud,
+ **smoke_tests_utils.LOW_RESOURCE_PARAM))
sky.launch(task,
cluster_name=name,
idle_minutes_to_autostop=1,
@@ -444,9 +446,9 @@ def test_jobs_launch_and_logs(generic_cloud: str):
smoke_tests_utils.LOW_CONTROLLER_RESOURCE_OVERRIDE_CONFIG):
name = smoke_tests_utils.get_cluster_name()
task = sky.Task(run="echo start job; sleep 30; echo end job")
- cloud = sky.CLOUD_REGISTRY.from_str(generic_cloud)
task.set_resources(
- sky.Resources(cloud=cloud, **smoke_tests_utils.LOW_RESOURCE_PARAM))
+ sky.Resources(infra=generic_cloud,
+ **smoke_tests_utils.LOW_RESOURCE_PARAM))
job_id, handle = sky.stream_and_get(sky.jobs.launch(task, name=name))
assert handle is not None
# Check the job status from the dashboard
@@ -558,7 +560,7 @@ def test_multiple_accelerators_ordered_with_default():
[
f'sky launch -y -c {name} tests/test_yamls/test_multiple_accelerators_ordered_with_default.yaml | grep "Using user-specified accelerators list"',
f'sky logs {name} 1 --status', # Ensure the job succeeded.
- f'sky status {name} | grep Spot',
+ f'sky status {name} | grep spot',
],
f'sky down -y {name}',
)
@@ -593,7 +595,7 @@ def test_multiple_accelerators_unordered_with_default():
[
f'sky launch -y -c {name} tests/test_yamls/test_multiple_accelerators_unordered_with_default.yaml',
f'sky logs {name} 1 --status', # Ensure the job succeeded.
- f'sky status {name} | grep Spot',
+ f'sky status {name} | grep spot',
],
f'sky down -y {name}',
)
@@ -627,7 +629,7 @@ def test_sky_bench(generic_cloud: str):
test = smoke_tests_utils.Test(
'sky-bench',
[
- f'sky bench launch -y -b {name} --cloud {generic_cloud} -i0 tests/test_yamls/minimal.yaml',
+ f'sky bench launch -y -b {name} --infra {generic_cloud} -i0 tests/test_yamls/minimal.yaml',
'sleep 120',
f'sky bench show {name} | grep sky-bench-{name} | grep FINISHED',
],
@@ -738,40 +740,34 @@ def test_kubernetes_context_failover(unreachable_context):
'kubectl get namespaces --context kind-skypilot | grep test-namespace || '
'{ echo "Should set the namespace to test-namespace for kind-skypilot. Check the instructions in '
'tests/test_smoke.py::test_kubernetes_context_failover." && exit 1; }',
- 'sky show-gpus --cloud kubernetes --region kind-skypilot | grep H100 | grep "1, 2, 4, 8"',
+ 'sky show-gpus --infra kubernetes/kind-skypilot | grep H100 | grep "1, 2, 4, 8"',
# Get contexts and set current context to the other cluster that is not kind-skypilot
f'kubectl config use-context {context}',
# H100 should not be in the current context
- f'! sky show-gpus --cloud kubernetes --region {context} | grep H100',
+ f'! sky show-gpus --infra kubernetes/{context} | grep H100',
# H100 should be displayed as long as it is available in one of the contexts
- 'sky show-gpus --cloud kubernetes | grep H100',
+ 'sky show-gpus --infra kubernetes | grep H100',
f'sky launch -y -c {name}-1 --cpus 1 echo hi',
f'sky logs {name}-1 --status',
# It should be launched not on kind-skypilot
f'sky status -v {name}-1 | grep "{context}"',
# Test failure for launching H100 on other cluster
- f'sky launch -y -c {name}-2 --gpus H100 --cpus 1 --cloud kubernetes --region {context} echo hi && exit 1 || true',
+ f'sky launch -y -c {name}-2 --gpus H100 --cpus 1 --infra kubernetes/{context} echo hi && exit 1 || true',
# Test failover
- f'sky launch -y -c {name}-3 --gpus H100 --cpus 1 --cloud kubernetes echo hi',
+ f'sky launch -y -c {name}-3 --gpus H100 --cpus 1 --infra kubernetes echo hi',
f'sky logs {name}-3 --status',
# Test pods
f'kubectl get pods --context kind-skypilot | grep "{name}-3"',
# It should be launched on kind-skypilot
f'sky status -v {name}-3 | grep "kind-skypilot"',
# Should be 7 free GPUs
- f'sky show-gpus --cloud kubernetes --region kind-skypilot | grep H100 | grep " 7"',
+ f'sky show-gpus --infra kubernetes/kind-skypilot | grep H100 | grep " 7"',
# Remove the line with "kind-skypilot"
f'sed -i "/kind-skypilot/d" {f.name}',
- # Should still be able to exec and launch on existing cluster
- f'sky exec {name}-3 "echo hi"',
- f'sky logs {name}-3 --status',
- f'sky status -r {name}-3 | grep UP',
- f'sky launch -c {name}-3 --gpus h100 echo hi',
- f'sky logs {name}-3 --status',
- f'sky status -r {name}-3 | grep UP',
+ f'export KUBECONFIG={f.name}',
# Test failure for launching on unreachable context
f'kubectl config use-context {unreachable_context}',
- f'sky launch -y -c {name}-4 --gpus H100 --cpus 1 --cloud kubernetes --region {unreachable_context} echo hi && exit 1 || true',
+ f'sky launch -y -c {name}-4 --gpus H100 --cpus 1 --infra kubernetes/{unreachable_context} echo hi && exit 1 || true',
# Test failover from unreachable context
f'sky launch -y -c {name}-5 --cpus 1 echo hi',
],
@@ -836,7 +832,7 @@ def test_cancel_launch_and_exec_async(generic_cloud: str):
wait_cmd = wait_cmd.replace('sleep 10', 'sleep 1')
test = smoke_tests_utils.Test(
'cancel_launch_and_exec_async', [
- f'sky launch -c {name} -y --cloud {generic_cloud} --async',
+ f'sky launch -c {name} -y --infra {generic_cloud} --async',
(f's=$(sky exec {name} echo --async) && '
'echo "$s" && '
'logs_cmd=$(echo "$s" | grep "Check logs with" | '
@@ -861,7 +857,7 @@ def test_cli_exit_codes(generic_cloud: str):
'cli_exit_codes',
[
# Test successful job exit code (0)
- f'sky launch -y -c {name} --cloud {generic_cloud} "echo success" && echo "Exit code: $?"',
+ f'sky launch -y -c {name} --infra {generic_cloud} "echo success" && echo "Exit code: $?"',
f'sky logs {name} 1 --status | grep SUCCEEDED',
# Test that sky logs with successful job returns 0
diff --git a/tests/smoke_tests/test_cluster_job.py b/tests/smoke_tests/test_cluster_job.py
index 0437d35bd2b..8bf3f635bf9 100644
--- a/tests/smoke_tests/test_cluster_job.py
+++ b/tests/smoke_tests/test_cluster_job.py
@@ -54,7 +54,7 @@ def test_job_queue(generic_cloud: str, accelerator: Dict[str, str]):
test = smoke_tests_utils.Test(
'job_queue',
[
- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus {accelerator} examples/job_queue/cluster.yaml',
+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus {accelerator} examples/job_queue/cluster.yaml',
f'sky exec {name} -n {name}-1 -d --gpus {accelerator}:0.5 examples/job_queue/job.yaml',
f'sky exec {name} -n {name}-2 -d --gpus {accelerator}:0.5 examples/job_queue/job.yaml',
f'sky exec {name} -n {name}-3 -d --gpus {accelerator}:0.5 examples/job_queue/job.yaml',
@@ -115,7 +115,7 @@ def test_job_queue_with_docker(generic_cloud: str, image_id: str,
test = smoke_tests_utils.Test(
'job_queue_with_docker',
[
- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus {accelerator} --image-id {image_id} examples/job_queue/cluster_docker.yaml',
+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus {accelerator} --image-id {image_id} examples/job_queue/cluster_docker.yaml',
f'sky exec {name} -n {name}-1 -d --gpus {accelerator}:0.5 --image-id {image_id} --env TIME_TO_SLEEP={time_to_sleep*2} examples/job_queue/job_docker.yaml',
f'sky exec {name} -n {name}-2 -d --gpus {accelerator}:0.5 --image-id {image_id} --env TIME_TO_SLEEP={time_to_sleep} examples/job_queue/job_docker.yaml',
f'sky exec {name} -n {name}-3 -d --gpus {accelerator}:0.5 --image-id {image_id} --env TIME_TO_SLEEP={time_to_sleep} examples/job_queue/job_docker.yaml',
@@ -180,10 +180,10 @@ def test_ibm_job_queue():
test = smoke_tests_utils.Test(
'ibm_job_queue',
[
- f'sky launch -y -c {name} --cloud ibm --gpus v100',
- f'sky exec {name} -n {name}-1 --cloud ibm -d examples/job_queue/job_ibm.yaml',
- f'sky exec {name} -n {name}-2 --cloud ibm -d examples/job_queue/job_ibm.yaml',
- f'sky exec {name} -n {name}-3 --cloud ibm -d examples/job_queue/job_ibm.yaml',
+ f'sky launch -y -c {name} --infra ibm --gpus v100',
+ f'sky exec {name} -n {name}-1 --infra ibm -d examples/job_queue/job_ibm.yaml',
+ f'sky exec {name} -n {name}-2 --infra ibm -d examples/job_queue/job_ibm.yaml',
+ f'sky exec {name} -n {name}-3 --infra ibm -d examples/job_queue/job_ibm.yaml',
f'sky queue {name} | grep {name}-1 | grep RUNNING',
f'sky queue {name} | grep {name}-2 | grep RUNNING',
f'sky queue {name} | grep {name}-3 | grep PENDING',
@@ -239,7 +239,7 @@ def test_job_queue_multinode(generic_cloud: str, accelerator: Dict[str, str]):
test = smoke_tests_utils.Test(
'job_queue_multinode',
[
- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus {accelerator} examples/job_queue/cluster_multinode.yaml',
+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus {accelerator} examples/job_queue/cluster_multinode.yaml',
f'sky exec {name} -n {name}-1 -d --gpus {accelerator}:0.5 examples/job_queue/job_multinode.yaml',
f'sky exec {name} -n {name}-2 -d --gpus {accelerator}:0.5 examples/job_queue/job_multinode.yaml',
f'sky launch -c {name} -n {name}-3 -d --gpus {accelerator}:0.5 examples/job_queue/job_multinode.yaml',
@@ -280,7 +280,7 @@ def test_large_job_queue(generic_cloud: str):
test = smoke_tests_utils.Test(
'large_job_queue',
[
- f'sky launch -y -c {name} --cpus 8 --cloud {generic_cloud}',
+ f'sky launch -y -c {name} --cpus 8 --infra {generic_cloud}',
f'for i in `seq 1 75`; do sky exec {name} -n {name}-$i -d "echo $i; sleep 100000000"; done',
f'sky cancel -y {name} 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16',
'sleep 90',
@@ -328,7 +328,7 @@ def test_fast_large_job_queue(generic_cloud: str):
test = smoke_tests_utils.Test(
'fast_large_job_queue',
[
- f'sky launch -y -c {name} --cpus 8 --cloud {generic_cloud}',
+ f'sky launch -y -c {name} --cpus 8 --infra {generic_cloud}',
f'for i in `seq 1 32`; do sky exec {name} -n {name}-$i -d "echo $i"; done',
'sleep 60',
f's=$(sky queue {name}); echo "$s"; echo; echo; echo "$s" | grep -v grep | grep SUCCEEDED | wc -l | grep 32',
@@ -346,7 +346,7 @@ def test_ibm_job_queue_multinode():
test = smoke_tests_utils.Test(
'ibm_job_queue_multinode',
[
- f'sky launch -y -c {name} --cloud ibm --gpus v100 --num-nodes 2',
+ f'sky launch -y -c {name} --infra ibm --gpus v100 --num-nodes 2',
f'sky exec {name} -n {name}-1 -d {task_file}',
f'sky exec {name} -n {name}-2 -d {task_file}',
f'sky launch -y -c {name} -n {name}-3 -d {task_file}',
@@ -391,7 +391,7 @@ def test_docker_preinstalled_package(generic_cloud: str):
test = smoke_tests_utils.Test(
'docker_with_preinstalled_package',
[
- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --image-id docker:nginx',
+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --image-id docker:nginx',
f'sky exec {name} "nginx -V"',
f'sky logs {name} 1 --status',
f'sky exec {name} whoami | grep root',
@@ -471,7 +471,7 @@ def test_huggingface(generic_cloud: str, accelerator: Dict[str, str]):
test = smoke_tests_utils.Test(
'huggingface_glue_imdb_app',
[
- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus {accelerator} examples/huggingface_glue_imdb_app.yaml',
+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus {accelerator} examples/huggingface_glue_imdb_app.yaml',
f'sky logs {name} 1 --status', # Ensure the job succeeded.
f'sky exec {name} --gpus {accelerator} examples/huggingface_glue_imdb_app.yaml',
f'sky logs {name} 2 --status', # Ensure the job succeeded.
@@ -623,7 +623,7 @@ def test_multi_hostname(generic_cloud: str):
test = smoke_tests_utils.Test(
'multi_hostname',
[
- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} examples/multi_hostname.yaml',
+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} examples/multi_hostname.yaml',
f'sky logs {name} 1 --status', # Ensure the job succeeded.
f'sky logs {name} 1 | grep "My hostname:" | wc -l | grep 2', # Ensure there are 2 hosts.
f'sky exec {name} examples/multi_hostname.yaml',
@@ -643,7 +643,7 @@ def test_multi_node_failure(generic_cloud: str):
test = smoke_tests_utils.Test(
'multi_node_failure',
[
- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/failed_worker_setup.yaml || [ $? -eq 100 ]',
+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/failed_worker_setup.yaml || [ $? -eq 100 ]',
f'sky logs {name} 1 --status | grep FAILED_SETUP', # Ensure the job setup failed.
f'sky exec {name} tests/test_yamls/failed_worker_run.yaml || [ $? -eq 100 ]',
f'sky logs {name} 2 --status | grep FAILED', # Ensure the job failed.
@@ -661,7 +661,7 @@ def test_gcp_http_server_with_custom_ports():
test = smoke_tests_utils.Test(
'gcp_http_server_with_custom_ports',
[
- f'sky launch -y -d -c {name} --cloud gcp {smoke_tests_utils.LOW_RESOURCE_ARG} examples/http_server_with_custom_ports/task.yaml',
+ f'sky launch -y -d -c {name} --infra gcp {smoke_tests_utils.LOW_RESOURCE_ARG} examples/http_server_with_custom_ports/task.yaml',
f'until SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}; do sleep 10; done',
# Retry a few times to avoid flakiness in ports being open.
f'ip=$(SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}); success=false; for i in $(seq 1 5); do if curl $ip | grep "<h1>This is a demo HTML page.</h1>"; then success=true; break; fi; sleep 10; done; if [ "$success" = false ]; then exit 1; fi',
@@ -678,7 +678,7 @@ def test_aws_http_server_with_custom_ports():
test = smoke_tests_utils.Test(
'aws_http_server_with_custom_ports',
[
- f'sky launch -y -d -c {name} --cloud aws {smoke_tests_utils.LOW_RESOURCE_ARG} examples/http_server_with_custom_ports/task.yaml',
+ f'sky launch -y -d -c {name} --infra aws {smoke_tests_utils.LOW_RESOURCE_ARG} examples/http_server_with_custom_ports/task.yaml',
f'until SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}; do sleep 10; done',
# Retry a few times to avoid flakiness in ports being open.
f'ip=$(SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}); success=false; for i in $(seq 1 5); do if curl $ip | grep "<h1>This is a demo HTML page.</h1>"; then success=true; break; fi; sleep 10; done; if [ "$success" = false ]; then exit 1; fi'
@@ -695,7 +695,7 @@ def test_azure_http_server_with_custom_ports():
test = smoke_tests_utils.Test(
'azure_http_server_with_custom_ports',
[
- f'sky launch -y -d -c {name} --cloud azure {smoke_tests_utils.LOW_RESOURCE_ARG} examples/http_server_with_custom_ports/task.yaml',
+ f'sky launch -y -d -c {name} --infra azure {smoke_tests_utils.LOW_RESOURCE_ARG} examples/http_server_with_custom_ports/task.yaml',
f'until SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}; do sleep 10; done',
# Retry a few times to avoid flakiness in ports being open.
f'ip=$(SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}); success=false; for i in $(seq 1 5); do if curl $ip | grep "<h1>This is a demo HTML page.</h1>"; then success=true; break; fi; sleep 10; done; if [ "$success" = false ]; then exit 1; fi'
@@ -713,7 +713,7 @@ def test_kubernetes_http_server_with_custom_ports():
test = smoke_tests_utils.Test(
'kubernetes_http_server_with_custom_ports',
[
- f'sky launch -y -d -c {name} --cloud kubernetes examples/http_server_with_custom_ports/task.yaml',
+ f'sky launch -y -d -c {name} --infra kubernetes examples/http_server_with_custom_ports/task.yaml',
f'until SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}; do sleep 10; done',
# Retry a few times to avoid flakiness in ports being open.
f'ip=$(SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}); success=false; for i in $(seq 1 100); do if curl $ip | grep "<h1>This is a demo HTML page.</h1>"; then success=true; break; fi; sleep 5; done; if [ "$success" = false ]; then exit 1; fi'
@@ -730,7 +730,7 @@ def test_paperspace_http_server_with_custom_ports():
test = smoke_tests_utils.Test(
'paperspace_http_server_with_custom_ports',
[
- f'sky launch -y -d -c {name} --cloud paperspace examples/http_server_with_custom_ports/task.yaml',
+ f'sky launch -y -d -c {name} --infra paperspace examples/http_server_with_custom_ports/task.yaml',
f'until SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}; do sleep 10; done',
# Retry a few times to avoid flakiness in ports being open.
f'ip=$(SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}); success=false; for i in $(seq 1 5); do if curl $ip | grep "<h1>This is a demo HTML page.</h1>"; then success=true; break; fi; sleep 10; done; if [ "$success" = false ]; then exit 1; fi',
@@ -747,7 +747,7 @@ def test_runpod_http_server_with_custom_ports():
test = smoke_tests_utils.Test(
'runpod_http_server_with_custom_ports',
[
- f'sky launch -y -d -c {name} --cloud runpod examples/http_server_with_custom_ports/task.yaml',
+ f'sky launch -y -d -c {name} --infra runpod examples/http_server_with_custom_ports/task.yaml',
f'until SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}; do sleep 10; done',
# Retry a few times to avoid flakiness in ports being open.
f'ip=$(SKYPILOT_DEBUG=0 sky status --endpoint 33828 {name}); success=false; for i in $(seq 1 5); do if curl $ip | grep "<h1>This is a demo HTML page.</h1>"; then success=true; break; fi; sleep 10; done; if [ "$success" = false ]; then exit 1; fi',
@@ -862,7 +862,7 @@ def test_add_pod_annotations_for_autodown_with_launch():
smoke_tests_utils.launch_cluster_for_cloud_cmd('kubernetes', name),
# Launch Kubernetes cluster with two nodes, each being head node and worker node.
# Autodown is set.
- f'sky launch -y -c {name} -i 10 --down --num-nodes 2 --cpus=1 --cloud kubernetes',
+ f'sky launch -y -c {name} -i 10 --down --num-nodes 2 --cpus=1 --infra kubernetes',
# Get names of the pods containing cluster name.
smoke_tests_utils.run_cloud_cmd_on_cluster(
name,
@@ -894,7 +894,7 @@ def test_add_and_remove_pod_annotations_with_autostop():
[
smoke_tests_utils.launch_cluster_for_cloud_cmd('kubernetes', name),
# Launch Kubernetes cluster with two nodes, each being head node and worker node.
- f'sky launch -y -c {name} --num-nodes 2 --cpus=1 --cloud kubernetes',
+ f'sky launch -y -c {name} --num-nodes 2 --cpus=1 --infra kubernetes',
# Set autodown on the cluster with 'autostop' command.
f'sky autostop -y {name} -i 20 --down',
# Get names of the pods containing cluster name.
@@ -1216,7 +1216,7 @@ def test_autostop(generic_cloud: str):
test = smoke_tests_utils.Test(
'autostop',
[
- f'sky launch -y -d -c {name} --num-nodes 2 --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml',
+ f'sky launch -y -d -c {name} --num-nodes 2 --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml',
f'sky autostop -y {name} -i 1',
# Ensure autostop is set.
@@ -1285,7 +1285,7 @@ def test_autodown(generic_cloud: str):
test = smoke_tests_utils.Test(
'autodown',
[
- f'sky launch -y -d -c {name} --num-nodes 2 --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml',
+ f'sky launch -y -d -c {name} --num-nodes 2 --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml',
f'sky autostop -y {name} --down -i 1',
check_autostop_set,
# Ensure the cluster is not terminated early.
@@ -1294,14 +1294,14 @@ def test_autodown(generic_cloud: str):
# Ensure the cluster is terminated.
f'sleep {autodown_timeout}',
f's=$(SKYPILOT_DEBUG=0 sky status {name} --refresh) && echo "$s" && {{ echo "$s" | grep {name} | grep "Autodowned cluster\|Cluster \'{name}\' not found"; }} || {{ echo "$s" | grep {name} && exit 1 || exit 0; }}',
- f'sky launch -y -d -c {name} --cloud {generic_cloud} --num-nodes 2 --down {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml',
+ f'sky launch -y -d -c {name} --infra {generic_cloud} --num-nodes 2 --down {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml',
f'sky status | grep {name} | grep UP', # Ensure the cluster is UP.
- f'sky exec {name} --cloud {generic_cloud} tests/test_yamls/minimal.yaml',
+ f'sky exec {name} --infra {generic_cloud} tests/test_yamls/minimal.yaml',
check_autostop_set,
f'sleep {autodown_timeout}',
# Ensure the cluster is terminated.
f's=$(SKYPILOT_DEBUG=0 sky status {name} --refresh) && echo "$s" && {{ echo "$s" | grep {name} | grep "Autodowned cluster\|Cluster \'{name}\' not found"; }} || {{ echo "$s" | grep {name} && exit 1 || exit 0; }}',
- f'sky launch -y -d -c {name} --cloud {generic_cloud} --num-nodes 2 --down {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml',
+ f'sky launch -y -d -c {name} --infra {generic_cloud} --num-nodes 2 --down {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml',
f'sky autostop -y {name} --cancel',
f'sleep {autodown_timeout}',
# Ensure the cluster is still UP.
@@ -1352,7 +1352,7 @@ def _get_cancel_task_with_cloud(name, cloud, timeout=15 * 60):
test = smoke_tests_utils.Test(
f'{cloud}-cancel-task',
[
- f'sky launch -c {name} examples/resnet_app.yaml --cloud {cloud} -y -d',
+ f'sky launch -c {name} examples/resnet_app.yaml --infra {cloud} -y -d',
# Wait the job to be scheduled and finished setup.
f'until sky queue {name} | grep "RUNNING"; do sleep 10; done',
# Wait the setup and initialize before the GPU process starts.
@@ -1407,7 +1407,7 @@ def test_cancel_pytorch(generic_cloud: str, accelerator: Dict[str, str]):
test = smoke_tests_utils.Test(
'cancel-pytorch',
[
- f'sky launch -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus {accelerator} examples/resnet_distributed_torch.yaml -y -d',
+ f'sky launch -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus {accelerator} examples/resnet_distributed_torch.yaml -y -d',
# Wait until the setup finishes.
smoke_tests_utils.
get_cmd_wait_until_job_status_contains_matching_job_id(
@@ -1444,7 +1444,7 @@ def test_cancel_ibm():
test = smoke_tests_utils.Test(
'ibm-cancel-task',
[
- f'sky launch -y -c {name} --cloud ibm examples/minimal.yaml',
+ f'sky launch -y -c {name} --infra ibm examples/minimal.yaml',
f'sky exec {name} -n {name}-1 -d "while true; do echo \'Hello SkyPilot\'; sleep 2; done"',
'sleep 20',
f'sky queue {name} | grep {name}-1 | grep RUNNING',
@@ -1472,7 +1472,7 @@ def test_use_spot(generic_cloud: str):
test = smoke_tests_utils.Test(
'use-spot',
[
- f'sky launch -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml --use-spot -y',
+ f'sky launch -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml --use-spot -y',
f'sky logs {name} 1 --status',
f'sky exec {name} echo hi',
f'sky logs {name} 2 --status',
@@ -1493,7 +1493,7 @@ def test_azure_spot_instance_verification():
test = smoke_tests_utils.Test(
'azure-spot-verification',
[
- f'sky launch -c {name} --cloud azure {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml --use-spot -y',
+ f'sky launch -c {name} --infra azure {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml --use-spot -y',
f'sky logs {name} 1 --status', f'TARGET_VM_NAME="{name}"; '
'VM_INFO=$(az vm list --query "[?contains(name, \'$TARGET_VM_NAME\')].{Name:name, ResourceGroup:resourceGroup}" -o tsv); '
'[[ -z "$VM_INFO" ]] && exit 1; '
@@ -1517,7 +1517,7 @@ def test_stop_gcp_spot():
test = smoke_tests_utils.Test(
'stop_gcp_spot',
[
- f'sky launch -c {name} --cloud gcp {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot -y -- touch myfile',
+ f'sky launch -c {name} --infra gcp {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot -y -- touch myfile',
# stop should go through:
f'sky stop {name} -y',
f'sky start {name} -y',
@@ -1550,7 +1550,7 @@ def test_inline_env(generic_cloud: str):
test = smoke_tests_utils.Test(
'test-inline-env',
[
- f'sky launch -c {name} -y --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --env TEST_ENV="hello world" -- "([[ ! -z \\"\$TEST_ENV\\" ]] && [[ ! -z \\"\${constants.SKYPILOT_NODE_IPS}\\" ]] && [[ ! -z \\"\${constants.SKYPILOT_NODE_RANK}\\" ]] && [[ ! -z \\"\${constants.SKYPILOT_NUM_NODES}\\" ]]) || exit 1"',
+ f'sky launch -c {name} -y --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --env TEST_ENV="hello world" -- "([[ ! -z \\"\$TEST_ENV\\" ]] && [[ ! -z \\"\${constants.SKYPILOT_NODE_IPS}\\" ]] && [[ ! -z \\"\${constants.SKYPILOT_NODE_RANK}\\" ]] && [[ ! -z \\"\${constants.SKYPILOT_NUM_NODES}\\" ]]) || exit 1"',
'sleep 20',
f'sky logs {name} 1 --status',
f'sky exec {name} --env TEST_ENV2="success" "([[ ! -z \\"\$TEST_ENV2\\" ]] && [[ ! -z \\"\${constants.SKYPILOT_NODE_IPS}\\" ]] && [[ ! -z \\"\${constants.SKYPILOT_NODE_RANK}\\" ]] && [[ ! -z \\"\${constants.SKYPILOT_NUM_NODES}\\" ]]) || exit 1"',
@@ -1569,7 +1569,7 @@ def test_inline_env_file(generic_cloud: str):
test = smoke_tests_utils.Test(
'test-inline-env-file',
[
- f'sky launch -c {name} -y --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --env TEST_ENV="hello world" -- "([[ ! -z \\"\$TEST_ENV\\" ]] && [[ ! -z \\"\${constants.SKYPILOT_NODE_IPS}\\" ]] && [[ ! -z \\"\${constants.SKYPILOT_NODE_RANK}\\" ]] && [[ ! -z \\"\${constants.SKYPILOT_NUM_NODES}\\" ]]) || exit 1"',
+ f'sky launch -c {name} -y --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --env TEST_ENV="hello world" -- "([[ ! -z \\"\$TEST_ENV\\" ]] && [[ ! -z \\"\${constants.SKYPILOT_NODE_IPS}\\" ]] && [[ ! -z \\"\${constants.SKYPILOT_NODE_RANK}\\" ]] && [[ ! -z \\"\${constants.SKYPILOT_NUM_NODES}\\" ]]) || exit 1"',
f'sky logs {name} 1 --status',
f'sky exec {name} --env-file examples/sample_dotenv "([[ ! -z \\"\$TEST_ENV2\\" ]] && [[ ! -z \\"\${constants.SKYPILOT_NODE_IPS}\\" ]] && [[ ! -z \\"\${constants.SKYPILOT_NODE_RANK}\\" ]] && [[ ! -z \\"\${constants.SKYPILOT_NUM_NODES}\\" ]]) || exit 1"',
f'sky logs {name} 2 --status',
@@ -1588,7 +1588,7 @@ def test_aws_custom_image():
test = smoke_tests_utils.Test(
'test-aws-custom-image',
[
- f'sky launch -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --retry-until-up -y tests/test_yamls/test_custom_image.yaml --cloud aws --region us-east-2 --image-id ami-062ddd90fb6f8267a', # Nvidia image
+ f'sky launch -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --retry-until-up -y tests/test_yamls/test_custom_image.yaml --infra aws/us-east-2 --image-id ami-062ddd90fb6f8267a', # Nvidia image
f'sky logs {name} 1 --status',
],
f'sky down -y {name}',
@@ -1616,10 +1616,10 @@ def test_kubernetes_custom_image(image_id):
test = smoke_tests_utils.Test(
'test-kubernetes-custom-image',
[
- f'sky launch -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --retry-until-up -y tests/test_yamls/test_custom_image.yaml --cloud kubernetes --image-id {image_id} --region None --gpus T4:1',
+ f'sky launch -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --retry-until-up -y tests/test_yamls/test_custom_image.yaml --infra kubernetes/none --image-id {image_id} --gpus T4:1',
f'sky logs {name} 1 --status',
# Try exec to run again and check if the logs are printed
- f'sky exec {name} tests/test_yamls/test_custom_image.yaml --cloud kubernetes --image-id {image_id} --region None --gpus T4:1 | grep "Hello 100"',
+ f'sky exec {name} tests/test_yamls/test_custom_image.yaml --infra kubernetes/none --image-id {image_id} --gpus T4:1 | grep "Hello 100"',
# Make sure ssh is working with custom username
f'ssh {name} echo hi | grep hi',
],
@@ -1677,7 +1677,7 @@ def _get_aws_query_command(region: str, instance_id: str, field: str,
'aws-disk-tier-' + disk_tier.value,
[
smoke_tests_utils.launch_cluster_for_cloud_cmd('aws', name),
- f'sky launch -y -c {name} --cloud aws --region {region} {smoke_tests_utils.LOW_RESOURCE_ARG} '
+ f'sky launch -y -c {name} --infra aws/{region} {smoke_tests_utils.LOW_RESOURCE_ARG} '
f'--disk-tier {disk_tier.value} echo "hello sky"',
smoke_tests_utils.run_cloud_cmd_on_cluster(
name,
@@ -1736,7 +1736,7 @@ def test_gcp_disk_tier(instance_types: List[str]):
'gcp-disk-tier-' + disk_tier.value,
[
smoke_tests_utils.launch_cluster_for_cloud_cmd('gcp', name),
- f'sky launch -y -c {name} --cloud gcp --region {region} {smoke_tests_utils.LOW_RESOURCE_ARG} '
+ f'sky launch -y -c {name} --infra gcp/{region} {smoke_tests_utils.LOW_RESOURCE_ARG} '
f'--disk-tier {disk_tier.value} {instance_type_option} ',
smoke_tests_utils.run_cloud_cmd_on_cluster(
name,
@@ -1766,7 +1766,7 @@ def test_azure_disk_tier():
test = smoke_tests_utils.Test(
'azure-disk-tier-' + disk_tier.value,
[
- f'sky launch -y -c {name} --cloud azure --region {region} {smoke_tests_utils.LOW_RESOURCE_ARG} '
+ f'sky launch -y -c {name} --infra azure/{region} {smoke_tests_utils.LOW_RESOURCE_ARG} '
f'--disk-tier {disk_tier.value} echo "hello sky"',
f'az resource list --tag ray-cluster-name={name_on_cloud} --query '
f'"[?type==\'Microsoft.Compute/disks\'].sku.name" '
@@ -1788,7 +1788,7 @@ def test_azure_best_tier_failover():
test = smoke_tests_utils.Test(
'azure-best-tier-failover',
[
- f'sky launch -y -c {name} --cloud azure --region {region} {smoke_tests_utils.LOW_RESOURCE_ARG} '
+ f'sky launch -y -c {name} --infra azure/{region} {smoke_tests_utils.LOW_RESOURCE_ARG} '
f'--disk-tier best --instance-type Standard_D8_v5 echo "hello sky"',
f'az resource list --tag ray-cluster-name={name_on_cloud} --query '
f'"[?type==\'Microsoft.Compute/disks\'].sku.name" '
@@ -1817,7 +1817,7 @@ def test_aws_zero_quota_failover():
test = smoke_tests_utils.Test(
'aws-zero-quota-failover',
[
- f'sky launch -y -c {name} --cloud aws --region {region} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus V100:8 --use-spot | grep "Found no quota"',
+ f'sky launch -y -c {name} --infra aws/{region} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus V100:8 --use-spot | grep "Found no quota"',
],
f'sky down -y {name}',
)
@@ -1840,7 +1840,7 @@ def test_gcp_zero_quota_failover():
test = smoke_tests_utils.Test(
'gcp-zero-quota-failover',
[
- f'sky launch -y -c {name} --cloud gcp --region {region} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus A100-80GB:1 --use-spot | grep "Found no quota"',
+ f'sky launch -y -c {name} --infra gcp/{region} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus A100-80GB:1 --use-spot | grep "Found no quota"',
],
f'sky down -y {name}',
)
@@ -1872,7 +1872,7 @@ def test_long_setup_run_script(generic_cloud: str):
test = smoke_tests_utils.Test(
'long-setup-run-script',
[
- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} {f.name}',
+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} {f.name}',
f'sky exec {name} "echo hello"',
f'sky exec {name} {f.name}',
f'sky logs {name} --status 1',
@@ -1909,7 +1909,7 @@ def test_min_gpt_kubernetes():
test = smoke_tests_utils.Test(
'min_gpt_kubernetes',
[
- f'sky launch -y -c {name} --cloud kubernetes {f.name}',
+ f'sky launch -y -c {name} --infra kubernetes {f.name}',
f'sky logs {name} 1 --status',
],
f'sky down -y {name}',
diff --git a/tests/smoke_tests/test_images.py b/tests/smoke_tests/test_images.py
index f27bef7a5f6..b7825982508 100644
--- a/tests/smoke_tests/test_images.py
+++ b/tests/smoke_tests/test_images.py
@@ -57,10 +57,10 @@ def test_gcp_images():
test = smoke_tests_utils.Test(
'gcp_images',
[
- f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --image-id skypilot:gpu-debian-10 --cloud gcp tests/test_yamls/minimal.yaml',
+ f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --image-id skypilot:gpu-debian-10 --infra gcp tests/test_yamls/minimal.yaml',
f'sky logs {name} 1 --status', # Ensure the job succeeded.
- f'sky launch -c {name} --image-id skypilot:cpu-debian-10 --cloud gcp tests/test_yamls/minimal.yaml && exit 1 || true',
- f'sky launch -y -c {name} tests/test_yamls/minimal.yaml',
+ f'sky launch -c {name} --image-id skypilot:cpu-debian-10 --infra gcp tests/test_yamls/minimal.yaml && exit 1 || true',
+ f'sky launch -y -c {name} --infra gcp tests/test_yamls/minimal.yaml',
f'sky logs {name} 2 --status',
f'sky logs {name} --status | grep "Job 2: SUCCEEDED"', # Equivalent.
f'sky exec {name} \'echo $SKYPILOT_CLUSTER_INFO | jq .cloud | grep -i gcp\'',
@@ -77,9 +77,9 @@ def test_azure_images():
test = smoke_tests_utils.Test(
'azure_images',
[
- f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --image-id skypilot:gpu-ubuntu-2204 --cloud azure tests/test_yamls/minimal.yaml',
+ f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --image-id skypilot:gpu-ubuntu-2204 --infra azure tests/test_yamls/minimal.yaml',
f'sky logs {name} 1 --status', # Ensure the job succeeded.
- f'sky launch -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --image-id skypilot:v1-ubuntu-2004 --cloud azure tests/test_yamls/minimal.yaml && exit 1 || true',
+ f'sky launch -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --image-id skypilot:v1-ubuntu-2004 --infra azure tests/test_yamls/minimal.yaml && exit 1 || true',
f'sky launch -y -c {name} tests/test_yamls/minimal.yaml',
f'sky logs {name} 2 --status',
f'sky logs {name} --status | grep "Job 2: SUCCEEDED"', # Equivalent.
@@ -140,9 +140,9 @@ def test_aws_image_id_dict_region():
# us-west-2: skypilot:gpu-ubuntu-1804
# us-east-2: skypilot:gpu-ubuntu-2004
# Use region to filter image_id dict.
- f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --region us-east-1 examples/per_region_images.yaml && exit 1 || true',
+ f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra aws/us-east-1 examples/per_region_images.yaml && exit 1 || true',
f'sky status | grep {name} && exit 1 || true', # Ensure the cluster is not created.
- f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --region us-east-2 examples/per_region_images.yaml',
+ f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra aws/us-east-2 examples/per_region_images.yaml',
# Should success because the image id match for the region.
f'sky launch -c {name} --image-id skypilot:gpu-ubuntu-2004 examples/minimal.yaml',
f'sky exec {name} --image-id skypilot:gpu-ubuntu-2004 examples/minimal.yaml',
@@ -152,9 +152,9 @@ def test_aws_image_id_dict_region():
f'sky logs {name} 3 --status',
f'sky status -v | grep {name} | grep us-east-2', # Ensure the region is correct.
# Ensure exec works.
- f'sky exec {name} --region us-east-2 examples/per_region_images.yaml',
+ f'sky exec {name} --infra aws/us-east-2 examples/per_region_images.yaml',
f'sky exec {name} examples/per_region_images.yaml',
- f'sky exec {name} --cloud aws --region us-east-2 "ls ~"',
+ f'sky exec {name} --infra aws/us-east-2 "ls ~"',
f'sky exec {name} "ls ~"',
f'sky logs {name} 4 --status',
f'sky logs {name} 5 --status',
@@ -173,21 +173,21 @@ def test_gcp_image_id_dict_region():
'gcp_image_id_dict_region',
[
# Use region to filter image_id dict.
- f'sky launch -y -c {name} --region us-east1 {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/gcp_per_region_images.yaml && exit 1 || true',
+ f'sky launch -y -c {name} --infra gcp/us-east1 {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/gcp_per_region_images.yaml && exit 1 || true',
f'sky status | grep {name} && exit 1 || true', # Ensure the cluster is not created.
- f'sky launch -y -c {name} --region us-west3 {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/gcp_per_region_images.yaml',
+ f'sky launch -y -c {name} --infra gcp/us-west3 {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/gcp_per_region_images.yaml',
# Should success because the image id match for the region.
- f'sky launch -c {name} --cloud gcp --image-id projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20230112 tests/test_yamls/minimal.yaml',
- f'sky exec {name} --cloud gcp --image-id projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20230112 tests/test_yamls/minimal.yaml',
- f'sky exec {name} --cloud gcp --image-id skypilot:cpu-debian-10 tests/test_yamls/minimal.yaml && exit 1 || true',
+ f'sky launch -c {name} --infra gcp --image-id projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20230112 tests/test_yamls/minimal.yaml',
+ f'sky exec {name} --infra gcp --image-id projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20230112 tests/test_yamls/minimal.yaml',
+ f'sky exec {name} --infra gcp --image-id skypilot:cpu-debian-10 tests/test_yamls/minimal.yaml && exit 1 || true',
f'sky logs {name} 1 --status',
f'sky logs {name} 2 --status',
f'sky logs {name} 3 --status',
f'sky status -v | grep {name} | grep us-west3', # Ensure the region is correct.
# Ensure exec works.
- f'sky exec {name} --region us-west3 tests/test_yamls/gcp_per_region_images.yaml',
+ f'sky exec {name} --infra gcp/us-west3 tests/test_yamls/gcp_per_region_images.yaml',
f'sky exec {name} tests/test_yamls/gcp_per_region_images.yaml',
- f'sky exec {name} --cloud gcp --region us-west3 "ls ~"',
+ f'sky exec {name} --infra gcp/us-west3 "ls ~"',
f'sky exec {name} "ls ~"',
f'sky logs {name} 4 --status',
f'sky logs {name} 5 --status',
@@ -210,9 +210,9 @@ def test_aws_image_id_dict_zone():
# us-west-2: skypilot:gpu-ubuntu-1804
# us-east-2: skypilot:gpu-ubuntu-2004
# Use zone to filter image_id dict.
- f'sky launch -y -c {name} --zone us-east-1b {smoke_tests_utils.LOW_RESOURCE_ARG} examples/per_region_images.yaml && exit 1 || true',
+ f'sky launch -y -c {name} --infra aws/*/us-east-1b {smoke_tests_utils.LOW_RESOURCE_ARG} examples/per_region_images.yaml && exit 1 || true',
f'sky status | grep {name} && exit 1 || true', # Ensure the cluster is not created.
- f'sky launch -y -c {name} --zone us-east-2a {smoke_tests_utils.LOW_RESOURCE_ARG} examples/per_region_images.yaml',
+ f'sky launch -y -c {name} --infra aws/*/us-east-2a {smoke_tests_utils.LOW_RESOURCE_ARG} examples/per_region_images.yaml',
# Should success because the image id match for the zone.
f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --image-id skypilot:gpu-ubuntu-2004 examples/minimal.yaml',
f'sky exec {name} --image-id skypilot:gpu-ubuntu-2004 examples/minimal.yaml',
@@ -223,9 +223,9 @@ def test_aws_image_id_dict_zone():
f'sky logs {name} 3 --status',
f'sky status -v | grep {name} | grep us-east-2a', # Ensure the zone is correct.
# Ensure exec works.
- f'sky exec {name} --zone us-east-2a examples/per_region_images.yaml',
+ f'sky exec {name} --infra aws/*/us-east-2a examples/per_region_images.yaml',
f'sky exec {name} examples/per_region_images.yaml',
- f'sky exec {name} --cloud aws --region us-east-2 "ls ~"',
+ f'sky exec {name} --infra aws/us-east-2 "ls ~"',
f'sky exec {name} "ls ~"',
f'sky logs {name} 4 --status',
f'sky logs {name} 5 --status',
@@ -244,22 +244,22 @@ def test_gcp_image_id_dict_zone():
'gcp_image_id_dict_zone',
[
# Use zone to filter image_id dict.
- f'sky launch -y -c {name} --zone us-east1-a {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/gcp_per_region_images.yaml && exit 1 || true',
+ f'sky launch -y -c {name} --infra */*/us-east1-a {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/gcp_per_region_images.yaml && exit 1 || true',
f'sky status | grep {name} && exit 1 || true', # Ensure the cluster is not created.
- f'sky launch -y -c {name} --zone us-central1-a {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/gcp_per_region_images.yaml',
+ f'sky launch -y -c {name} --infra */*/us-central1-a {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/gcp_per_region_images.yaml',
# Should success because the image id match for the zone.
- f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud gcp --image-id skypilot:cpu-debian-10 tests/test_yamls/minimal.yaml',
- f'sky exec {name} --cloud gcp --image-id skypilot:cpu-debian-10 tests/test_yamls/minimal.yaml',
+ f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra gcp --image-id skypilot:cpu-debian-10 tests/test_yamls/minimal.yaml',
+ f'sky exec {name} --infra gcp --image-id skypilot:cpu-debian-10 tests/test_yamls/minimal.yaml',
# Fail due to image id mismatch.
- f'sky exec {name} --cloud gcp --image-id skypilot:gpu-debian-10 tests/test_yamls/minimal.yaml && exit 1 || true',
+ f'sky exec {name} --infra gcp --image-id skypilot:gpu-debian-10 tests/test_yamls/minimal.yaml && exit 1 || true',
f'sky logs {name} 1 --status',
f'sky logs {name} 2 --status',
f'sky logs {name} 3 --status',
f'sky status -v | grep {name} | grep us-central1', # Ensure the zone is correct.
# Ensure exec works.
- f'sky exec {name} --cloud gcp --zone us-central1-a tests/test_yamls/gcp_per_region_images.yaml',
+ f'sky exec {name} --infra gcp/*/us-central1-a tests/test_yamls/gcp_per_region_images.yaml',
f'sky exec {name} tests/test_yamls/gcp_per_region_images.yaml',
- f'sky exec {name} --cloud gcp --region us-central1 "ls ~"',
+ f'sky exec {name} --infra gcp/us-central1 "ls ~"',
f'sky exec {name} "ls ~"',
f'sky logs {name} 4 --status',
f'sky logs {name} 5 --status',
@@ -279,7 +279,7 @@ def test_clone_disk_aws():
test = smoke_tests_utils.Test(
'clone_disk_aws',
[
- f'sky launch -y -c {name} --cloud aws --region us-east-2 --retry-until-up "echo hello > ~/user_file.txt"',
+ f'sky launch -y -c {name} --infra aws/us-east-2 --retry-until-up "echo hello > ~/user_file.txt"',
f'sky launch --clone-disk-from {name} -y -c {name}-clone && exit 1 || true',
f'sky stop {name} -y',
smoke_tests_utils.get_cmd_wait_until_cluster_status_contains(
@@ -289,8 +289,8 @@ def test_clone_disk_aws():
# Wait for EC2 instance to be in stopped state.
# TODO: event based wait.
'sleep 60',
- f'sky launch --clone-disk-from {name} -y -c {name}-clone --cloud aws -d --region us-east-2 "cat ~/user_file.txt | grep hello"',
- f'sky launch --clone-disk-from {name} -y -c {name}-clone-2 --cloud aws -d --region us-east-2 "cat ~/user_file.txt | grep hello"',
+ f'sky launch --clone-disk-from {name} -y -c {name}-clone --infra aws/us-east-2 -d "cat ~/user_file.txt | grep hello"',
+ f'sky launch --clone-disk-from {name} -y -c {name}-clone-2 --infra aws/us-east-2 -d "cat ~/user_file.txt | grep hello"',
f'sky logs {name}-clone 1 --status',
f'sky logs {name}-clone-2 1 --status',
],
@@ -308,11 +308,11 @@ def test_clone_disk_gcp():
test = smoke_tests_utils.Test(
'clone_disk_gcp',
[
- f'sky launch -y -c {name} --cloud gcp --zone us-east1-b --retry-until-up "echo hello > ~/user_file.txt"',
+ f'sky launch -y -c {name} --infra gcp/*/us-east1-b --retry-until-up "echo hello > ~/user_file.txt"',
f'sky launch --clone-disk-from {name} -y -c {name}-clone && exit 1 || true',
f'sky stop {name} -y',
- f'sky launch --clone-disk-from {name} -y -c {name}-clone --cloud gcp --zone us-central1-a "cat ~/user_file.txt | grep hello"',
- f'sky launch --clone-disk-from {name} -y -c {name}-clone-2 --cloud gcp --zone us-east1-b "cat ~/user_file.txt | grep hello"',
+ f'sky launch --clone-disk-from {name} -y -c {name}-clone --infra gcp/*/us-central1-a "cat ~/user_file.txt | grep hello"',
+ f'sky launch --clone-disk-from {name} -y -c {name}-clone-2 --infra gcp/*/us-east1-b "cat ~/user_file.txt | grep hello"',
f'sky logs {name}-clone 1 --status',
f'sky logs {name}-clone-2 1 --status',
],
@@ -331,9 +331,9 @@ def test_gcp_mig():
[
smoke_tests_utils.launch_cluster_for_cloud_cmd('gcp', name),
# Launch a CPU instance asynchronously.
- f'sky launch -y -c {name}-cpu {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud gcp --zone {zone} --async tests/test_yamls/minimal.yaml',
+ f'sky launch -y -c {name}-cpu {smoke_tests_utils.LOW_RESOURCE_ARG} --infra gcp/*/us-central1-a --async tests/test_yamls/minimal.yaml',
# Launch a GPU instance.
- f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus l4 --num-nodes 2 --image-id skypilot:gpu-debian-10 --cloud gcp --region {region} tests/test_yamls/minimal.yaml',
+ f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --gpus l4 --num-nodes 2 --image-id skypilot:gpu-debian-10 --infra gcp/{region} tests/test_yamls/minimal.yaml',
f'sky logs {name} 1 --status', # Ensure the job succeeded.
f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml',
f'sky logs {name} 2 --status',
@@ -354,7 +354,7 @@ def test_gcp_mig():
)),
# Launch again with the same region. The original instance template
# should be removed.
- f'sky launch -y -c {name} --gpus L4 --num-nodes 2 --region {region} nvidia-smi',
+ f'sky launch -y -c {name} --gpus L4 --num-nodes 2 --infra gcp/{region} nvidia-smi',
f'sky logs {name} 1 | grep "L4"',
f'sky down -y {name}',
f'sky status | grep {name}-cpu | grep UP',
@@ -408,7 +408,7 @@ def test_gcp_force_enable_external_ips():
test_commands = [
is_on_gcp_command,
- f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud gcp --cpus 2 tests/test_yamls/minimal.yaml',
+ f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra gcp --cpus 2 tests/test_yamls/minimal.yaml',
# Check network of vm is "default"
(f'gcloud compute instances list --filter=name~"{name}" --format='
'"value(networkInterfaces.network)" | grep "networks/default"'),
@@ -438,7 +438,7 @@ def test_image_no_conda():
'image_no_conda',
[
# Use image id dict.
- f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --region us-east-2 examples/per_region_images.yaml',
+ f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra aws/us-east-2 examples/per_region_images.yaml',
f'sky logs {name} 1 --status',
f'sky stop {name} -y',
f'sky start {name} -y',
@@ -459,7 +459,7 @@ def test_custom_default_conda_env(generic_cloud: str):
timeout *= 3
name = smoke_tests_utils.get_cluster_name()
test = smoke_tests_utils.Test('custom_default_conda_env', [
- f'sky launch -c {name} -y {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud {generic_cloud} tests/test_yamls/test_custom_default_conda_env.yaml',
+ f'sky launch -c {name} -y {smoke_tests_utils.LOW_RESOURCE_ARG} --infra {generic_cloud} tests/test_yamls/test_custom_default_conda_env.yaml',
f'sky status -r {name} | grep "UP"',
f'sky logs {name} 1 --status',
f'sky logs {name} 1 --no-follow | grep -E "myenv\\s+\\*"',
diff --git a/tests/smoke_tests/test_managed_job.py b/tests/smoke_tests/test_managed_job.py
index 69915e3e89e..9a874f5a6da 100644
--- a/tests/smoke_tests/test_managed_job.py
+++ b/tests/smoke_tests/test_managed_job.py
@@ -51,8 +51,8 @@ def test_managed_jobs_basic(generic_cloud: str):
test = smoke_tests_utils.Test(
'managed-jobs',
[
- f'sky jobs launch -n {name}-1 --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} examples/managed_job.yaml -y -d',
- f'sky jobs launch -n {name}-2 --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} examples/managed_job.yaml -y -d',
+ f'sky jobs launch -n {name}-1 --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} examples/managed_job.yaml -y -d',
+ f'sky jobs launch -n {name}-2 --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} examples/managed_job.yaml -y -d',
smoke_tests_utils.
get_cmd_wait_until_managed_job_status_contains_matching_job_name(
job_name=f'{name}-1',
@@ -97,7 +97,7 @@ def test_managed_jobs_cli_exit_codes(generic_cloud: str):
'managed_jobs_exit_codes',
[
# Test jobs launch with successful job
- f'sky jobs launch -y -n jobs-{name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} "echo jobs success" && echo "Jobs launch exit code: $?"',
+ f'sky jobs launch -y -n jobs-{name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} "echo jobs success" && echo "Jobs launch exit code: $?"',
smoke_tests_utils.
get_cmd_wait_until_managed_job_status_contains_matching_job_name(
job_name=f'jobs-{name}',
@@ -112,7 +112,7 @@ def test_managed_jobs_cli_exit_codes(generic_cloud: str):
f'sky jobs logs $JOB_ID && echo "Jobs logs exit code: $?"',
# Test jobs launch with failing job
- f'sky jobs launch -y -n jobs-fail-{name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} "exit 1" || echo "Jobs launch failed exit code: $?" | grep "Jobs launch failed exit code: 100"',
+ f'sky jobs launch -y -n jobs-fail-{name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} "exit 1" || echo "Jobs launch failed exit code: $?" | grep "Jobs launch failed exit code: 100"',
smoke_tests_utils.
get_cmd_wait_until_managed_job_status_contains_matching_job_name(
job_name=f'jobs-fail-{name}',
@@ -149,7 +149,7 @@ def test_job_pipeline(generic_cloud: str):
test = smoke_tests_utils.Test(
'job_pipeline',
[
- f'sky jobs launch -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/pipeline.yaml --cloud {generic_cloud} -y -d',
+ f'sky jobs launch -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra {generic_cloud} tests/test_yamls/pipeline.yaml -y -d',
# Need to wait for setup and job initialization.
'sleep 30',
rf'{smoke_tests_utils.GET_JOB_QUEUE} | grep {name} | head -n1 | grep "STARTING\|RUNNING"',
@@ -194,7 +194,7 @@ def test_managed_jobs_failed_setup(generic_cloud: str):
test = smoke_tests_utils.Test(
'managed_jobs_failed_setup',
[
- f'sky jobs launch -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud {generic_cloud} -y -d tests/test_yamls/failed_setup.yaml',
+ f'sky jobs launch -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra {generic_cloud} -y -d tests/test_yamls/failed_setup.yaml',
# Make sure the job failed quickly.
smoke_tests_utils.
get_cmd_wait_until_managed_job_status_contains_matching_job_name(
@@ -225,7 +225,7 @@ def test_managed_jobs_pipeline_failed_setup(generic_cloud: str):
test = smoke_tests_utils.Test(
'managed_jobs_pipeline_failed_setup',
[
- f'sky jobs launch -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud {generic_cloud} -y -d tests/test_yamls/failed_setup_pipeline.yaml',
+ f'sky jobs launch -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra {generic_cloud} -y -d tests/test_yamls/failed_setup_pipeline.yaml',
smoke_tests_utils.
get_cmd_wait_until_managed_job_status_contains_matching_job_name(
job_name=name,
@@ -265,7 +265,7 @@ def test_managed_jobs_recovery_aws(aws_config_region):
'managed_jobs_recovery_aws',
[
smoke_tests_utils.launch_cluster_for_cloud_cmd('aws', name),
- rf'sky jobs launch --cloud aws --region {region} --use-spot -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} "echo SKYPILOT_TASK_ID: \$SKYPILOT_TASK_ID; sleep 1800" -y -d',
+ rf'sky jobs launch --infra aws/{region} --use-spot -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} "echo SKYPILOT_TASK_ID: \$SKYPILOT_TASK_ID; sleep 1800" -y -d',
smoke_tests_utils.
get_cmd_wait_until_managed_job_status_contains_matching_job_name(
job_name=name,
@@ -316,7 +316,7 @@ def test_managed_jobs_recovery_gcp():
'managed_jobs_recovery_gcp',
[
smoke_tests_utils.launch_cluster_for_cloud_cmd('gcp', name),
- rf'sky jobs launch --cloud gcp --zone {zone} -n {name} --use-spot {smoke_tests_utils.LOW_RESOURCE_ARG} "echo SKYPILOT_TASK_ID: \$SKYPILOT_TASK_ID; sleep 1800" -y -d',
+ rf'sky jobs launch --infra gcp/*/{zone} -n {name} --use-spot {smoke_tests_utils.LOW_RESOURCE_ARG} "echo SKYPILOT_TASK_ID: \$SKYPILOT_TASK_ID; sleep 1800" -y -d',
smoke_tests_utils.
get_cmd_wait_until_managed_job_status_contains_matching_job_name(
job_name=name,
@@ -468,7 +468,7 @@ def test_managed_jobs_recovery_default_resources(generic_cloud: str):
test = smoke_tests_utils.Test(
'managed-spot-recovery-default-resources',
[
- f'sky jobs launch -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud {generic_cloud} --use-spot "sleep 30 && sudo shutdown now && sleep 1000" -y -d',
+ f'sky jobs launch -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra {generic_cloud} --use-spot "sleep 30 && sudo shutdown now && sleep 1000" -y -d',
smoke_tests_utils.
get_cmd_wait_until_managed_job_status_contains_matching_job_name(
job_name=name,
@@ -497,7 +497,7 @@ def test_managed_jobs_recovery_multi_node_aws(aws_config_region):
'managed_jobs_recovery_multi_node_aws',
[
smoke_tests_utils.launch_cluster_for_cloud_cmd('aws', name),
- rf'sky jobs launch --cloud aws --region {region} -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot --num-nodes 2 "echo SKYPILOT_TASK_ID: \$SKYPILOT_TASK_ID; sleep 1800" -y -d',
+ rf'sky jobs launch --infra aws/{region} -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot --num-nodes 2 "echo SKYPILOT_TASK_ID: \$SKYPILOT_TASK_ID; sleep 1800" -y -d',
smoke_tests_utils.
get_cmd_wait_until_managed_job_status_contains_matching_job_name(
job_name=name,
@@ -549,7 +549,7 @@ def test_managed_jobs_recovery_multi_node_gcp():
'managed_jobs_recovery_multi_node_gcp',
[
smoke_tests_utils.launch_cluster_for_cloud_cmd('gcp', name),
- rf'sky jobs launch --cloud gcp --zone {zone} -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot --num-nodes 2 "echo SKYPILOT_TASK_ID: \$SKYPILOT_TASK_ID; sleep 1800" -y -d',
+ rf'sky jobs launch --infra gcp/*/{zone} -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot --num-nodes 2 "echo SKYPILOT_TASK_ID: \$SKYPILOT_TASK_ID; sleep 1800" -y -d',
smoke_tests_utils.
get_cmd_wait_until_managed_job_status_contains_matching_job_name(
job_name=name,
@@ -591,7 +591,7 @@ def test_managed_jobs_cancellation_aws(aws_config_region):
[
smoke_tests_utils.launch_cluster_for_cloud_cmd('aws', name),
# Test cancellation during spot cluster being launched.
- f'sky jobs launch --cloud aws --region {region} -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot "sleep 1000" -y -d',
+ f'sky jobs launch --infra aws/{region} -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot "sleep 1000" -y -d',
smoke_tests_utils.
get_cmd_wait_until_managed_job_status_contains_matching_job_name(
job_name=name,
@@ -614,7 +614,7 @@ def test_managed_jobs_cancellation_aws(aws_config_region):
'--output text) && echo "$s" && echo; [[ -z "$s" ]] || [[ "$s" = "terminated" ]] || [[ "$s" = "shutting-down" ]]'
)),
# Test cancelling the spot cluster during spot job being setup.
- f'sky jobs launch --cloud aws --region {region} -n {name}-2 {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot tests/test_yamls/test_long_setup.yaml -y -d',
+ f'sky jobs launch --infra aws/{region} -n {name}-2 {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot tests/test_yamls/test_long_setup.yaml -y -d',
# The job is set up in the cluster, will shown as RUNNING.
smoke_tests_utils.
get_cmd_wait_until_managed_job_status_contains_matching_job_name(
@@ -636,7 +636,7 @@ def test_managed_jobs_cancellation_aws(aws_config_region):
'--output text) && echo "$s" && echo; [[ -z "$s" ]] || [[ "$s" = "terminated" ]] || [[ "$s" = "shutting-down" ]]'
)),
# Test cancellation during spot job is recovering.
- f'sky jobs launch --cloud aws --region {region} -n {name}-3 {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot "sleep 1000" -y -d',
+ f'sky jobs launch --infra aws/{region} -n {name}-3 {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot "sleep 1000" -y -d',
# The job is running in the cluster, will shown as RUNNING.
smoke_tests_utils.
get_cmd_wait_until_managed_job_status_contains_matching_job_name(
@@ -701,7 +701,7 @@ def test_managed_jobs_cancellation_gcp():
[
smoke_tests_utils.launch_cluster_for_cloud_cmd('gcp', name),
# Test cancellation during spot cluster being launched.
- f'sky jobs launch --cloud gcp --zone {zone} -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot "sleep 1000" -y -d',
+ f'sky jobs launch --infra gcp/*/{zone} -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot "sleep 1000" -y -d',
smoke_tests_utils.
get_cmd_wait_until_managed_job_status_contains_matching_job_name(
job_name=name,
@@ -714,7 +714,7 @@ def test_managed_jobs_cancellation_gcp():
job_status=[sky.ManagedJobStatus.CANCELLED],
timeout=155),
# Test cancelling the spot cluster during spot job being setup.
- f'sky jobs launch --cloud gcp --zone {zone} -n {name}-2 {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot tests/test_yamls/test_long_setup.yaml -y -d',
+ f'sky jobs launch --infra gcp/*/{zone} -n {name}-2 {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot tests/test_yamls/test_long_setup.yaml -y -d',
# The job is set up in the cluster, will shown as RUNNING.
smoke_tests_utils.
get_cmd_wait_until_managed_job_status_contains_matching_job_name(
@@ -728,7 +728,7 @@ def test_managed_jobs_cancellation_gcp():
job_status=[sky.ManagedJobStatus.CANCELLED],
timeout=155),
# Test cancellation during spot job is recovering.
- f'sky jobs launch --cloud gcp --zone {zone} -n {name_3} {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot "sleep 1000" -y -d',
+ f'sky jobs launch --infra gcp/*/{zone} -n {name_3} {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot "sleep 1000" -y -d',
smoke_tests_utils.
get_cmd_wait_until_managed_job_status_contains_matching_job_name(
job_name=name_3,
@@ -821,16 +821,20 @@ def test_managed_jobs_storage(generic_cloud: str):
storage_name = f'sky-test-{timestamp}'
output_storage_name = f'sky-test-output-{timestamp}'
+    # Default values; the cloud-specific branches below may override them.
+ region = None
+ region_flag = ''
+ region_validation_cmd = 'true'
+ use_spot = ' --use-spot'
+ output_check_cmd = None
+
# Also perform region testing for bucket creation to validate if buckets are
# created in the correct region and correctly mounted in managed jobs.
# However, we inject this testing only for AWS and GCP since they are the
# supported object storage providers in SkyPilot.
- region_flag = ''
- region_validation_cmd = 'true'
- use_spot = ' --use-spot'
if generic_cloud == 'aws':
region = 'eu-central-1'
- region_flag = f' --region {region}'
+ region_flag = f'/{region}'
region_cmd = test_mount_and_storage.TestStorageWithCredentials.cli_region_cmd(
storage_lib.StoreType.S3, bucket_name=output_storage_name)
region_validation_cmd = f's=$({region_cmd}) && echo "$s" && echo; echo "$s" | grep {region}'
@@ -847,7 +851,7 @@ def test_managed_jobs_storage(generic_cloud: str):
f'{non_persistent_bucket_removed_check_cmd} && exit 1 || true')
elif generic_cloud == 'gcp':
region = 'us-west2'
- region_flag = f' --region {region}'
+ region_flag = f'/{region}'
region_cmd = test_mount_and_storage.TestStorageWithCredentials.cli_region_cmd(
storage_lib.StoreType.GCS, bucket_name=output_storage_name)
region_validation_cmd = f'{region_cmd} | grep {region}'
@@ -863,12 +867,12 @@ def test_managed_jobs_storage(generic_cloud: str):
name,
f'{non_persistent_bucket_removed_check_cmd} && exit 1 || true')
elif generic_cloud == 'azure':
- region = 'centralus'
- # Region centralus seems don't have the quota for low resource.
+        # Azure instances with less than 7G of memory can have flaky performance,
# so we keep the default resource to avoid flakiness.
low_resource_arg = ""
- region_flag = f' --region {region}'
- storage_account_name = test_mount_and_storage.TestStorageWithCredentials. \
+ region = 'centralus'
+ region_flag = f'/{region}'
+ storage_account_name = test_mount_and_storage.TestStorageWithCredentials.\
get_az_storage_account_name(region)
region_cmd = test_mount_and_storage.TestStorageWithCredentials.cli_region_cmd(
storage_lib.StoreType.AZURE,
@@ -948,7 +952,7 @@ def test_managed_jobs_storage(generic_cloud: str):
generic_cloud, name),
# Override CPU/memory requirements to relax resource constraints
# and reduce the chance of out-of-stock
- f'sky jobs launch -n {name}{use_spot} {low_resource_arg} --cloud {generic_cloud}{region_flag} {file_path} -y -d',
+ f'sky jobs launch -n {name}{use_spot} {low_resource_arg} --infra {generic_cloud}{region_flag} {file_path} -y -d',
region_validation_cmd, # Check if the bucket is created in the correct region
smoke_tests_utils.
get_cmd_wait_until_managed_job_status_contains_matching_job_name(
@@ -1010,7 +1014,7 @@ def test_managed_jobs_intermediate_storage(generic_cloud: str):
*smoke_tests_utils.STORAGE_SETUP_COMMANDS,
# Verify command fails with correct error - run only once
# In API server, we don't error out if the bucket does not exist, instead we create it.
- # f'err=$(sky jobs launch -n {name} --cloud {generic_cloud} {file_path} -y 2>&1); '
+ # f'err=$(sky jobs launch -n {name} --infra {generic_cloud} {file_path} -y 2>&1); '
# f'ret=$?; if [ $ret -ne 0 ] && echo "$err" | grep -q "StorageBucketCreateError: '
# f'Jobs bucket \'{intermediate_storage_name}\' does not exist."; then exit 0; '
# f'else exit 1; fi',
@@ -1019,7 +1023,7 @@ def test_managed_jobs_intermediate_storage(generic_cloud: str):
cmd=
f'aws s3api create-bucket --bucket {intermediate_storage_name}'
),
- f'sky jobs launch -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud {generic_cloud} {file_path} -y',
+ f'sky jobs launch -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra {generic_cloud} {file_path} -y',
# fail because the bucket does not exist
smoke_tests_utils.
get_cmd_wait_until_managed_job_status_contains_matching_job_name(
@@ -1090,7 +1094,7 @@ def test_managed_jobs_inline_env(generic_cloud: str):
test = smoke_tests_utils.Test(
'test-managed-jobs-inline-env',
[
- rf'sky jobs launch -n {name} -y --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --env TEST_ENV="hello world" -- "echo "\$TEST_ENV"; ([[ ! -z \"\$TEST_ENV\" ]] && [[ ! -z \"\${constants.SKYPILOT_NODE_IPS}\" ]] && [[ ! -z \"\${constants.SKYPILOT_NODE_RANK}\" ]] && [[ ! -z \"\${constants.SKYPILOT_NUM_NODES}\" ]]) || exit 1"',
+ rf'sky jobs launch -n {name} -y --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --env TEST_ENV="hello world" -- "echo "\$TEST_ENV"; ([[ ! -z \"\$TEST_ENV\" ]] && [[ ! -z \"\${constants.SKYPILOT_NODE_IPS}\" ]] && [[ ! -z \"\${constants.SKYPILOT_NODE_RANK}\" ]] && [[ ! -z \"\${constants.SKYPILOT_NUM_NODES}\" ]]) || exit 1"',
smoke_tests_utils.
get_cmd_wait_until_managed_job_status_contains_matching_job_name(
job_name=name,
@@ -1120,7 +1124,7 @@ def test_managed_jobs_logs_sync_down(generic_cloud: str):
test = smoke_tests_utils.Test(
'test-managed-jobs-logs-sync-down',
[
- f'sky jobs launch -n {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y examples/managed_job.yaml -d',
+ f'sky jobs launch -n {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y examples/managed_job.yaml -d',
smoke_tests_utils.
get_cmd_wait_until_managed_job_status_contains_matching_job_name(
job_name=f'{name}',
diff --git a/tests/smoke_tests/test_mount_and_storage.py b/tests/smoke_tests/test_mount_and_storage.py
index e2f45ede9e4..91249affd8d 100644
--- a/tests/smoke_tests/test_mount_and_storage.py
+++ b/tests/smoke_tests/test_mount_and_storage.py
@@ -71,7 +71,7 @@ def test_file_mounts(generic_cloud: str):
extra_flags = '--num-nodes 1'
test_commands = [
*smoke_tests_utils.STORAGE_SETUP_COMMANDS,
- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} {extra_flags} examples/using_file_mounts.yaml',
+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} {extra_flags} examples/using_file_mounts.yaml',
f'sky logs {name} 1 --status', # Ensure the job succeeded.
]
test = smoke_tests_utils.Test(
@@ -105,7 +105,7 @@ def test_oci_mounts():
name = smoke_tests_utils.get_cluster_name()
test_commands = [
*smoke_tests_utils.STORAGE_SETUP_COMMANDS,
- f'sky launch -y -c {name} --cloud oci --num-nodes 2 examples/oci/oci-mounts.yaml',
+ f'sky launch -y -c {name} --infra oci --num-nodes 2 examples/oci/oci-mounts.yaml',
f'sky logs {name} 1 --status', # Ensure the job succeeded.
]
test = smoke_tests_utils.Test(
@@ -124,12 +124,12 @@ def test_using_file_mounts_with_env_vars(generic_cloud: str):
storage_name = TestStorageWithCredentials.generate_bucket_name()
test_commands = [
*smoke_tests_utils.STORAGE_SETUP_COMMANDS,
- (f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud {generic_cloud} '
+ (f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra {generic_cloud} '
'examples/using_file_mounts_with_env_vars.yaml '
f'--env MY_BUCKET={storage_name}'),
f'sky logs {name} 1 --status', # Ensure the job succeeded.
# Override with --env:
- (f'sky launch -y -c {name}-2 {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud {generic_cloud} '
+ (f'sky launch -y -c {name}-2 {smoke_tests_utils.LOW_RESOURCE_ARG} --infra {generic_cloud} '
'examples/using_file_mounts_with_env_vars.yaml '
f'--env MY_BUCKET={storage_name} '
'--env MY_LOCAL_PATH=tmpfile'),
@@ -176,7 +176,7 @@ def _storage_mounts_commands_generator(f: TextIO, cluster_name: str,
test_commands = [
smoke_tests_utils.launch_cluster_for_cloud_cmd(cloud, cluster_name),
*smoke_tests_utils.STORAGE_SETUP_COMMANDS,
- f'sky launch -y -c {cluster_name} --cloud {cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} {file_path}',
+ f'sky launch -y -c {cluster_name} --infra {cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} {file_path}',
f'sky logs {cluster_name} 1 --status', # Ensure job succeeded.
smoke_tests_utils.run_cloud_cmd_on_cluster(cluster_name,
cmd=ls_hello_command),
@@ -336,7 +336,7 @@ def test_kubernetes_context_switch():
test_commands = [
# Launch a cluster and run a simple task
- f'sky launch -y -c {name} --cloud kubernetes "echo Hello from original context"',
+ f'sky launch -y -c {name} --infra kubernetes "echo Hello from original context"',
f'sky logs {name} 1 --status', # Ensure job succeeded
# Get current context details and save to a file for later use in cleanup
@@ -420,7 +420,7 @@ def test_docker_storage_mounts(generic_cloud: str, image_id: str):
# If azure is used, the azure blob storage checking assumes the bucket is
# created in the centralus region when getting the storage account. We
# should set the cluster to be launched in the same region.
- region_str = '--region centralus' if generic_cloud == 'azure' else ''
+    region_str = '/centralus' if generic_cloud == 'azure' else ''
if azure_mount_unsupported_ubuntu_version in image_id:
# The store for mount_private_mount is not specified in the template.
# If we're running on Azure, the private mount will be created on
@@ -449,7 +449,7 @@ def test_docker_storage_mounts(generic_cloud: str, image_id: str):
file_path = f.name
test_commands = [
*smoke_tests_utils.STORAGE_SETUP_COMMANDS,
- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} {region_str} --image-id {image_id} {file_path}',
+ f'sky launch -y -c {name} --infra {generic_cloud}{region_str} {smoke_tests_utils.LOW_RESOURCE_ARG} --image-id {image_id} {file_path}',
f'sky logs {name} 1 --status', # Ensure job succeeded.
# Check AWS, GCP, or Azure storage mount.
f'sky exec {name} {quoted_check}',
@@ -479,7 +479,7 @@ def test_cloudflare_storage_mounts(generic_cloud: str):
file_path = f.name
test_commands = [
*smoke_tests_utils.STORAGE_SETUP_COMMANDS,
- f'sky launch -y -c {name} --cloud {generic_cloud} {file_path}',
+ f'sky launch -y -c {name} --infra {generic_cloud} {file_path}',
f'sky logs {name} 1 --status', # Ensure job succeeded.
f'AWS_SHARED_CREDENTIALS_FILE={cloudflare.R2_CREDENTIALS_PATH} aws s3 ls s3://{storage_name}/hello.txt --endpoint {endpoint_url} --profile=r2'
]
@@ -507,7 +507,7 @@ def test_nebius_storage_mounts(generic_cloud: str):
file_path = f.name
test_commands = [
*smoke_tests_utils.STORAGE_SETUP_COMMANDS,
- f'sky launch -y -c {name} --cloud {generic_cloud} {file_path}',
+ f'sky launch -y -c {name} --infra {generic_cloud} {file_path}',
f'sky logs {name} 1 --status', # Ensure job succeeded.
f'aws s3 ls s3://{storage_name}/hello.txt --profile={nebius.NEBIUS_PROFILE_NAME}'
]
@@ -537,7 +537,7 @@ def test_ibm_storage_mounts():
file_path = f.name
test_commands = [
*smoke_tests_utils.STORAGE_SETUP_COMMANDS,
- f'sky launch -y -c {name} --cloud ibm {file_path}',
+ f'sky launch -y -c {name} --infra ibm {file_path}',
f'sky logs {name} 1 --status', # Ensure job succeeded.
f'rclone ls {rclone_profile_name}:{storage_name}/hello.txt',
]
@@ -617,7 +617,7 @@ def test_ignore_exclusions(generic_cloud: str, ignore_file: str):
# Run test commands
test_commands = [
# Test with sky launch
- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --workdir {temp_dir} {yaml_path}',
+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --workdir {temp_dir} {yaml_path}',
f'sky logs {name} 1 --status', # Ensure the job succeeded
# Test with sky jobs launch
diff --git a/tests/smoke_tests/test_region_and_zone.py b/tests/smoke_tests/test_region_and_zone.py
index ebc8d65ca5a..3d6091bd767 100644
--- a/tests/smoke_tests/test_region_and_zone.py
+++ b/tests/smoke_tests/test_region_and_zone.py
@@ -37,7 +37,7 @@ def test_aws_region():
test = smoke_tests_utils.Test(
'aws_region',
[
- f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --region us-east-2 examples/minimal.yaml',
+ f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra */us-east-2 examples/minimal.yaml',
f'sky exec {name} examples/minimal.yaml',
f'sky logs {name} 1 --status', # Ensure the job succeeded.
f'sky status -v | grep {name} | grep us-east-2', # Ensure the region is correct.
@@ -70,15 +70,15 @@ def test_aws_with_ssh_proxy_command():
test = smoke_tests_utils.Test(
'aws_with_ssh_proxy_command',
[
- f'sky launch -y -c jump-{name} --cloud aws {smoke_tests_utils.LOW_RESOURCE_ARG} --region us-east-1',
+ f'sky launch -y -c jump-{name} --infra aws/us-east-1 {smoke_tests_utils.LOW_RESOURCE_ARG}',
# Use jump config
f'export {skypilot_config.ENV_VAR_SKYPILOT_CONFIG}={f.name}; '
- f'sky launch -y -c {name} --cloud aws {smoke_tests_utils.LOW_RESOURCE_ARG} --region us-east-1 echo hi',
+ f'sky launch -y -c {name} --infra aws/us-east-1 {smoke_tests_utils.LOW_RESOURCE_ARG} echo hi',
f'sky logs {name} 1 --status',
f'export {skypilot_config.ENV_VAR_SKYPILOT_CONFIG}={f.name}; sky exec {name} echo hi',
f'sky logs {name} 2 --status',
# Start a small job to make sure the controller is created.
- f'sky jobs launch -n {name}-0 --cloud aws {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot -y echo hi',
+ f'sky jobs launch -n {name}-0 --infra aws {smoke_tests_utils.LOW_RESOURCE_ARG} --use-spot -y echo hi',
# Wait other tests to create the job controller first, so that
# the job controller is not launched with proxy command.
smoke_tests_utils.
@@ -86,7 +86,7 @@ def test_aws_with_ssh_proxy_command():
cluster_name_wildcard='sky-jobs-controller-*',
cluster_status=[sky.ClusterStatus.UP],
timeout=300),
- f'export {skypilot_config.ENV_VAR_SKYPILOT_CONFIG}={f.name}; sky jobs launch -n {name} --cloud aws {smoke_tests_utils.LOW_RESOURCE_ARG} --region us-east-1 -yd echo hi',
+ f'export {skypilot_config.ENV_VAR_SKYPILOT_CONFIG}={f.name}; sky jobs launch -n {name} --infra aws/us-east-1 {smoke_tests_utils.LOW_RESOURCE_ARG} -yd echo hi',
smoke_tests_utils.
get_cmd_wait_until_managed_job_status_contains_matching_job_name(
job_name=name,
@@ -109,7 +109,7 @@ def test_gcp_region_and_service_account():
test = smoke_tests_utils.Test(
'gcp_region',
[
- f'sky launch -y -c {name} --region us-central1 {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud gcp tests/test_yamls/minimal.yaml',
+ f'sky launch -y -c {name} --infra gcp/us-central1 {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml',
f'sky exec {name} tests/test_yamls/minimal.yaml',
f'sky logs {name} 1 --status', # Ensure the job succeeded.
f'sky exec {name} \'curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?format=standard&audience=gcp"\'',
@@ -133,8 +133,8 @@ def test_ibm_region():
test = smoke_tests_utils.Test(
'region',
[
- f'sky launch -y -c {name} --cloud ibm --region {region} examples/minimal.yaml',
- f'sky exec {name} --cloud ibm examples/minimal.yaml',
+ f'sky launch -y -c {name} --infra ibm/{region} examples/minimal.yaml',
+ f'sky exec {name} --infra ibm examples/minimal.yaml',
f'sky logs {name} 1 --status', # Ensure the job succeeded.
f'sky status -v | grep {name} | grep {region}', # Ensure the region is correct.
],
@@ -149,7 +149,7 @@ def test_azure_region():
test = smoke_tests_utils.Test(
'azure_region',
[
- f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --region eastus2 --cloud azure tests/test_yamls/minimal.yaml',
+ f'sky launch -y -c {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra azure/eastus2 tests/test_yamls/minimal.yaml',
f'sky exec {name} tests/test_yamls/minimal.yaml',
f'sky logs {name} 1 --status', # Ensure the job succeeded.
f'sky status -v | grep {name} | grep eastus2', # Ensure the region is correct.
@@ -173,8 +173,8 @@ def test_aws_zone():
test = smoke_tests_utils.Test(
'aws_zone',
[
- f'sky launch -y -c {name} examples/minimal.yaml {smoke_tests_utils.LOW_RESOURCE_ARG} --zone us-east-2b',
- f'sky exec {name} examples/minimal.yaml --zone us-east-2b',
+ f'sky launch -y -c {name} examples/minimal.yaml {smoke_tests_utils.LOW_RESOURCE_ARG} --infra */*/us-east-2b',
+ f'sky exec {name} examples/minimal.yaml --infra */*/us-east-2b',
f'sky logs {name} 1 --status', # Ensure the job succeeded.
f'sky status -v | grep {name} | grep us-east-2b', # Ensure the zone is correct.
],
@@ -190,8 +190,8 @@ def test_ibm_zone():
test = smoke_tests_utils.Test(
'zone',
[
- f'sky launch -y -c {name} --cloud ibm examples/minimal.yaml {smoke_tests_utils.LOW_RESOURCE_ARG} --zone {zone}',
- f'sky exec {name} --cloud ibm examples/minimal.yaml --zone {zone}',
+ f'sky launch -y -c {name} --infra ibm/*/{zone} examples/minimal.yaml {smoke_tests_utils.LOW_RESOURCE_ARG}',
+ f'sky exec {name} --infra ibm/*/{zone} examples/minimal.yaml',
f'sky logs {name} 1 --status', # Ensure the job succeeded.
f'sky status -v | grep {name} | grep {zone}', # Ensure the zone is correct.
],
@@ -206,8 +206,8 @@ def test_gcp_zone():
test = smoke_tests_utils.Test(
'gcp_zone',
[
- f'sky launch -y -c {name} --zone us-central1-a {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud gcp tests/test_yamls/minimal.yaml',
- f'sky exec {name} --zone us-central1-a --cloud gcp tests/test_yamls/minimal.yaml',
+ f'sky launch -y -c {name} --infra gcp/*/us-central1-a {smoke_tests_utils.LOW_RESOURCE_ARG} tests/test_yamls/minimal.yaml',
+ f'sky exec {name} --infra gcp/*/us-central1-a tests/test_yamls/minimal.yaml',
f'sky logs {name} 1 --status', # Ensure the job succeeded.
f'sky status -v | grep {name} | grep us-central1-a', # Ensure the zone is correct.
],
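
Note on the recurring change in the hunks above: the separate `--cloud`, `--region`, and `--zone` flags are folded into a single `--infra` argument of the form `cloud[/region[/zone]]`, with `*` acting as a wildcard for any segment (e.g. `--infra */*/us-east-2b` pins only the zone). A minimal sketch of the same placements expressed through the Python API, assuming the post-patch `sky.Resources(infra=...)` constructor exercised elsewhere in this patch:

```python
import sky

# Sketch only: the infra string format used throughout the updated tests.
zone_only = sky.Resources(infra='*/*/us-east-2b')      # any cloud/region, pinned zone
gcp_zone = sky.Resources(infra='gcp/*/us-central1-a')  # pinned cloud and zone
aws_region = sky.Resources(infra='aws/us-east-1')      # pinned cloud and region
```

The zone-only form mirrors the `--infra */*/us-east-2b` flags used in `test_aws_zone` above.
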
diff --git a/tests/smoke_tests/test_sky_serve.py b/tests/smoke_tests/test_sky_serve.py
index f766a277f8b..62064148b94 100644
--- a/tests/smoke_tests/test_sky_serve.py
+++ b/tests/smoke_tests/test_sky_serve.py
@@ -113,13 +113,13 @@ def _get_service_name() -> str:
'echo "$s"')
_WAIT_PROVISION_REPR = (
- # Once controller is ready, check provisioning vs. vCPU=2. This is for
- # the `_check_replica_in_status`, which will check number of `vCPU=2` in the
+ # Once controller is ready, check provisioning vs. cpus=2. This is for
+ # the `_check_replica_in_status`, which will check the number of `cpus=2` in the
# `sky serve status` output and use that to suggest the number of replicas.
# However, replicas in the provisioning state may have a repr of `-`,
# since the desired `launched_resources` is not decided yet. This would
# cause an error when counting desired number of replicas. We wait for the
- # representation of `vCPU=2` the same with number of provisioning replicas
+ # representation of `cpus=2` to match the number of provisioning replicas
# to avoid this error.
# NOTE(tian): This assumes the replica will not do failover, as the
# requested resources are only 2 vCPUs and likely to be immediately available
@@ -127,7 +127,7 @@ def _get_service_name() -> str:
# failover
# Check #4565 for more information.
'num_provisioning=$(echo "$s" | grep "PROVISIONING" | wc -l); '
- 'num_vcpu_in_provision=$(echo "$s" | grep "PROVISIONING" | grep "vCPU=2" | wc -l); '
+ 'num_vcpu_in_provision=$(echo "$s" | grep "PROVISIONING" | grep "x(cpus=2, " | wc -l); '
'until [ "$num_provisioning" -eq "$num_vcpu_in_provision" ]; '
'do '
' echo "Waiting for provisioning resource repr ready..."; '
@@ -135,10 +135,10 @@ def _get_service_name() -> str:
' sleep 2; '
' s=$(sky serve status {name}); '
' num_provisioning=$(echo "$s" | grep "PROVISIONING" | wc -l); '
- ' num_vcpu_in_provision=$(echo "$s" | grep "PROVISIONING" | grep "vCPU=2" | wc -l); '
+ ' num_vcpu_in_provision=$(echo "$s" | grep "PROVISIONING" | grep "x(cpus=2, " | wc -l); '
'done; '
# Provisioning is complete
- 'echo "Provisioning complete. PROVISIONING: $num_provisioning, vCPU=2: $num_vcpu_in_provision"'
+ 'echo "Provisioning complete. PROVISIONING: $num_provisioning, cpus=2: $num_vcpu_in_provision"'
)
# Shell script snippet to monitor and wait for resolution of NOT_READY status:
@@ -197,7 +197,7 @@ def _check_replica_in_status(name: str,
timeout_seconds: int = 0) -> str:
"""Check replicas' status and count in sky serve status
- We will check vCPU=2, as all our tests use vCPU=2.
+ We will check cpus=2, as all our tests use cpus=2.
Args:
name: the name of the service
@@ -216,8 +216,8 @@ def _check_replica_in_status(name: str,
] and not status.startswith('FAILED'):
spot_str = ''
if is_spot:
- spot_str = r'\[Spot\]'
- resource_str = f'({spot_str}vCPU=2)'
+ spot_str = r'\[spot\]'
+ resource_str = f'x{spot_str}(cpus=2, '
check_conditions.append(
f'echo "$s" | grep "{resource_str}" | grep "{status}" | wc -l | '
f'grep {count}')
@@ -342,7 +342,7 @@ def generate_llm_test_command(prompt: str, expected_output: str) -> str:
test = smoke_tests_utils.Test(
'test-skyserve-llm',
[
- f'sky serve up -n {name} --cloud {generic_cloud} --gpus {accelerator} -y tests/skyserve/llm/service.yaml',
+ f'sky serve up -n {name} --infra {generic_cloud} --gpus {accelerator} -y tests/skyserve/llm/service.yaml',
_SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=1),
*[
generate_llm_test_command(prompt, output)
@@ -395,7 +395,7 @@ def test_skyserve_base_ondemand_fallback(generic_cloud: str):
test = smoke_tests_utils.Test(
'test-skyserve-base-ondemand-fallback',
[
- f'sky serve up -n {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/spot/base_ondemand_fallback.yaml',
+ f'sky serve up -n {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/spot/base_ondemand_fallback.yaml',
_SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=2),
_check_replica_in_status(name, [(1, True, 'READY'),
(1, False, 'READY')]),
@@ -417,7 +417,7 @@ def test_skyserve_dynamic_ondemand_fallback():
'test-skyserve-dynamic-ondemand-fallback',
[
smoke_tests_utils.launch_cluster_for_cloud_cmd('gcp', name),
- f'sky serve up -n {name} --cloud gcp {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/spot/dynamic_ondemand_fallback.yaml',
+ f'sky serve up -n {name} --infra gcp {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/spot/dynamic_ondemand_fallback.yaml',
f'sleep 40',
# 2 on-demand (provisioning) + 2 Spot (provisioning).
f'{_SERVE_STATUS_WAIT.format(name=name)}; echo "$s";'
@@ -475,7 +475,7 @@ def test_skyserve_user_bug_restart(generic_cloud: str):
'test-skyserve-user-bug-restart',
[
increase_initial_delay_seconds(
- f'sky serve up -n {name} --cloud {generic_cloud} {resource_arg} -y tests/skyserve/restart/user_bug.yaml'
+ f'sky serve up -n {name} --infra {generic_cloud} {resource_arg} -y tests/skyserve/restart/user_bug.yaml'
),
f's=$(sky serve status {name}); echo "$s";'
'until echo "$s" | grep -A 100 "Service Replicas" | grep "SHUTTING_DOWN"; '
@@ -490,7 +490,7 @@ def test_skyserve_user_bug_restart(generic_cloud: str):
f'echo "$s" | grep -A 100 "Service Replicas" | grep "{name}" | wc -l | grep 1; '
f'echo "$s" | grep -B 100 "NO_REPLICA" | grep "0/0"',
increase_initial_delay_seconds(
- f'sky serve update {name} --cloud {generic_cloud} {resource_arg} -y tests/skyserve/auto_restart.yaml'
+ f'sky serve update {name} --infra {generic_cloud} {resource_arg} -y tests/skyserve/auto_restart.yaml'
),
f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; '
'until curl --connect-timeout 10 --max-time 10 $endpoint | grep "Hi, SkyPilot here"; do sleep 1; done; sleep 2; '
@@ -513,7 +513,7 @@ def test_skyserve_load_balancer(generic_cloud: str):
test = smoke_tests_utils.Test(
'test-skyserve-load-balancer',
[
- f'sky serve up -n {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/load_balancer/service.yaml',
+ f'sky serve up -n {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/load_balancer/service.yaml',
_SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=3),
f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; '
f'{_SERVE_STATUS_WAIT.format(name=name)}; '
@@ -587,7 +587,7 @@ def test_skyserve_cancel(generic_cloud: str):
test = smoke_tests_utils.Test(
'test-skyserve-cancel',
[
- f'sky serve up -n {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/cancel/cancel.yaml',
+ f'sky serve up -n {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/cancel/cancel.yaml',
_SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=1),
f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; python3 '
'tests/skyserve/cancel/send_cancel_request.py '
@@ -616,7 +616,7 @@ def test_skyserve_streaming(generic_cloud: str):
test = smoke_tests_utils.Test(
'test-skyserve-streaming',
[
- f'sky serve up -n {name} --cloud {generic_cloud} {resource_arg} -y tests/skyserve/streaming/streaming.yaml',
+ f'sky serve up -n {name} --infra {generic_cloud} {resource_arg} -y tests/skyserve/streaming/streaming.yaml',
_SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=1),
f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; '
'python3 tests/skyserve/streaming/send_streaming_request.py '
@@ -637,7 +637,7 @@ def test_skyserve_readiness_timeout_fail(generic_cloud: str):
test = smoke_tests_utils.Test(
'test-skyserve-readiness-timeout-fail',
[
- f'sky serve up -n {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/readiness_timeout/task.yaml',
+ f'sky serve up -n {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/readiness_timeout/task.yaml',
# None of the readiness probes will pass, so the service will be
# terminated after the initial delay.
f's=$(sky serve status {name}); '
@@ -663,7 +663,7 @@ def test_skyserve_large_readiness_timeout(generic_cloud: str):
test = smoke_tests_utils.Test(
'test-skyserve-large-readiness-timeout',
[
- f'sky serve up -n {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/readiness_timeout/task_large_timeout.yaml',
+ f'sky serve up -n {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/readiness_timeout/task_large_timeout.yaml',
_SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=1),
f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; '
'request_output=$(curl $endpoint); echo "$request_output"; echo "$request_output" | grep "Hi, SkyPilot here"',
@@ -689,10 +689,10 @@ def test_skyserve_update(generic_cloud: str):
test = smoke_tests_utils.Test(
'test-skyserve-update',
[
- f'sky serve up -n {name} --cloud {generic_cloud} {resource_arg} -y tests/skyserve/update/old.yaml',
+ f'sky serve up -n {name} --infra {generic_cloud} {resource_arg} -y tests/skyserve/update/old.yaml',
_SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=2),
f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; curl $endpoint | grep "Hi, SkyPilot here"',
- f'sky serve update {name} --cloud {generic_cloud} {resource_arg} --mode blue_green -y tests/skyserve/update/new.yaml',
+ f'sky serve update {name} --infra {generic_cloud} {resource_arg} --mode blue_green -y tests/skyserve/update/new.yaml',
# sleep before update is registered.
'sleep 20',
f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; '
@@ -733,12 +733,12 @@ def test_skyserve_rolling_update(generic_cloud: str):
f'test-skyserve-rolling-update',
[
increase_initial_delay_seconds(
- f'sky serve up -n {name} --cloud {generic_cloud} {resource_arg} -y tests/skyserve/update/old.yaml'
+ f'sky serve up -n {name} --infra {generic_cloud} {resource_arg} -y tests/skyserve/update/old.yaml'
),
_SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=2),
f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; curl $endpoint | grep "Hi, SkyPilot here"',
increase_initial_delay_seconds(
- f'sky serve update {name} --cloud {generic_cloud} {resource_arg} -y tests/skyserve/update/new.yaml'
+ f'sky serve update {name} --infra {generic_cloud} {resource_arg} -y tests/skyserve/update/new.yaml'
),
# Make sure the traffic is mixed across two versions, the replicas
# with even id will sleep 120 seconds before being ready, so we
@@ -785,10 +785,10 @@ def test_skyserve_fast_update(generic_cloud: str):
test = smoke_tests_utils.Test(
'test-skyserve-fast-update',
[
- f'sky serve up -n {name} -y {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud {generic_cloud} tests/skyserve/update/bump_version_before.yaml',
+ f'sky serve up -n {name} -y {smoke_tests_utils.LOW_RESOURCE_ARG} --infra {generic_cloud} tests/skyserve/update/bump_version_before.yaml',
_SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=2),
f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; curl $endpoint | grep "Hi, SkyPilot here"',
- f'sky serve update {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --mode blue_green -y tests/skyserve/update/bump_version_after.yaml',
+ f'sky serve update {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --mode blue_green -y tests/skyserve/update/bump_version_after.yaml',
# sleep to wait for update to be registered.
'sleep 40',
# 2 on-demand (ready) + 1 on-demand (provisioning).
@@ -802,7 +802,7 @@ def test_skyserve_fast_update(generic_cloud: str):
_check_service_version(name, "2"),
f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; curl $endpoint | grep "Hi, SkyPilot here"',
# Test rolling update
- f'sky serve update {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/update/bump_version_before.yaml',
+ f'sky serve update {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/update/bump_version_before.yaml',
# sleep to wait for update to be registered.
'sleep 25',
# 2 on-demand (ready) + 1 on-demand (shutting down).
@@ -833,14 +833,14 @@ def test_skyserve_update_autoscale(generic_cloud: str):
f'test-skyserve-update-autoscale',
[
increase_initial_delay_seconds(
- f'sky serve up -n {name} --cloud {generic_cloud} {resource_arg} -y tests/skyserve/update/num_min_two.yaml'
+ f'sky serve up -n {name} --infra {generic_cloud} {resource_arg} -y tests/skyserve/update/num_min_two.yaml'
),
_SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=2) +
_check_service_version(name, "1"),
f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; '
'curl $endpoint | grep "Hi, SkyPilot here"',
increase_initial_delay_seconds(
- f'sky serve update {name} --cloud {generic_cloud} {resource_arg} --mode blue_green -y tests/skyserve/update/num_min_one.yaml'
+ f'sky serve update {name} --infra {generic_cloud} {resource_arg} --mode blue_green -y tests/skyserve/update/num_min_one.yaml'
),
# sleep before update is registered.
'sleep 20',
@@ -851,7 +851,7 @@ def test_skyserve_update_autoscale(generic_cloud: str):
'curl $endpoint | grep "Hi, SkyPilot here!"',
# Rolling Update
increase_initial_delay_seconds(
- f'sky serve update {name} --cloud {generic_cloud} {resource_arg} -y tests/skyserve/update/num_min_two.yaml'
+ f'sky serve update {name} --infra {generic_cloud} {resource_arg} -y tests/skyserve/update/num_min_two.yaml'
),
# sleep before update is registered.
'sleep 20',
@@ -909,12 +909,12 @@ def test_skyserve_new_autoscaler_update(mode: str, generic_cloud: str):
test = smoke_tests_utils.Test(
f'test-skyserve-new-autoscaler-update-{mode}',
[
- f'sky serve up -n {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/update/new_autoscaler_before.yaml',
+ f'sky serve up -n {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/update/new_autoscaler_before.yaml',
_SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=2) +
_check_service_version(name, "1"),
f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; '
's=$(curl $endpoint); echo "$s"; echo "$s" | grep "Hi, SkyPilot here"',
- f'sky serve update {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --mode {mode} -y tests/skyserve/update/new_autoscaler_after.yaml',
+ f'sky serve update {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} --mode {mode} -y tests/skyserve/update/new_autoscaler_after.yaml',
# Wait for update to be registered
'sleep 90',
wait_until_no_pending,
@@ -953,7 +953,7 @@ def test_skyserve_failures(generic_cloud: str):
'test-skyserve-failures',
[
increase_initial_delay_seconds(
- f'sky serve up -n {name} --cloud {generic_cloud} {resource_arg} -y tests/skyserve/failures/initial_delay.yaml'
+ f'sky serve up -n {name} --infra {generic_cloud} {resource_arg} -y tests/skyserve/failures/initial_delay.yaml'
),
f's=$(sky serve status {name}); '
f'until echo "$s" | grep "FAILED_INITIAL_DELAY"; do '
@@ -964,7 +964,7 @@ def test_skyserve_failures(generic_cloud: str):
# Make sure no new replicas are started for early failure.
f'echo "$s" | grep -A 100 "Service Replicas" | grep "{name}" | wc -l | grep 2;',
increase_initial_delay_seconds(
- f'sky serve update {name} --cloud {generic_cloud} {resource_arg} -y tests/skyserve/failures/probing.yaml'
+ f'sky serve update {name} --infra {generic_cloud} {resource_arg} -y tests/skyserve/failures/probing.yaml'
),
f's=$(sky serve status {name}); '
# Wait for replica to be ready.
@@ -1012,7 +1012,7 @@ def test_skyserve_https(generic_cloud: str):
test = smoke_tests_utils.Test(
'test-skyserve-https',
[
- f'sky serve up -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --cloud {generic_cloud} -y tests/skyserve/https/service.yaml '
+ f'sky serve up -n {name} {smoke_tests_utils.LOW_RESOURCE_ARG} --infra {generic_cloud} -y tests/skyserve/https/service.yaml '
f'--env TLS_KEYFILE_ENV_VAR={keyfile} --env TLS_CERTFILE_ENV_VAR={certfile}',
_SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=1),
f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; '
@@ -1043,7 +1043,7 @@ def test_skyserve_multi_ports(generic_cloud: str):
test = smoke_tests_utils.Test(
'test-skyserve-multi-ports',
[
- f'sky serve up -n {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/multi_ports.yaml',
+ f'sky serve up -n {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} -y tests/skyserve/multi_ports.yaml',
_SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=1),
f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; '
'curl $replica_endpoint | grep "Hi, SkyPilot here"; '
@@ -1069,7 +1069,7 @@ def test_user_dependencies(generic_cloud: str):
test = smoke_tests_utils.Test(
'user-dependencies',
[
- f'sky launch -y -c {name} --cloud {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} "pip install ray>2.11; ray start --head"',
+ f'sky launch -y -c {name} --infra {generic_cloud} {smoke_tests_utils.LOW_RESOURCE_ARG} "pip install ray>2.11; ray start --head"',
f'sky logs {name} 1 --status',
f'sky exec {name} "echo hi"',
f'sky logs {name} 2 --status',
diff --git a/tests/stress/mountedstorage/mount_stress.yaml b/tests/stress/mountedstorage/mount_stress.yaml
index 41b9f19656b..8caa3f49f1c 100644
--- a/tests/stress/mountedstorage/mount_stress.yaml
+++ b/tests/stress/mountedstorage/mount_stress.yaml
@@ -10,7 +10,7 @@
name: stress
resources:
- cloud: aws
+ infra: aws
workdir: .
diff --git a/tests/test_failover.py b/tests/test_failover.py
index c8159213d85..8b50115e58e 100644
--- a/tests/test_failover.py
+++ b/tests/test_failover.py
@@ -80,7 +80,7 @@ def mock_create_instances(ec2_fail_fast, cluster_name, node_config, tags,
monkeypatch.setattr(aws_instance, '_create_instances',
mock_create_instances)
task = sky.Task(run='echo hi')
- task.set_resources(sky.Resources(sky.AWS(), instance_type='t2.micro'))
+ task.set_resources(sky.Resources(infra='aws', instance_type='t2.micro'))
with unittest.mock.patch.object(
cloud_vm_ray_backend.FailoverCloudErrorHandlerV2,
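
The Python-level test changes below follow the same pattern as the CLI flags: the old constructor call that took a cloud object plus separate `region`/`zone` keyword arguments is replaced by a single `infra` string. A hedged before/after sketch, assuming the pre- and post-patch `sky.Resources` signatures shown in the removed and added lines:

```python
import sky

# Old style (the removed lines): a cloud object plus region/zone kwargs.
# r = sky.Resources(sky.AWS(), instance_type='p4d.24xlarge',
#                   region='us-east-1', zone='us-east-1a')

# New style (the added lines): a single infra string carries the placement.
r = sky.Resources(infra='aws/us-east-1/us-east-1a',
                  instance_type='p4d.24xlarge')
# Assumed attribute values after parsing the infra string:
assert (str(r.cloud).lower(), r.region, r.zone) == (
    'aws', 'us-east-1', 'us-east-1a')
```
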
diff --git a/tests/test_jobs.py b/tests/test_jobs.py
index a5cebd0c3d1..1ac2e76be72 100644
--- a/tests/test_jobs.py
+++ b/tests/test_jobs.py
@@ -38,10 +38,8 @@ def _mock_cluster_state(_mock_db_conn, enable_all_clouds):
cluster_name_on_cloud='test-cluster1',
cluster_yaml='/tmp/cluster1.yaml',
launched_nodes=2,
- launched_resources=sky.Resources(sky.AWS(),
- instance_type='p4d.24xlarge',
- region='us-east-1',
- zone='us-east-1a'),
+ launched_resources=sky.Resources(infra='aws/us-east-1/us-east-1a',
+ instance_type='p4d.24xlarge'),
)
global_user_state.add_or_update_cluster(
'test-cluster1',
@@ -53,11 +51,9 @@ def _mock_cluster_state(_mock_db_conn, enable_all_clouds):
cluster_name_on_cloud='test-cluster2',
cluster_yaml='/tmp/cluster2.yaml',
launched_nodes=1,
- launched_resources=sky.Resources(sky.GCP(),
+ launched_resources=sky.Resources(infra='gcp/us-west1/us-west1-a',
instance_type='n1-highmem-64',
- accelerators='V100:4',
- region='us-west1',
- zone='us-west1-a'),
+ accelerators='V100:4'),
)
global_user_state.add_or_update_cluster(
'test-cluster2',
@@ -69,9 +65,8 @@ def _mock_cluster_state(_mock_db_conn, enable_all_clouds):
cluster_name_on_cloud='test-cluster3',
cluster_yaml='/tmp/cluster3.yaml',
launched_nodes=1,
- launched_resources=sky.Resources(sky.Azure(),
- instance_type='Standard_D4s_v3',
- region='eastus'),
+ launched_resources=sky.Resources(infra='azure/eastus',
+ instance_type='Standard_D4s_v3'),
)
global_user_state.add_or_update_cluster(
'test-cluster3',
@@ -84,10 +79,8 @@ def _mock_cluster_state(_mock_db_conn, enable_all_clouds):
cluster_yaml='/tmp/disk-tier1.yaml',
launched_nodes=1,
launched_resources=sky.Resources(
- sky.AWS(),
+ infra='aws/us-east-1/us-east-1a',
instance_type='m6i.2xlarge',
- region='us-east-1',
- zone='us-east-1a',
disk_tier=resources_utils.DiskTier.BEST))
global_user_state.add_or_update_cluster(
'test-disk-tier1',
@@ -100,10 +93,8 @@ def _mock_cluster_state(_mock_db_conn, enable_all_clouds):
cluster_yaml='/tmp/disk-tier2.yaml',
launched_nodes=1,
launched_resources=sky.Resources(
- sky.GCP(),
+ infra='gcp/us-west1/us-west1-a',
instance_type='n2-standard-8',
- region='us-west1',
- zone='us-west1-a',
disk_tier=resources_utils.DiskTier.MEDIUM))
global_user_state.add_or_update_cluster(
'test-disk-tier2',
@@ -150,9 +141,8 @@ def test_launch_exec(self):
sky.exec(task, cluster_name='test-cluster1', dryrun=True))
task.set_resources(
sky.Resources(
- sky.AWS(),
+ infra='aws/us-east-1',
accelerators='A100:1',
- region='us-east-1',
))
sky.stream_and_get(
sky.launch(task, cluster_name='test-cluster1', dryrun=True))
@@ -166,7 +156,7 @@ def test_launch_exec(self):
sky.stream_and_get(
sky.exec(task, cluster_name='test-cluster2', dryrun=True))
task.set_resources(
- sky.Resources(sky.GCP(), accelerators='V100:3', region='us-west1'))
+ sky.Resources(infra='gcp/us-west1', accelerators='V100:3'))
sky.stream_and_get(
sky.launch(task, cluster_name='test-cluster2', dryrun=True))
sky.stream_and_get(
@@ -217,10 +207,10 @@ def test_launch_exec_mismatch(self):
self._run_launch_exec_with_error(task, 'test-cluster3')
# Cloud mismatch
- task.set_resources(sky.Resources(sky.AWS(), accelerators='V100'))
+ task.set_resources(sky.Resources(infra='aws', accelerators='V100'))
self._run_launch_exec_with_error(task, 'test-cluster2')
- task.set_resources(sky.Resources(sky.GCP()))
+ task.set_resources(sky.Resources(infra='gcp'))
self._run_launch_exec_with_error(task, 'test-cluster1')
# Disk tier mismatch
diff --git a/tests/test_jobs_and_serve.py b/tests/test_jobs_and_serve.py
index 21e369e26f5..3f8d7344c21 100644
--- a/tests/test_jobs_and_serve.py
+++ b/tests/test_jobs_and_serve.py
@@ -74,9 +74,8 @@ def _mock_cluster_state(_mock_db_conn, tmp_path):
cluster_name_on_cloud='test-cluster1',
cluster_yaml=_generate_tmp_yaml(tmp_path, 'cluster1.yaml'),
launched_nodes=2,
- launched_resources=sky.Resources(sky.AWS(),
- instance_type='p3.2xlarge',
- region='us-east-1'),
+ launched_resources=sky.Resources(infra='aws/us-east-1',
+ instance_type='p3.2xlarge'),
)
global_user_state.add_or_update_cluster(
'test-cluster1',
@@ -88,10 +87,9 @@ def _mock_cluster_state(_mock_db_conn, tmp_path):
cluster_name_on_cloud='test-cluster2',
cluster_yaml=_generate_tmp_yaml(tmp_path, 'cluster2.yaml'),
launched_nodes=1,
- launched_resources=sky.Resources(sky.GCP(),
+ launched_resources=sky.Resources(infra='gcp/us-west1',
instance_type='a2-highgpu-4g',
- accelerators={'A100': 4},
- region='us-west1'),
+ accelerators={'A100': 4}),
)
global_user_state.add_or_update_cluster(
'test-cluster2',
@@ -103,9 +101,8 @@ def _mock_cluster_state(_mock_db_conn, tmp_path):
cluster_name_on_cloud='test-cluster3',
cluster_yaml=_generate_tmp_yaml(tmp_path, 'cluster3.yaml'),
launched_nodes=4,
- launched_resources=sky.Resources(sky.Azure(),
- instance_type='Standard_D4s_v3',
- region='eastus'),
+ launched_resources=sky.Resources(infra='AZURE/eastus',
+ instance_type='Standard_D4s_v3'),
)
global_user_state.add_or_update_cluster(
'test-cluster3',
@@ -121,9 +118,8 @@ def _mock_jobs_controller(_mock_db_conn, tmp_path):
cluster_name_on_cloud=common.JOB_CONTROLLER_NAME,
cluster_yaml=_generate_tmp_yaml(tmp_path, 'jobs_controller.yaml'),
launched_nodes=1,
- launched_resources=sky.Resources(sky.AWS(),
- instance_type='m4.2xlarge',
- region='us-west-1'),
+ launched_resources=sky.Resources(infra='aws/us-west-1',
+ instance_type='m4.2xlarge'),
)
global_user_state.add_or_update_cluster(
common.JOB_CONTROLLER_NAME,
@@ -140,9 +136,8 @@ def _mock_serve_controller(_mock_db_conn, tmp_path):
cluster_name_on_cloud=common.SKY_SERVE_CONTROLLER_NAME,
cluster_yaml=yaml_path,
launched_nodes=1,
- launched_resources=sky.Resources(sky.AWS(),
- instance_type='m4.2xlarge',
- region='us-west-1'),
+ launched_resources=sky.Resources(infra='aws/us-west-1',
+ instance_type='m4.2xlarge'),
stable_internal_external_ips=[('1.2.3.4', '4.3.2.1')],
stable_ssh_ports=[22],
)
diff --git a/tests/test_optimizer_dryruns.py b/tests/test_optimizer_dryruns.py
index 2de21695bd9..4e594025287 100644
--- a/tests/test_optimizer_dryruns.py
+++ b/tests/test_optimizer_dryruns.py
@@ -86,16 +86,16 @@ def _test_resources_launch(*resources_args,
def test_resources_aws(enable_all_clouds):
- _test_resources_launch(sky.AWS(), 'p3.2xlarge')
+ _test_resources_launch(infra='aws', instance_type='p3.2xlarge')
def test_resources_azure(enable_all_clouds):
- _test_resources_launch(sky.Azure(), 'Standard_NC24s_v3')
+ _test_resources_launch(infra='azure', instance_type='Standard_NC24s_v3')
def test_resources_gcp(enable_all_clouds):
- _test_resources_launch(sky.GCP(), 'n1-standard-16')
- _test_resources_launch(sky.GCP(), 'a3-highgpu-8g')
+ _test_resources_launch(infra='gcp', instance_type='n1-standard-16')
+ _test_resources_launch(infra='gcp', instance_type='a3-highgpu-8g')
def test_partial_cpus(enable_all_clouds):
@@ -419,20 +419,15 @@ def test_invalid_image(enable_all_clouds):
def test_valid_image(enable_all_clouds):
- _test_resources(cloud=sky.AWS(),
- region='us-east-1',
- image_id='ami-0868a20f5a3bf9702')
+ _test_resources(infra='aws/us-east-1', image_id='ami-0868a20f5a3bf9702')
_test_resources(
- cloud=sky.GCP(),
- region='us-central1',
+ infra='gcp/us-central1',
image_id=
- 'projects/deeplearning-platform-release/global/images/family/common-cpu-v20230126'
- )
+ 'projects/ubuntu-os-cloud/global/images/ubuntu-2204-jammy-v20240927')
_test_resources(
- cloud=sky.GCP(),
+ infra='gcp',
image_id=
- 'projects/deeplearning-platform-release/global/images/family/common-cpu-v20230126'
- )
+ 'projects/ubuntu-os-cloud/global/images/ubuntu-2204-jammy-v20240927')
def test_parse_cpus_from_yaml():
@@ -566,9 +561,8 @@ def test_invalid_accelerators_regions(enable_all_clouds):
task = sky.Task(run='echo hi')
task.set_resources(
sky.Resources(
- sky.AWS(),
+ infra='aws/us-west-1',
accelerators='A100:8',
- region='us-west-1',
))
with pytest.raises(exceptions.ResourcesUnavailableError) as e:
sky.stream_and_get(
@@ -591,7 +585,7 @@ def _test_optimize_speed(resources: sky.Resources):
def test_optimize_speed(enable_all_clouds):
_test_optimize_speed(sky.Resources(cpus=4))
for cloud in registry.CLOUD_REGISTRY.values():
- _test_optimize_speed(sky.Resources(cloud, cpus='4+'))
+ _test_optimize_speed(sky.Resources(infra=str(cloud), cpus='4+'))
_test_optimize_speed(sky.Resources(cpus='4+', memory='4+'))
_test_optimize_speed(
sky.Resources(cpus='4+', memory='4+', accelerators='V100:1'))
diff --git a/tests/test_optimizer_random_dag.py b/tests/test_optimizer_random_dag.py
index 1a848097ab7..8efaaf098e4 100644
--- a/tests/test_optimizer_random_dag.py
+++ b/tests/test_optimizer_random_dag.py
@@ -83,7 +83,7 @@ def generate_random_dag(
if 'tpu' in candidate.accelerator_name:
instance_type = 'TPU-VM'
resources = sky.Resources(
- cloud=registry.CLOUD_REGISTRY.from_str(candidate.cloud),
+ infra=candidate.cloud,
instance_type=instance_type,
accelerators={
candidate.accelerator_name: candidate.accelerator_count
diff --git a/tests/test_yamls/failed_setup_pipeline.yaml b/tests/test_yamls/failed_setup_pipeline.yaml
index 81e5f2bde34..3d4b3885b18 100644
--- a/tests/test_yamls/failed_setup_pipeline.yaml
+++ b/tests/test_yamls/failed_setup_pipeline.yaml
@@ -9,8 +9,8 @@ resources:
cpus: 2
memory: 4+
any_of:
- - cloud: aws
- - cloud: gcp
+ - infra: aws
+ - infra: gcp
setup: |
@@ -27,8 +27,8 @@ resources:
cpus: 2
memory: 4+
any_of:
- - cloud: aws
- - cloud: gcp
+ - infra: aws
+ - infra: gcp
setup: |
@@ -47,8 +47,8 @@ resources:
cpus: 2
memory: 4+
any_of:
- - cloud: aws
- - cloud: gcp
+ - infra: aws
+ - infra: gcp
setup: |
echo setup for eval
@@ -67,8 +67,8 @@ resources:
cpus: 2
memory: 4+
any_of:
- - cloud: aws
- - cloud: gcp
+ - infra: aws
+ - infra: gcp
setup: |
echo setup for eval
diff --git a/tests/test_yamls/gcp_per_region_images.yaml b/tests/test_yamls/gcp_per_region_images.yaml
index db07061d5d9..8d309ca8b8f 100644
--- a/tests/test_yamls/gcp_per_region_images.yaml
+++ b/tests/test_yamls/gcp_per_region_images.yaml
@@ -1,5 +1,5 @@
resources:
- cloud: gcp
+ infra: gcp
image_id:
us-central1: skypilot:cpu-debian-10
us-west3: projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20230112
diff --git a/tests/test_yamls/minimal_test_quick_tests_core.yaml b/tests/test_yamls/minimal_test_quick_tests_core.yaml
index 15857e972dd..9159f22ad00 100644
--- a/tests/test_yamls/minimal_test_quick_tests_core.yaml
+++ b/tests/test_yamls/minimal_test_quick_tests_core.yaml
@@ -1,5 +1,5 @@
resources:
- cloud: aws
+ infra: aws
instance_type: t3.small
file_mounts:
diff --git a/tests/test_yamls/pipeline.yaml b/tests/test_yamls/pipeline.yaml
index 3f3e0b1e563..14514c6c736 100644
--- a/tests/test_yamls/pipeline.yaml
+++ b/tests/test_yamls/pipeline.yaml
@@ -8,8 +8,8 @@ resources:
memory: 4+
use_spot: true
any_of:
- - cloud: aws
- - cloud: gcp
+ - infra: aws
+ - infra: gcp
setup: |
echo setup for train
@@ -27,8 +27,8 @@ resources:
cpus: 2+
memory: 4+
any_of:
- - cloud: aws
- - cloud: gcp
+ - infra: aws
+ - infra: gcp
setup: |
echo setup for train
diff --git a/tests/test_yamls/pipeline_aws.yaml b/tests/test_yamls/pipeline_aws.yaml
index fe074f74128..f11afae5bbd 100644
--- a/tests/test_yamls/pipeline_aws.yaml
+++ b/tests/test_yamls/pipeline_aws.yaml
@@ -4,8 +4,7 @@ name: pipeline
name: a
resources:
- cloud: aws
- region: us-east-2
+ infra: aws/us-east-2
cpus: 2+
memory: 4+
@@ -21,7 +20,7 @@ run: |
name: b
resources:
- cloud: aws
+ infra: aws
cpus: 2+
memory: 4+
@@ -39,7 +38,7 @@ run: |
name: eval1
resources:
- cloud: aws
+ infra: aws
cpus: 2+
memory: 4+
@@ -57,7 +56,7 @@ run: |
name: eval2
resources:
- cloud: aws
+ infra: aws
cpus: 2+
memory: 4+
diff --git a/tests/test_yamls/pipeline_gcp.yaml b/tests/test_yamls/pipeline_gcp.yaml
index c32b423a171..5d3cb1ba142 100644
--- a/tests/test_yamls/pipeline_gcp.yaml
+++ b/tests/test_yamls/pipeline_gcp.yaml
@@ -4,8 +4,7 @@ name: pipeline
name: a
resources:
- cloud: gcp
- zone: us-east4-b
+ infra: gcp/*/us-east4-b
cpus: 2+
memory: 4+
@@ -21,7 +20,7 @@ run: |
name: b
resources:
- cloud: gcp
+ infra: gcp
cpus: 2+
memory: 4+
@@ -39,7 +38,7 @@ run: |
name: eval1
resources:
- cloud: gcp
+ infra: gcp
cpus: 2+
memory: 4+
@@ -57,7 +56,7 @@ run: |
name: eval2
resources:
- cloud: gcp
+ infra: gcp
cpus: 2+
memory: 4+
diff --git a/tests/test_yamls/test_custom_image.yaml b/tests/test_yamls/test_custom_image.yaml
index 2b304c73bca..6479d25fb65 100644
--- a/tests/test_yamls/test_custom_image.yaml
+++ b/tests/test_yamls/test_custom_image.yaml
@@ -1,6 +1,5 @@
resources:
- cloud: aws
- region: us-east-2
+ infra: aws/us-east-2
# Nvidia image from
# https://aws.amazon.com/marketplace/pp/prodview-rf7na2b2ttvdg
image_id: ami-062ddd90fb6f8267a
diff --git a/tests/test_yamls/test_multiple_accelerators_unordered_with_default.yaml b/tests/test_yamls/test_multiple_accelerators_unordered_with_default.yaml
index f6e143d8378..233ba8e4caf 100644
--- a/tests/test_yamls/test_multiple_accelerators_unordered_with_default.yaml
+++ b/tests/test_yamls/test_multiple_accelerators_unordered_with_default.yaml
@@ -4,8 +4,8 @@ resources:
use_spot: true
accelerators: {'A100:1', 'T4:1', 'V100:1'}
any_of:
- - cloud: aws
- - cloud: gcp
+ - infra: aws
+ - infra: gcp
run: |
nvidia-smi
diff --git a/tests/test_yamls/test_multiple_resources.yaml b/tests/test_yamls/test_multiple_resources.yaml
index 37c0c25e867..6771c52a70c 100644
--- a/tests/test_yamls/test_multiple_resources.yaml
+++ b/tests/test_yamls/test_multiple_resources.yaml
@@ -2,12 +2,11 @@ name: multi-resources
resources:
any_of:
- - cloud: aws
- region: us-east-1
+ - infra: aws/us-east-1
accelerators: A100:8
- - cloud: gcp
+ - infra: gcp
accelerators: T4:4
- - cloud: aws
+ - infra: aws
run:
- echo hi
\ No newline at end of file
+ echo hi
diff --git a/tests/unit_tests/test_controller_utils.py b/tests/unit_tests/test_controller_utils.py
index d3704afcd78..54050be9be7 100644
--- a/tests/unit_tests/test_controller_utils.py
+++ b/tests/unit_tests/test_controller_utils.py
@@ -7,6 +7,7 @@
from sky.jobs import constants as managed_job_constants
from sky.serve import constants as serve_constants
from sky.utils import controller_utils
+from sky.utils import registry
_DEFAULT_AUTOSTOP = {
'down': False,
@@ -73,21 +74,17 @@ def get_custom_controller_resources(keys, default):
def _check_controller_resources(
- controller_resources: Set[sky.Resources],
- expected_combinations: Set[Tuple[Optional[str], Optional[str],
- Optional[str]]],
+ controller_resources: Set[sky.Resources], expected_infra_list: Set[str],
default_controller_resources: Dict[str, Any]) -> None:
"""Helper function to check that the controller resources match the
expected combinations."""
for r in controller_resources:
config = r.to_yaml_config()
- cloud = config.pop('cloud')
- region = config.pop('region', None)
- zone = config.pop('zone', None)
- assert (cloud, region, zone) in expected_combinations
- expected_combinations.remove((cloud, region, zone))
+ infra = config.pop('infra')
+ assert infra in expected_infra_list
+ expected_infra_list.remove(infra)
assert config == default_controller_resources, config
- assert not expected_combinations
+ assert not expected_infra_list
@pytest.mark.parametrize(('controller_type', 'default_controller_resources'), [
@@ -107,28 +104,23 @@ def test_get_controller_resources_with_task_resources(
# 1. All resources have a cloud specified. All of them
# could host controllers. Return a set, each item has
# one cloud specified plus the default resources.
- all_clouds = {sky.AWS(), sky.GCP(), sky.Azure()}
- expected_combinations = {(str(c), None, None) for c in all_clouds}
+ all_clouds = {'aws', 'gcp', 'azure'}
+ expected_infra_set = all_clouds
controller_resources = controller_utils.get_controller_resources(
controller=controller_utils.Controllers.from_type(controller_type),
- task_resources=[sky.Resources(cloud=c) for c in all_clouds])
- _check_controller_resources(controller_resources, expected_combinations,
+ task_resources=[sky.Resources(infra=c) for c in all_clouds])
+ _check_controller_resources(controller_resources, expected_infra_set,
default_controller_resources)
# 2. All resources have a cloud specified. Some of them
# could NOT host controllers. Return a set containing only
# those that could host controllers.
all_clouds = {
- sky.AWS(),
- sky.GCP(),
- sky.Azure(),
- sky.Fluidstack(),
- sky.Kubernetes(),
- sky.Lambda(),
- sky.RunPod()
+ 'aws', 'gcp', 'azure', 'fluidstack', 'kubernetes', 'lambda', 'runpod'
}
- def _could_host_controllers(cloud: sky.clouds.Cloud) -> bool:
+ def _could_host_controllers(cloud_str: str) -> bool:
+ cloud = registry.CLOUD_REGISTRY.from_str(cloud_str)
try:
cloud.check_features_are_supported(
sky.Resources(),
@@ -137,13 +129,11 @@ def _could_host_controllers(cloud: sky.clouds.Cloud) -> bool:
return False
return True
- expected_combinations = {
- (str(c), None, None) for c in all_clouds if _could_host_controllers(c)
- }
+ expected_infra_set = {c for c in all_clouds if _could_host_controllers(c)}
controller_resources = controller_utils.get_controller_resources(
controller=controller_utils.Controllers.from_type(controller_type),
- task_resources=[sky.Resources(cloud=c) for c in all_clouds])
- _check_controller_resources(controller_resources, expected_combinations,
+ task_resources=[sky.Resources(infra=c) for c in all_clouds])
+ _check_controller_resources(controller_resources, expected_infra_set,
default_controller_resources)
# 3. Some resources does not have cloud specified.
@@ -152,7 +142,7 @@ def _could_host_controllers(cloud: sky.clouds.Cloud) -> bool:
controller=controller_utils.Controllers.from_type(controller_type),
task_resources=[
sky.Resources(accelerators='L4'),
- sky.Resources(cloud=sky.RunPod(), accelerators='A40'),
+ sky.Resources(infra='runpod', accelerators='A40'),
])
assert len(controller_resources) == 1
config = list(controller_resources)[0].to_yaml_config()
@@ -170,16 +160,18 @@ def _could_host_controllers(cloud: sky.clouds.Cloud) -> bool:
zone='us-central1-a'),
sky.Resources(cloud=sky.GCP(),
region='europe-west1',
- zone='europe-west1-b')
+ zone='europe-west1-b'),
]
- expected_combinations = {('AWS', 'us-east-1', 'us-east-1a'),
- ('AWS', 'ap-south-1', 'ap-south-1b'),
- ('GCP', 'us-central1', 'us-central1-a'),
- ('GCP', 'europe-west1', 'europe-west1-b')}
+ expected_infra_set = {
+ 'aws/us-east-1/us-east-1a',
+ 'aws/ap-south-1/ap-south-1b',
+ 'gcp/us-central1/us-central1-a',
+ 'gcp/europe-west1/europe-west1-b',
+ }
controller_resources = controller_utils.get_controller_resources(
controller=controller_utils.Controllers.from_type(controller_type),
task_resources=all_cloud_regions_zones)
- _check_controller_resources(controller_resources, expected_combinations,
+ _check_controller_resources(controller_resources, expected_infra_set,
default_controller_resources)
# 5. Clouds and regions are specified, but zones are partially specified.
@@ -190,17 +182,15 @@ def _could_host_controllers(cloud: sky.clouds.Cloud) -> bool:
controller_resources = controller_utils.get_controller_resources(
controller=controller_utils.Controllers.from_type(controller_type),
task_resources=[
- sky.Resources(cloud=sky.AWS(), region='us-west-2'),
- sky.Resources(cloud=sky.AWS(),
- region='us-west-2',
- zone='us-west-2b'),
- sky.Resources(cloud=sky.GCP(),
- region='us-central1',
- zone='us-central1-a')
+ sky.Resources(infra='aws/us-west-2'),
+ sky.Resources(infra='aws/us-west-2/us-west-2b'),
+ sky.Resources(infra='gcp/us-central1/us-central1-a')
])
- expected_combinations = {('AWS', 'us-west-2', None),
- ('GCP', 'us-central1', 'us-central1-a')}
- _check_controller_resources(controller_resources, expected_combinations,
+ expected_infra_set = {
+ 'aws/us-west-2',
+ 'gcp/us-central1/us-central1-a',
+ }
+ _check_controller_resources(controller_resources, expected_infra_set,
default_controller_resources)
# 6. Mixed case: Some resources have clouds and regions or zones, others do
@@ -219,11 +209,11 @@ def _could_host_controllers(cloud: sky.clouds.Cloud) -> bool:
sky.Resources(cloud=sky.AWS(), region='ap-south-1'),
sky.Resources(cloud=sky.Azure()),
])
- expected_combinations = {
- ('AWS', 'eu-north-1', None),
- ('AWS', 'ap-south-1', None),
- ('GCP', None, None),
- ('Azure', None, None),
+ expected_infra_set = {
+ 'aws/eu-north-1',
+ 'aws/ap-south-1',
+ 'gcp',
+ 'azure',
}
- _check_controller_resources(controller_resources, expected_combinations,
+ _check_controller_resources(controller_resources, expected_infra_set,
default_controller_resources)
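
The reworked `_check_controller_resources` helper above relies on `Resources.to_yaml_config()` now emitting a single `infra` key instead of separate `cloud`/`region`/`zone` entries. A small sketch of that round trip, assuming the post-patch serialization exercised by these tests:

```python
from sky.resources import Resources

r = Resources(infra='aws/us-west-2/us-west-2b')
cfg = r.to_yaml_config()
# cloud/region/zone collapse into one entry (assumed post-patch behavior).
assert cfg['infra'] == 'aws/us-west-2/us-west-2b'

# from_yaml_config() accepts the same key and restores the placement.
loaded = list(Resources.from_yaml_config(cfg))[0]
assert (loaded.cloud, loaded.region, loaded.zone) == (r.cloud, r.region, r.zone)
```
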
diff --git a/tests/unit_tests/test_resources.py b/tests/unit_tests/test_resources.py
index b79211c9a9f..4c2a0bc5039 100644
--- a/tests/unit_tests/test_resources.py
+++ b/tests/unit_tests/test_resources.py
@@ -50,8 +50,9 @@ def _run_label_test(allowed_labels: Dict[str, str],
r = Resources(cloud=cloud, labels=l)
with pytest.raises(ValueError):
r.validate()
- assert False, (f'Resources were initialized with '
- f'invalid label {invalid_label}={value}')
+ assert False, (f'Resources {r.to_yaml_config()} were initialized '
+ f'with invalid label {invalid_label}={value} but no '
+ 'error was raised.')
def test_gcp_labels_resources():
@@ -208,3 +209,283 @@ def test_aws_make_deploy_variables(*mocks) -> None:
dryrun=True)
assert config == expected_config, ('unexpected resource '
'variables generated')
+
+
[email protected](['resources_kwargs', 'expected_yaml_config'], [
+ ({
+ 'infra': '*/*/us-east-1b',
+ 'accelerators': 'A10'
+ }, {
+ 'infra': '*/*/us-east-1b',
+ 'accelerators': {
+ 'A10': 1
+ },
+ 'disk_size': 256,
+ }),
+ ({
+ 'infra': 'gcp/*/us-east1-b',
+ 'accelerators': 'A10:8',
+ 'labels': {
+ 'key': 'value'
+ }
+ }, {
+ 'infra': 'gcp/*/us-east1-b',
+ 'accelerators': {
+ 'A10': 8
+ },
+ 'labels': {
+ 'key': 'value'
+ },
+ 'disk_size': 256,
+ }),
+])
+def test_to_yaml_and_load(resources_kwargs, expected_yaml_config):
+ r = Resources(**resources_kwargs)
+ yaml_config = r.to_yaml_config()
+ assert yaml_config == expected_yaml_config
+
+ loaded_r = list(Resources.from_yaml_config(yaml_config))[0]
+ assert loaded_r.cloud == r.cloud
+ assert loaded_r.region == r.region
+ assert loaded_r.zone == r.zone
+ original_accelerators = r.accelerators
+ assert loaded_r.accelerators == original_accelerators
+ assert original_accelerators == r.accelerators
+ assert loaded_r.labels == r.labels
+
+
+def test_resources_any_of():
+ """Test Resources creation with any_of option."""
+ # Test any_of with different resources options
+ config = {
+ 'any_of': [
+ {
+ 'cpus': 8,
+ 'memory': 16
+ },
+ {
+ 'cpus': 4,
+ 'memory': 32
+ },
+ {
+ 'accelerators': 'V100:1'
+ },
+ ]
+ }
+ resources_set = Resources.from_yaml_config(config)
+
+ # Verify it returns a set of resources
+ assert isinstance(resources_set, set)
+ assert len(resources_set) == 3
+
+ # Validate the resources options are correctly created
+ resources_list = list(resources_set)
+
+ # Find resources by properties (order may not be preserved)
+ r_cpus8 = next((r for r in resources_list if r.cpus == '8'), None)
+ r_cpus4 = next((r for r in resources_list if r.cpus == '4'), None)
+ r_gpu = next((r for r in resources_list if r.accelerators is not None),
+ None)
+
+ assert r_cpus8 is not None
+ assert r_cpus8.memory == '16'
+
+ assert r_cpus4 is not None
+ assert r_cpus4.memory == '32'
+
+ assert r_gpu is not None
+ assert r_gpu.accelerators == {'V100': 1}
+
+
+def test_resources_ordered():
+ """Test Resources creation with ordered option."""
+ # Test ordered with different resources options
+ config = {
+ 'ordered': [
+ {
+ 'infra': 'gcp',
+ 'accelerators': 'A100:8'
+ },
+ {
+ 'infra': 'aws',
+ 'accelerators': 'V100:8'
+ },
+ {
+ 'accelerators': 'T4:8'
+ },
+ ]
+ }
+ resources_list = Resources.from_yaml_config(config)
+
+ # Verify it returns a list of resources
+ assert isinstance(resources_list, list)
+ assert len(resources_list) == 3
+
+ # Ordered resources should preserve order
+ assert resources_list[0].infra.cloud.lower() == 'gcp'
+ assert resources_list[0].accelerators == {'A100': 8}
+
+ assert resources_list[1].infra.cloud.lower() == 'aws'
+ assert resources_list[1].accelerators == {'V100': 8}
+
+ assert resources_list[2].accelerators == {'T4': 8}
+
+
+def test_resources_any_of_spot_flag():
+ """Test Resources with any_of option including spot flag variations."""
+ config = {
+ 'accelerators': 'A100:8',
+ 'any_of': [{
+ 'use_spot': True
+ }, {
+ 'use_spot': False
+ }]
+ }
+ resources_set = Resources.from_yaml_config(config)
+
+ # Verify it returns a set of resources
+ assert isinstance(resources_set, set)
+ assert len(resources_set) == 2
+
+ # Find spot and on-demand resources
+ resources_list = list(resources_set)
+ r_spot = next((r for r in resources_list if r.use_spot), None)
+ r_ondemand = next((r for r in resources_list if not r.use_spot), None)
+
+ assert r_spot is not None
+ assert r_spot.accelerators == {'A100': 8}
+ assert r_spot.use_spot is True
+
+ assert r_ondemand is not None
+ assert r_ondemand.accelerators == {'A100': 8}
+ assert r_ondemand.use_spot is False
+
+
+def test_resources_ordered_preference():
+ """Test Resources creation with ordered preference correctly preserves order."""
+ config = {
+ 'ordered': [
+ {
+ 'infra': 'aws/us-east-1',
+ 'accelerators': 'A100:8'
+ },
+ {
+ 'infra': 'gcp/us-central1',
+ 'accelerators': 'A100:8'
+ },
+ {
+ 'infra': 'azure/eastus',
+ 'accelerators': 'A100:8'
+ },
+ ]
+ }
+ resources_list = Resources.from_yaml_config(config)
+
+ # Verify order matches the input order
+ assert resources_list[0].infra.cloud.lower() == 'aws'
+ assert resources_list[0].infra.region == 'us-east-1'
+
+ assert resources_list[1].infra.cloud.lower() == 'gcp'
+ assert resources_list[1].infra.region == 'us-central1'
+
+ assert resources_list[2].infra.cloud.lower() == 'azure'
+ assert resources_list[2].infra.region == 'eastus'
+
+
+def test_resources_any_of_ordered_exclusive():
+ """Test that Resources raises ValueError if both any_of and ordered are specified."""
+ config = {'any_of': [{'cpus': 8}], 'ordered': [{'cpus': 4}]}
+
+ # Should raise ValueError because both any_of and ordered are specified
+ with pytest.raises(ValueError,
+ match='Cannot specify both "any_of" and "ordered"'):
+ Resources.from_yaml_config(config)
+
+
+def test_resources_any_of_with_base_infra():
+ """Test Resources creation with any_of option and base infra."""
+ # Test any_of with base infra and additional infra specifications
+ config = {
+ 'infra': 'aws', # Base infra
+ 'cpus': 8,
+ 'any_of': [
+ {
+ 'infra': 'aws/us-east-1'
+ }, # Override with specific region
+ {
+ 'infra': 'aws/us-west-2'
+ }, # Different region
+ {
+ 'infra': 'gcp/us-central1'
+ }, # Different cloud
+ ]
+ }
+ resources_set = Resources.from_yaml_config(config)
+
+ # Verify it returns a set of resources
+ assert isinstance(resources_set, set)
+ assert len(resources_set) == 3
+
+ # Validate the resources are correctly created with proper infra
+ resources_list = list(resources_set)
+
+ # All resources should have cpus=8 from the base config
+ for r in resources_list:
+ assert r.cpus == '8'
+
+ # Find resources by infra properties
+ r_east = next((r for r in resources_list if r.infra.region == 'us-east-1'),
+ None)
+ r_west = next((r for r in resources_list if r.infra.region == 'us-west-2'),
+ None)
+ r_gcp = next((r for r in resources_list if r.infra.cloud.lower() == 'gcp'),
+ None)
+
+ assert r_east is not None
+ assert str(r_east.cloud).lower() == 'aws'
+
+ assert r_west is not None
+ assert str(r_west.cloud).lower() == 'aws'
+
+ assert r_gcp is not None
+ assert r_gcp.infra.region == 'us-central1'
+
+
+def test_resources_ordered_with_base_infra():
+ """Test Resources creation with ordered option and base infra."""
+ # Test ordered with base infra and additional infra specifications
+ config = {
+ 'infra': 'azure', # Base infra
+ 'accelerators': 'A100:8', # Base accelerator
+ 'ordered': [
+ {
+ 'infra': 'gcp/us-central1'
+ }, # Different cloud with a pinned region
+ {
+ 'infra': 'aws/us-east-1'
+ }, # Different cloud
+ {
+ 'accelerators': 'T4:8'
+ }, # No infra given; falls back to the base infra (azure)
+ ]
+ }
+ resources_list = Resources.from_yaml_config(config)
+
+ # Verify it returns a list of resources with right length
+ assert isinstance(resources_list, list)
+ assert len(resources_list) == 3
+
+ # All resources should have A100:8 from the base config
+ assert resources_list[0].accelerators == {'A100': 8}
+ assert resources_list[1].accelerators == {'A100': 8}
+ assert resources_list[2].accelerators == {'T4': 8}
+
+ # Ordered resources should preserve order and have correct infra
+ assert str(resources_list[0].cloud).lower() == 'gcp'
+ assert resources_list[0].region == 'us-central1'
+
+ assert str(resources_list[1].cloud).lower() == 'aws'
+ assert resources_list[1].region == 'us-east-1'
+
+ assert str(resources_list[2].cloud).lower() == 'azure'
+ assert resources_list[2].region is None
diff --git a/tests/unit_tests/test_sky/utils/test_cli_utils.py b/tests/unit_tests/test_sky/utils/test_cli_utils.py
new file mode 100644
index 00000000000..6893f624dfd
--- /dev/null
+++ b/tests/unit_tests/test_sky/utils/test_cli_utils.py
@@ -0,0 +1,378 @@
+"""Tests for CLI utilities.
+
+This module contains tests for the CLI utilities in sky.utils.cli_utils.
+"""
+import time
+
+import pytest
+
+import sky
+from sky import backends
+from sky.resources import Resources
+from sky.utils import status_lib
+from sky.utils.cli_utils import status_utils
+
+
+def test_status_table_format():
+ """Test the status table format."""
+ # Test AWS case
+ mock_resources = Resources(infra='aws/us-east-1',
+ instance_type='m6i.2xlarge')
+ mock_handle = backends.CloudVmRayResourceHandle(
+ cluster_name='test-cluster',
+ cluster_name_on_cloud='test-cluster-cloud',
+ cluster_yaml=None,
+ launched_nodes=1,
+ launched_resources=mock_resources)
+ mock_record = {
+ 'name': 'test-cluster',
+ 'handle': mock_handle,
+ 'launched_at': int(time.time()) - 3600, # 1 hour ago
+ 'status': status_lib.ClusterStatus.UP,
+ 'autostop': 300, # 300 minutes
+ 'to_down': False,
+ }
+
+ # Test the infra format
+ infra_str = status_utils._get_infra(mock_record)
+ assert infra_str == 'AWS (us-east-1)'
+
+ # Test the resources format
+ resources_str = status_utils._get_resources(mock_record)
+ assert resources_str == '1x(cpus=8, mem=32, m6i.2xlarge, ...)'
+
+ # Test Kubernetes case
+ mock_k8s_resources = Resources(infra='k8s/my-ctx', cpus='2+', memory='4+')
+ mock_k8s_handle = backends.CloudVmRayResourceHandle(
+ cluster_name='test-k8s-cluster',
+ cluster_name_on_cloud='test-k8s-cluster-cloud',
+ cluster_yaml=None,
+ launched_nodes=2,
+ launched_resources=mock_k8s_resources)
+ mock_k8s_record = {
+ 'name': 'test-k8s-cluster',
+ 'handle': mock_k8s_handle,
+ 'launched_at': int(time.time()) - 3600, # 1 hour ago
+ 'status': status_lib.ClusterStatus.UP,
+ 'autostop': -1, # No autostop
+ 'to_down': False,
+ 'resources_str': '2x (...)',
+ }
+
+ # Test K8S infra format
+ k8s_infra_str = status_utils._get_infra(mock_k8s_record)
+ assert k8s_infra_str == 'Kubernetes (my-ctx)'
+
+ # Test K8S resources format
+ k8s_resources_str = status_utils._get_resources(mock_k8s_record)
+ assert k8s_resources_str == '2x (...)'
+
+ # For test purposes, override _get_resources to avoid trying to call
+ # resources_utils.get_readable_resources_repr on a Resources object
+ orig_get_resources = status_utils._get_resources
+
+ def mock_get_resources(cluster_record, truncate=True):
+ return cluster_record.get('resources_str', '-')
+
+ status_utils._get_resources = mock_get_resources
+
+ try:
+ # Test SSH case
+ mock_ssh_handle = backends.CloudVmRayResourceHandle(
+ cluster_name='test-ssh-cluster',
+ cluster_name_on_cloud='test-ssh-cluster-cloud',
+ cluster_yaml=None,
+ launched_nodes=1,
+ launched_resources=None)
+ mock_ssh_record = {
+ 'name': 'test-ssh-cluster',
+ 'handle': mock_ssh_handle,
+ 'launched_at': int(time.time()) - 3600, # 1 hour ago
+ 'status': status_lib.ClusterStatus.UP,
+ 'autostop': -1, # No autostop
+ 'to_down': False,
+ 'resources_str': '1x (...)',
+ 'infra': 'SSH/my-tobi-box',
+ }
+
+ # Test SSH infra format
+ ssh_infra_str = status_utils._get_infra(mock_ssh_record)
+ assert ssh_infra_str == 'SSH/my-tobi-box'
+
+ # Test SSH resources format
+ ssh_resources_str = status_utils._get_resources(mock_ssh_record)
+ assert ssh_resources_str == '1x (...)'
+ finally:
+ # Restore original function
+ status_utils._get_resources = orig_get_resources
+
+
+def test_show_status_table():
+ """Test the full status table output."""
+ mock_resources = Resources(infra='aws/us-east-1',
+ instance_type='m6i.2xlarge')
+ mock_handle = backends.CloudVmRayResourceHandle(
+ cluster_name='test-cluster',
+ cluster_name_on_cloud='test-cluster-cloud',
+ cluster_yaml=None,
+ launched_nodes=1,
+ launched_resources=mock_resources)
+
+ # Test different cluster statuses
+ statuses = [
+ status_lib.ClusterStatus.UP,
+ status_lib.ClusterStatus.INIT,
+ status_lib.ClusterStatus.STOPPED,
+ ]
+
+ for status in statuses:
+ mock_record = {
+ 'name': 'test-cluster',
+ 'handle': mock_handle,
+ 'launched_at': int(time.time()) - 3600, # 1 hour ago
+ 'status': status,
+ 'autostop': 300, # 300 minutes
+ 'to_down': False,
+ 'last_use': 'sky launch test.yaml',
+ 'user_name': 'test_user',
+ 'user_hash': 'abc123',
+ 'head_ip': '1.2.3.4',
+ 'resources_str': '1x(cpus=8, mem=32, m6i.2xlarge, ...)',
+ 'resources_str_full': ('1x(cpus=8, mem=32, m6i.2xlarge, '
+ 'disk=50)'),
+ }
+
+ # Test basic table
+ num_pending = status_utils.show_status_table([mock_record],
+ show_all=False,
+ show_user=False)
+ assert num_pending == (1 if status != status_lib.ClusterStatus.STOPPED
+ else 0)
+
+ # Test with user info
+ num_pending = status_utils.show_status_table([mock_record],
+ show_all=False,
+ show_user=True)
+ assert num_pending == (1 if status != status_lib.ClusterStatus.STOPPED
+ else 0)
+
+ # Test with show_all
+ num_pending = status_utils.show_status_table([mock_record],
+ show_all=True,
+ show_user=True)
+ assert num_pending == (1 if status != status_lib.ClusterStatus.STOPPED
+ else 0)
+
+ # Test with query_clusters
+ num_pending = status_utils.show_status_table(
+ [mock_record],
+ show_all=False,
+ show_user=False,
+ query_clusters=['test-cluster'])
+ assert num_pending == (1 if status != status_lib.ClusterStatus.STOPPED
+ else 0)
+
+ # Test with non-existent query_clusters
+ num_pending = status_utils.show_status_table(
+ [mock_record],
+ show_all=False,
+ show_user=False,
+ query_clusters=['non-existent'])
+ assert num_pending == (1 if status != status_lib.ClusterStatus.STOPPED
+ else 0)
+
+
+def test_get_command():
+ """Test command display in status table."""
+ mock_record = {
+ 'last_use': 'sky launch test.yaml --env FOO=bar',
+ }
+
+ # Test normal command
+ cmd_str = status_utils._get_command(mock_record)
+ assert cmd_str == 'sky launch test.yaml --env...'
+
+ # Test command without truncation
+ cmd_str = status_utils._get_command(mock_record, truncate=False)
+ assert cmd_str == 'sky launch test.yaml --env FOO=bar'
+
+ # Test short command
+ mock_record['last_use'] = 'sky status'
+ cmd_str = status_utils._get_command(mock_record)
+ assert cmd_str == 'sky status'
+
+
+def test_get_autostop():
+ """Test autostop display in status table."""
+ mock_record = {
+ 'autostop': 300, # 300 minutes
+ 'to_down': False,
+ }
+
+ # Test normal autostop
+ autostop_str = status_utils._get_autostop(mock_record)
+ assert autostop_str == '300m'
+
+ # Test autostop with to_down
+ mock_record['to_down'] = True
+ autostop_str = status_utils._get_autostop(mock_record)
+ assert autostop_str == '300m (down)'
+
+ # Test no autostop
+ mock_record['autostop'] = -1
+ autostop_str = status_utils._get_autostop(mock_record)
+ assert autostop_str == '(down)'
+
+ # Test no autostop and no to_down
+ mock_record['to_down'] = False
+ autostop_str = status_utils._get_autostop(mock_record)
+ assert autostop_str == '-'
+
+
+def test_get_resources():
+ """Test resources display in status table."""
+ mock_resources = Resources(infra='aws/us-east-1',
+ instance_type='m6i.2xlarge')
+ mock_handle = backends.CloudVmRayResourceHandle(
+ cluster_name='test-cluster',
+ cluster_name_on_cloud='test-cluster-cloud',
+ cluster_yaml=None,
+ launched_nodes=1,
+ launched_resources=mock_resources)
+ mock_record = {
+ 'handle': mock_handle,
+ 'resources_str': '1x(cpus=8, mem=32, m6i.2xlarge, ...)',
+ 'resources_str_full': '1x(cpus=8, mem=32, m6i.2xlarge, disk=50)',
+ }
+
+ # Test normal resources
+ resources_str = status_utils._get_resources(mock_record)
+ assert resources_str == '1x(cpus=8, mem=32, m6i.2xlarge, ...)'
+
+ # Test full resources
+ resources_str = status_utils._get_resources(mock_record, truncate=False)
+ assert resources_str == '1x(cpus=8, mem=32, m6i.2xlarge, disk=50)'
+
+ # Test no resources
+ mock_record['handle'].launched_resources = None
+ resources_str = status_utils._get_resources(mock_record)
+ assert resources_str == '-'
+
+
+def test_get_resources_gpu():
+ """Test resources display for clusters with GPUs."""
+ # Test AWS with GPU resources
+ mock_resources_aws_gpu = Resources(infra='aws/us-east-1',
+ instance_type='p3.2xlarge',
+ accelerators='V100')
+ mock_handle_aws_gpu = backends.CloudVmRayResourceHandle(
+ cluster_name='test-gpu-cluster',
+ cluster_name_on_cloud='test-gpu-cluster-cloud',
+ cluster_yaml=None,
+ launched_nodes=1,
+ launched_resources=mock_resources_aws_gpu)
+ mock_record_aws_gpu = {
+ 'handle': mock_handle_aws_gpu,
+ 'resources_str': '1x(V100:1, cpus=8, mem=61, ...)',
+ 'resources_str_full': '1x(V100:1, cpus=8, mem=61, disk=50)',
+ }
+
+ # Test GPU resources
+ resources_str = status_utils._get_resources(mock_record_aws_gpu)
+ assert resources_str == '1x(V100:1, cpus=8, mem=61, ...)'
+
+ # Test full GPU resources
+ resources_str = status_utils._get_resources(mock_record_aws_gpu,
+ truncate=False)
+ assert resources_str == '1x(V100:1, cpus=8, mem=61, disk=50)'
+
+ # Test GCP with multiple GPUs
+ mock_resources_gcp_multi_gpu = Resources(infra='gcp/us-central1',
+ instance_type='a2-highgpu-4g',
+ accelerators='A100:4')
+ mock_handle_gcp_multi_gpu = backends.CloudVmRayResourceHandle(
+ cluster_name='test-gcp-multi-gpu',
+ cluster_name_on_cloud='test-gcp-multi-gpu-cloud',
+ cluster_yaml=None,
+ launched_nodes=2,
+ launched_resources=mock_resources_gcp_multi_gpu)
+ mock_record_gcp_multi_gpu = {
+ 'handle': mock_handle_gcp_multi_gpu,
+ 'resources_str': '2x(gpus=A100:4, cpus=12, mem=85, ...)',
+ 'resources_str_full': '2x(gpus=A100:4, cpus=12, mem=85, disk=50)',
+ }
+
+ # Test multiple GPU resources
+ resources_str = status_utils._get_resources(mock_record_gcp_multi_gpu)
+ assert resources_str == '2x(gpus=A100:4, cpus=12, mem=85, ...)'
+
+
+def test_get_resources_kubernetes():
+ """Test resources display for Kubernetes clusters."""
+ # Test Kubernetes with CPU resources
+ mock_resources_k8s_cpu = Resources(infra='k8s/my-cluster-ctx',
+ cpus=4,
+ memory=16)
+ mock_handle_k8s_cpu = backends.CloudVmRayResourceHandle(
+ cluster_name='test-k8s-cluster',
+ cluster_name_on_cloud='test-k8s-cluster-cloud',
+ cluster_yaml=None,
+ launched_nodes=1,
+ launched_resources=mock_resources_k8s_cpu)
+ mock_record_k8s_cpu = {
+ 'handle': mock_handle_k8s_cpu,
+ 'resources_str': '1x(cpus=4, mem=16, ...)',
+ 'resources_str_full': '1x(cpus=4, mem=16, disk=50)',
+ }
+
+ # Test K8s CPU resources
+ resources_str = status_utils._get_resources(mock_record_k8s_cpu)
+ assert resources_str == '1x(cpus=4, mem=16, ...)'
+
+ # Test Kubernetes with GPU resources
+ mock_resources_k8s_gpu = Resources(infra='k8s/gpu-cluster-ctx',
+ cpus=8,
+ memory=32,
+ accelerators='A100:2')
+ mock_handle_k8s_gpu = backends.CloudVmRayResourceHandle(
+ cluster_name='test-k8s-gpu-cluster',
+ cluster_name_on_cloud='test-k8s-gpu-cluster-cloud',
+ cluster_yaml=None,
+ launched_nodes=2,
+ launched_resources=mock_resources_k8s_gpu)
+ mock_record_k8s_gpu = {
+ 'handle': mock_handle_k8s_gpu,
+ 'resources_str': '2x(gpus=A100:2, cpus=8, mem=32, ...)',
+ 'resources_str_full': '2x(gpus=A100:2, cpus=8, mem=32, disk=50)',
+ }
+
+ # Test K8s GPU resources
+ resources_str = status_utils._get_resources(mock_record_k8s_gpu)
+ assert resources_str == '2x(gpus=A100:2, cpus=8, mem=32, ...)'
+
+ # Test full K8s GPU resources
+ resources_str = status_utils._get_resources(mock_record_k8s_gpu,
+ truncate=False)
+ assert resources_str == '2x(gpus=A100:2, cpus=8, mem=32, disk=50)'
+
+ # Test K8s with TPU resources
+ mock_resources_k8s_tpu = Resources(infra='k8s/gke-tpu-cluster',
+ cpus=8,
+ memory=32,
+ accelerators='tpu-v4-8')
+ mock_handle_k8s_tpu = backends.CloudVmRayResourceHandle(
+ cluster_name='test-k8s-tpu-cluster',
+ cluster_name_on_cloud='test-k8s-tpu-cluster-cloud',
+ cluster_yaml=None,
+ launched_nodes=1,
+ launched_resources=mock_resources_k8s_tpu)
+ mock_record_k8s_tpu = {
+ 'handle': mock_handle_k8s_tpu,
+ 'resources_str': '1x(gpus=tpu-v4-8:1, cpus=8, mem=32, ...)',
+ 'resources_str_full': ('1x(gpus=tpu-v4-8:1, cpus=8, mem=32, '
+ 'disk=50)'),
+ }
+
+ # Test K8s TPU resources
+ resources_str = status_utils._get_resources(mock_record_k8s_tpu)
+ assert resources_str == '1x(gpus=tpu-v4-8:1, cpus=8, mem=32, ...)'
diff --git a/tests/unit_tests/test_common_utils.py b/tests/unit_tests/test_sky/utils/test_common_utils.py
similarity index 79%
rename from tests/unit_tests/test_common_utils.py
rename to tests/unit_tests/test_sky/utils/test_common_utils.py
index f8a0a17498a..d42d908c579 100644
--- a/tests/unit_tests/test_common_utils.py
+++ b/tests/unit_tests/test_sky/utils/test_common_utils.py
@@ -9,6 +9,64 @@
MOCKED_USER_HASH = 'ab12cd34'
+class TestTruncateLongString:
+
+ def test_no_truncation_needed(self):
+ s = "short string"
+ result = common_utils.truncate_long_string(s, 15)
+ assert result == s
+
+ def test_end_truncation(self):
+ s = "this is a very long string that needs truncation"
+ result = common_utils.truncate_long_string(s, 20)
+ assert len(result) <= 20 + 3 # +3 for '...'
+ assert result.endswith('...')
+ assert result.startswith('this is a very')
+
+ def test_middle_truncation(self):
+ s = "us-west-2-availability-zone-1"
+ result = common_utils.truncate_long_string(s, 20, truncate_middle=True)
+ assert len(result) <= 20
+ assert '...' in result
+ assert result.startswith('us-west')
+ assert result.endswith('zone-1')
+
+ def test_middle_truncation_odd_length(self):
+ s = "us-west-2-availability-zone-1"
+ result = common_utils.truncate_long_string(s, 15, truncate_middle=True)
+ assert len(result) <= 15
+ assert '...' in result
+ assert result.startswith('us-w')
+ assert result.endswith('ne-1')
+
+ def test_middle_truncation_very_short(self):
+ s = "us-west-2-availability-zone-1"
+ result = common_utils.truncate_long_string(s, 3, truncate_middle=True)
+ assert result == '...'
+
+ def test_empty_string(self):
+ assert common_utils.truncate_long_string('', 10) == ''
+
+ def test_exact_length_no_truncation(self):
+ assert common_utils.truncate_long_string(
+ 'abcde', 5, truncate_middle=True) == 'abcde'
+
+ def test_one_less_than_length(self):
+ assert common_utils.truncate_long_string('abcde',
+ 4,
+ truncate_middle=True) == 'a...'
+
+ def test_middle_truncation_even_length(self):
+ assert common_utils.truncate_long_string(
+ 'abcdefghijklmnopqrstuvwxyz', 10,
+ truncate_middle=True) == 'abcd...xyz'
+
+ def test_middle_truncation_odd_max_length(self):
+ assert common_utils.truncate_long_string(
+ 'abcdefghijklmnopqrstuvwxyz', 11,
+ truncate_middle=True) == 'abcd...wxyz'
+
+
class TestCheckClusterNameIsValid:
def test_check(self):
diff --git a/tests/unit_tests/test_sky/utils/test_infra_utils.py b/tests/unit_tests/test_sky/utils/test_infra_utils.py
new file mode 100644
index 00000000000..c164a24ddfc
--- /dev/null
+++ b/tests/unit_tests/test_sky/utils/test_infra_utils.py
@@ -0,0 +1,163 @@
+"""Tests for infra_utils.py"""
+import unittest
+
+from sky.utils import infra_utils
+
+
+class TestInfraUtils(unittest.TestCase):
+ """Tests for infra_utils.py"""
+
+ def test_from_str(self):
+ """Test the from_str function with various inputs."""
+ test_cases = [
+ # Format: (infra_str, expected_cloud, expected_region, expected_zone)
+ ('aws/us-east-1', 'aws', 'us-east-1', None),
+ ('aws/us-east-1/us-east-1a', 'aws', 'us-east-1', 'us-east-1a'),
+ ('gcp/us-central1', 'gcp', 'us-central1', None),
+ ('k8s/my-cluster-ctx', 'kubernetes', 'my-cluster-ctx', None),
+ ('kubernetes/my-cluster-ctx', 'kubernetes', 'my-cluster-ctx', None),
+ # Test Kubernetes context with slashes
+ ('k8s/my/cluster/ctx', 'kubernetes', 'my/cluster/ctx', None),
+ # Test AWS with empty zone
+ ('aws/us-east-1/', 'aws', 'us-east-1', None),
+ # Test with just cloud
+ ('aws', 'aws', None, None),
+ # Test with asterisk
+ ('*/us-east-1', None, 'us-east-1', None),
+ ('aws/*/us-east-1a', 'aws', None, 'us-east-1a'),
+ ('aws/*', 'aws', None, None),
+ ('*/*/us-east-1a', None, None, 'us-east-1a'),
+ (None, None, None, None),
+ ('*', None, None, None),
+ # Test case sensitivity
+ ('AWS/US-EAST-1', 'aws', 'US-EAST-1', None),
+ ('GCP/US-CENTRAL1', 'gcp', 'US-CENTRAL1', None),
+ ('K8S/MY-CLUSTER', 'kubernetes', 'MY-CLUSTER', None),
+ # Test whitespace handling
+ (' aws/us-east-1 ', 'aws', 'us-east-1', None),
+ (' aws / us-east-1 / us-east-1a ', 'aws', 'us-east-1',
+ 'us-east-1a'),
+ # Test local and lambda clouds
+ ('local', 'local', None, None),
+ ('lambda', 'lambda', None, None),
+ ]
+
+ for infra_str, expected_cloud, expected_region, expected_zone in test_cases:
+ info = infra_utils.InfraInfo.from_str(infra_str)
+ cloud_str = info.cloud
+
+ self.assertEqual(
+ cloud_str, expected_cloud,
+ f'Failed on {infra_str}: Expected cloud={expected_cloud}, got {cloud_str}'
+ )
+ self.assertEqual(
+ info.region, expected_region,
+ f'Failed on {infra_str}: Expected region={expected_region}, got {info.region}'
+ )
+ self.assertEqual(
+ info.zone, expected_zone,
+ f'Failed on {infra_str}: Expected zone={expected_zone}, got {info.zone}'
+ )
+
+ def test_from_str_errors(self):
+ """Test the from_str function with invalid inputs."""
+ error_test_cases = [
+ # Too many segments
+ 'aws/us-east-1/us-east-1a/extra',
+ # Invalid format
+ 'aws//us-east-1',
+ # Just slashes
+ '///',
+ # Multiple consecutive slashes
+ 'aws///us-east-1',
+ ]
+
+ for infra_str in error_test_cases:
+ with self.assertRaises((ValueError, TypeError),
+ msg=f'Expected error for {infra_str!r}'):
+ infra_utils.InfraInfo.from_str(infra_str)
+
+ def test_to_str(self):
+ """Test the to_str function with various inputs."""
+ test_cases = [
+ # Format: (cloud, region, zone, expected)
+ ('aws', 'us-east-1', None, 'aws/us-east-1'),
+ ('aws', 'us-east-1', 'us-east-1a', 'aws/us-east-1/us-east-1a'),
+ ('gcp', 'us-central1', None, 'gcp/us-central1'),
+ ('kubernetes', 'my-cluster-ctx', None, 'kubernetes/my-cluster-ctx'),
+ # Test with slashes in Kubernetes context
+ ('kubernetes', 'my/cluster/ctx', None, 'kubernetes/my/cluster/ctx'),
+ # Test with zone in Kubernetes
+ ('kubernetes', 'my-cluster-ctx', 'some-zone',
+ 'kubernetes/my-cluster-ctx/some-zone'),
+ # Test with just cloud
+ ('aws', None, None, 'aws'),
+ # Test with None cloud
+ (None, 'us-east-1', None, '*/us-east-1'),
+ # Additional test cases for simplified implementation
+ ('aws', '*', '*', 'aws'),
+ ('gcp', 'us-central1', '*', 'gcp/us-central1'),
+ ('aws', '*', 'us-east-1a', 'aws/*/us-east-1a'),
+ (None, None, None, None),
+ ('*', '*', '*', None),
+ ('*', 'us-east-1', None, '*/us-east-1'),
+ # Test case sensitivity preservation
+ ('aws', 'US-EAST-1', 'US-EAST-1A', 'aws/US-EAST-1/US-EAST-1A'),
+ # Test local and lambda clouds
+ ('local', None, None, 'local'),
+ ('lambda', 'region-name', None, 'lambda/region-name'),
+ ]
+
+ for cloud, region, zone, expected in test_cases:
+ result = infra_utils.InfraInfo(cloud, region, zone).to_str()
+ self.assertEqual(result, expected,
+ f'Failed: Expected {expected}, got {result}')
+
+ def test_formatted_str(self):
+ """Test the formatted_str function with various inputs."""
+ test_cases = [
+ # Format: (cloud, region, zone, truncate, expected)
+ ('aws', 'us-east-1', None, True, 'aws (us-east-1)'),
+ ('aws', 'us-east-1', 'us-east-1a', True, 'aws (us-east-1a)'),
+ ('gcp', 'us-central1', None, True, 'gcp (us-central1)'),
+ ('kubernetes', 'my-cluster-ctx', None, True,
+ 'kubernetes (my-cluster-ctx)'),
+ # Test with slashes in Kubernetes context
+ ('kubernetes', 'my/cluster/ctx', None, True,
+ 'kubernetes (my/cluster/ctx)'),
+ # Test with just cloud
+ ('aws', None, None, True, 'aws'),
+ # Test with None cloud
+ (None, 'us-east-1', None, True, '-'),
+ # Test with long region/zone (truncation)
+ ('aws', 'us-east-1-very-long-region', None, True,
+ 'aws (us-east-1-v...long-region)'),
+ ('aws', 'us-east-1-very-very-very-long-region', None, True,
+ 'aws (us-east-1-v...long-region)'),
+ ('aws', 'us-east-1-very-long-region', None, False,
+ 'aws (us-east-1-very-long-region)'),
+ # Test with asterisk
+ ('*', '*', '*', True, '-'),
+ ('aws', '*', '*', True, 'aws'),
+ ('aws', '*', 'us-east-1a', True, 'aws (us-east-1a)'),
+ ('*', 'us-east-1', None, True, '-'),
+ # Test truncation boundary cases
+ ('aws', 'x' * 25, None, True, 'aws (' + 'x' * 25 + ')'),
+ ('aws', 'x' * 26, None, True,
+ 'aws (' + 'x' * 11 + '...' + 'x' * 11 + ')'),
+ ('aws', 'x' * 24, None, True, 'aws (' + 'x' * 24 + ')'),
+ # Test with empty strings
+ ('aws', '', None, True, 'aws'),
+ ('aws', '', '', True, 'aws'),
+ # Test local and lambda clouds
+ ('local', None, None, True, 'local'),
+ ('lambda', 'region-name', None, True, 'lambda (region-name)'),
+ ]
+
+ for cloud, region, zone, truncate, expected in test_cases:
+ result = infra_utils.InfraInfo(
+ cloud, region, zone).formatted_str(truncate=truncate)
+ self.assertEqual(
+ result, expected, f'Failed: Expected {expected}, got {result}, '
+ f'cloud={cloud}, region={region}, zone={zone}, '
+ f'truncate={truncate}')
diff --git a/tests/unit_tests/test_sky/utils/test_schemas.py b/tests/unit_tests/test_sky/utils/test_schemas.py
new file mode 100644
index 00000000000..39c5bb4f2a1
--- /dev/null
+++ b/tests/unit_tests/test_sky/utils/test_schemas.py
@@ -0,0 +1,113 @@
+"""Tests for schemas.py"""
+import unittest
+
+import jsonschema
+
+from sky.utils import schemas
+
+
+class TestResourcesSchema(unittest.TestCase):
+ """Tests for the resources schema in schemas.py"""
+
+ def test_valid_infra_configs(self):
+ """Test validation of valid infra field configs."""
+ resources_schema = schemas.get_resources_schema()
+
+ # Valid infra configurations
+ valid_infra_configs = [
+ {
+ 'infra': 'aws'
+ },
+ {
+ 'infra': 'gcp'
+ },
+ {
+ 'infra': 'azure'
+ },
+ {
+ 'infra': 'kubernetes'
+ },
+ {
+ 'infra': 'aws/us-east-1'
+ },
+ {
+ 'infra': 'aws/us-east-1/us-east-1a'
+ },
+ {
+ 'infra': 'gcp/us-central1'
+ },
+ {
+ 'infra': 'k8s/my-cluster-ctx'
+ },
+ {
+ 'infra': 'kubernetes/my/complex/context/path'
+ },
+ {
+ 'infra': '*'
+ },
+ {
+ 'infra': '*/us-east-1'
+ },
+ {
+ 'infra': '*/us-east-1/us-east-1a'
+ },
+ {
+ 'infra': '*/*'
+ },
+ {
+ 'infra': '*/*/us-east-1a'
+ },
+ ]
+
+ for config in valid_infra_configs:
+ # Should not raise an exception
+ jsonschema.validate(instance=config, schema=resources_schema)
+
+ def test_invalid_infra_type(self):
+ """Test validation rejects invalid infra field types."""
+ resources_schema = schemas.get_resources_schema()
+
+ # Invalid infra configurations - wrong type
+ invalid_type_config = {'infra': 123} # Not a string
+ with self.assertRaises(jsonschema.exceptions.ValidationError):
+ jsonschema.validate(instance=invalid_type_config,
+ schema=resources_schema)
+
+ def test_invalid_infra_format(self):
+ """Test validation rejects invalid infra field formats."""
+ resources_schema = schemas.get_resources_schema()
+
+ # Invalid formats
+ invalid_formats = [
+ {
+ 'infra': 'aws/'
+ }, # Trailing slash without region
+ {
+ 'infra': 'aws//us-east-1a'
+ }, # Empty region
+ {
+ 'infra': '/us-east-1'
+ }, # Missing cloud
+ {
+ 'infra': 'aws/us-east-1/zone/extra'
+ }, # Too many segments
+ {
+ 'infra': 'invalid-cloud/us-east-1'
+ }, # Invalid cloud name
+ {
+ 'infra': 'invalid-cloud'
+ }, # Invalid cloud name without region
+ {
+ 'infra': '**/us-east-1'
+ }, # Multiple asterisks (invalid syntax)
+ ]
+
+ for config in invalid_formats:
+ with self.assertRaises(
+ jsonschema.exceptions.ValidationError,
+ msg=f"Expected '{config['infra']}' to be rejected"):
+ jsonschema.validate(instance=config, schema=resources_schema)
+
+
+if __name__ == "__main__":
+ unittest.main()
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
}
|
|
skypilot-org__skypilot-5435@50d9632
|
skypilot-org/skypilot
|
Python
| 5,435
|
Support launch controller and jobs on different cloud for smoke test
|
<!-- Describe the changes in this PR -->
Resolve #5234
Add one more pytest argument: `--controller-cloud`.
Override `jobs.controller.resources.cloud` in the SkyPilot config with this value when running tests.
<!-- Describe the tests ran -->
<!-- Unit tests (tests/test_*.py) are part of GitHub CI; below are tests that launch on the cloud. -->
Tested (run the relevant ones):
- [x] Code formatting: install pre-commit (auto-check on commit) or `bash format.sh`
- [x] Relevant individual tests: `/smoke-test --aws --controller-cloud gcp -k test_managed_jobs_basic` (CI)
<!-- CI commands (/-prefixed) can only be triggered by repo members -->
|
2025-04-29T08:02:49Z
|
Support launch controller and jobs on different cloud
We should support this from our codebase rather than from the Buildkite infra. For example:
`sky jobs launch --controller_cloud aws --cloud gcp`
We don’t necessarily need to support these exact parameters, but we should provide a way to configure the controller cloud and set it in the test case.
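For illustration only, the snippet below is a minimal, self-contained sketch of one way a test helper could do this: merge a controller-cloud override into whatever config file the test already points at, then redirect the config-path environment variable to a temporary file (the same env-var-based mechanism the smoke-test utilities use). The helper name is hypothetical, PyYAML is assumed to be available, and `config_env_var` stands in for whatever `skypilot_config.ENV_VAR_SKYPILOT_CONFIG` resolves to; this is not the actual implementation.

```python
import os
import tempfile

import yaml  # assumption: PyYAML is installed in the test environment


def write_controller_cloud_override(env: dict, cloud: str,
                                    config_env_var: str) -> str:
    """Point env[config_env_var] at a config that pins the jobs-controller cloud.

    `config_env_var` is whichever variable SkyPilot reads its config path from
    (e.g. skypilot_config.ENV_VAR_SKYPILOT_CONFIG). Returns the temp file path.
    """
    base = {}
    existing = env.get(config_env_var)
    if existing and os.path.isfile(existing):
        # Start from the config the test already uses, if any.
        with open(existing, 'r', encoding='utf-8') as f:
            base = yaml.safe_load(f) or {}
    # Equivalent to the dotted override jobs.controller.resources.cloud=<cloud>.
    base.setdefault('jobs', {}).setdefault('controller', {}).setdefault(
        'resources', {})['cloud'] = cloud
    tmp = tempfile.NamedTemporaryFile(mode='w', suffix='.yaml', delete=False)
    yaml.safe_dump(base, tmp)
    tmp.close()
    env[config_env_var] = tmp.name
    return tmp.name
```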
|
Actually, with #5197 we can use `--config jobs.controller.resources.cloud=aws`. But if the jobs controller is already spawned it won't do anything.
|
[
{
"body": "we should support it from our codebase instead of the buildkite infra. For example:\n\n`sky jobs launch --controller_cloud aws --cloud gcp`\n\nWe don’t necessarily need to support these exact parameters, but provide a way to configure the controller cloud and set this in the test case.",
"number": 5234,
"title": "Support launch controller and jobs on different cloud"
}
] |
7b804dafe2f6b775f8a357ac6e147b83e792af93
|
{
"head_commit": "50d96329de8bd979e5473ec3b9fc4b59be4b3c8b",
"head_commit_message": "fix",
"patch_to_review": "diff --git a/.buildkite/generate_pipeline.py b/.buildkite/generate_pipeline.py\nindex 3ea4d9eb27f..3cce4b17e35 100644\n--- a/.buildkite/generate_pipeline.py\n+++ b/.buildkite/generate_pipeline.py\n@@ -114,6 +114,8 @@ def _parse_args(args: Optional[str] = None):\n \n parser.add_argument('--base-branch')\n \n+ parser.add_argument('--controller-cloud')\n+\n parsed_args, _ = parser.parse_known_args(args_list)\n \n # Collect chosen clouds from the flags\n@@ -142,6 +144,8 @@ def _parse_args(args: Optional[str] = None):\n extra_args.append('--remote-server')\n if parsed_args.base_branch:\n extra_args.append(f'--base-branch {parsed_args.base_branch}')\n+ if parsed_args.controller_cloud:\n+ extra_args.append(f'--controller-cloud {parsed_args.controller_cloud}')\n \n return default_clouds_to_run, parsed_args.k, extra_args\n \ndiff --git a/tests/conftest.py b/tests/conftest.py\nindex 6c24bcb0919..79ad4518ca4 100644\n--- a/tests/conftest.py\n+++ b/tests/conftest.py\n@@ -140,6 +140,12 @@ def pytest_addoption(parser):\n default='master',\n help='Base branch to test backward compatibility against',\n )\n+ parser.addoption(\n+ '--controller-cloud',\n+ type=str,\n+ default=None,\n+ help='Controller cloud to use for tests',\n+ )\n \n \n def pytest_configure(config):\n@@ -437,3 +443,16 @@ def setup_docker_container(request):\n # Release the lock and close the file\n fcntl.flock(lock_fd, fcntl.LOCK_UN)\n lock_fd.close()\n+\n+\[email protected](scope='session', autouse=True)\n+def setup_controller_cloud_env(request):\n+ \"\"\"Setup controller cloud environment variable if --controller-cloud is specified.\"\"\"\n+ if not request.config.getoption('--controller-cloud'):\n+ yield\n+ return\n+\n+ # Set environment variable to indicate we're using remote server\n+ controller_cloud = request.config.getoption('--controller-cloud')\n+ os.environ['PYTEST_SKYPILOT_CONTROLLER_CLOUD'] = controller_cloud\n+ yield controller_cloud\ndiff --git a/tests/smoke_tests/smoke_tests_utils.py b/tests/smoke_tests/smoke_tests_utils.py\nindex 7300319385d..49d11443aa9 100644\n--- a/tests/smoke_tests/smoke_tests_utils.py\n+++ b/tests/smoke_tests/smoke_tests_utils.py\n@@ -8,7 +8,8 @@\n import subprocess\n import sys\n import tempfile\n-from typing import Any, Dict, List, NamedTuple, Optional, Sequence, Set, Tuple\n+from typing import (Any, Dict, Generator, List, NamedTuple, Optional, Sequence,\n+ Set, Tuple)\n import uuid\n \n import colorama\n@@ -355,6 +356,64 @@ def terminate_gcp_replica(name: str, zone: str, replica_id: int) -> str:\n f' --quiet $({query_cmd})')\n \n \[email protected]\n+def override_sky_config(\n+ test: Test, env_dict: Dict[str, str]\n+) -> Generator[Optional[tempfile.NamedTemporaryFile], None, None]:\n+\n+ def deep_update(base_dict: Dict[str, Any],\n+ update_dict: Dict[str, Any]) -> Dict[str, Any]:\n+ \"\"\"\n+ Recursively update a dictionary with another dictionary.\n+ \"\"\"\n+ for key, value in update_dict.items():\n+ if isinstance(value, dict) and key in base_dict and isinstance(\n+ base_dict[key], dict):\n+ deep_update(base_dict[key], value)\n+ else:\n+ base_dict[key] = value\n+ return base_dict\n+\n+ override_sky_config_dict = dict()\n+ if is_remote_server_test():\n+ override_sky_config_dict['api_server'] = {\n+ 'endpoint': docker_utils.get_api_server_endpoint_inside_docker()\n+ }\n+ test.echo(\n+ f'Overriding API server endpoint: {override_sky_config_dict[\"api_server\"][\"endpoint\"]}'\n+ )\n+\n+ if pytest_controller_cloud():\n+ override_sky_config_dict['jobs'] = {\n+ 'controller': {\n+ 
'resources': {\n+ 'cloud': pytest_controller_cloud()\n+ }\n+ }\n+ }\n+ test.echo(\n+ f'Overriding controller cloud: {override_sky_config_dict[\"jobs\"][\"controller\"][\"resources\"][\"cloud\"]}'\n+ )\n+\n+ if not override_sky_config_dict:\n+ yield None\n+ return\n+\n+ temp_config_file = tempfile.NamedTemporaryFile(mode='w', suffix='.yaml')\n+ if skypilot_config.ENV_VAR_SKYPILOT_CONFIG in env_dict:\n+ # Read the original config\n+ with open(env_dict[skypilot_config.ENV_VAR_SKYPILOT_CONFIG], 'r') as f:\n+ original_config = yaml.safe_load(f)\n+ else:\n+ original_config = {}\n+ original_config = deep_update(original_config, override_sky_config_dict)\n+ yaml.dump(original_config, temp_config_file)\n+ temp_config_file.flush()\n+ # Update the environment variable to use the temporary file\n+ env_dict[skypilot_config.ENV_VAR_SKYPILOT_CONFIG] = temp_config_file.name\n+ yield temp_config_file\n+\n+\n def run_one_test(test: Test) -> None:\n # Fail fast if `sky` CLI somehow errors out.\n subprocess.run(['sky', 'status'], stdout=subprocess.DEVNULL, check=True)\n@@ -378,86 +437,65 @@ def run_one_test(test: Test) -> None:\n if test.env:\n env_dict.update(test.env)\n \n- # Create a temporary config file with API server config only if running with remote server\n- if is_remote_server_test():\n- temp_config = tempfile.NamedTemporaryFile(mode='w',\n- suffix='.yaml',\n- delete=False)\n- if skypilot_config.ENV_VAR_SKYPILOT_CONFIG in env_dict:\n- # Read the original config\n- with open(env_dict[skypilot_config.ENV_VAR_SKYPILOT_CONFIG],\n- 'r') as f:\n- config = yaml.safe_load(f)\n- else:\n- config = {}\n- config['api_server'] = {\n- 'endpoint': docker_utils.get_api_server_endpoint_inside_docker()\n- }\n- test.echo(\n- f'Overriding API server endpoint: {config[\"api_server\"][\"endpoint\"]}'\n- )\n- yaml.dump(config, temp_config)\n- temp_config.close()\n- # Update the environment variable to use the temporary file\n- env_dict[skypilot_config.ENV_VAR_SKYPILOT_CONFIG] = temp_config.name\n-\n- for command in test.commands:\n- write(f'+ {command}\\n')\n- flush()\n- proc = subprocess.Popen(\n- command,\n- stdout=subprocess_out,\n- stderr=subprocess.STDOUT,\n- shell=True,\n- executable='/bin/bash',\n- env=env_dict,\n- )\n- try:\n- proc.wait(timeout=test.timeout)\n- except subprocess.TimeoutExpired as e:\n+ with override_sky_config(test, env_dict):\n+ for command in test.commands:\n+ write(f'+ {command}\\n')\n flush()\n- test.echo(f'Timeout after {test.timeout} seconds.')\n- test.echo(str(e))\n- write(f'Timeout after {test.timeout} seconds.\\n')\n- flush()\n- # Kill the current process.\n- proc.terminate()\n- proc.returncode = 1 # None if we don't set it.\n- break\n-\n- if proc.returncode:\n- break\n-\n- style = colorama.Style\n- fore = colorama.Fore\n- outcome = (f'{fore.RED}Failed{style.RESET_ALL} (returned {proc.returncode})'\n- if proc.returncode else f'{fore.GREEN}Passed{style.RESET_ALL}')\n- reason = f'\\nReason: {command}' if proc.returncode else ''\n- msg = (f'{outcome}.'\n- f'{reason}')\n- if log_to_stdout:\n- test.echo(msg)\n- else:\n- msg += f'\\nLog: less -r {log_file.name}\\n'\n- test.echo(msg)\n- write(msg)\n-\n- if (proc.returncode == 0 or\n- pytest.terminate_on_failure) and test.teardown is not None:\n- subprocess_utils.run(\n- test.teardown,\n- stdout=subprocess_out,\n- stderr=subprocess.STDOUT,\n- timeout=10 * 60, # 10 mins\n- shell=True,\n- env=env_dict,\n- )\n-\n- if proc.returncode:\n+ proc = subprocess.Popen(\n+ command,\n+ stdout=subprocess_out,\n+ stderr=subprocess.STDOUT,\n+ shell=True,\n+ 
executable='/bin/bash',\n+ env=env_dict,\n+ )\n+ try:\n+ proc.wait(timeout=test.timeout)\n+ except subprocess.TimeoutExpired as e:\n+ flush()\n+ test.echo(f'Timeout after {test.timeout} seconds.')\n+ test.echo(str(e))\n+ write(f'Timeout after {test.timeout} seconds.\\n')\n+ flush()\n+ # Kill the current process.\n+ proc.terminate()\n+ proc.returncode = 1 # None if we don't set it.\n+ break\n+\n+ if proc.returncode:\n+ break\n+\n+ style = colorama.Style\n+ fore = colorama.Fore\n+ outcome = (\n+ f'{fore.RED}Failed{style.RESET_ALL} (returned {proc.returncode})'\n+ if proc.returncode else f'{fore.GREEN}Passed{style.RESET_ALL}')\n+ reason = f'\\nReason: {command}' if proc.returncode else ''\n+ msg = (f'{outcome}.'\n+ f'{reason}')\n if log_to_stdout:\n- raise Exception(f'test failed')\n+ test.echo(msg)\n else:\n- raise Exception(f'test failed: less -r {log_file.name}')\n+ msg += f'\\nLog: less -r {log_file.name}\\n'\n+ test.echo(msg)\n+ write(msg)\n+\n+ if (proc.returncode == 0 or\n+ pytest.terminate_on_failure) and test.teardown is not None:\n+ subprocess_utils.run(\n+ test.teardown,\n+ stdout=subprocess_out,\n+ stderr=subprocess.STDOUT,\n+ timeout=10 * 60, # 10 mins\n+ shell=True,\n+ env=env_dict,\n+ )\n+\n+ if proc.returncode:\n+ if log_to_stdout:\n+ raise Exception(f'test failed')\n+ else:\n+ raise Exception(f'test failed: less -r {log_file.name}')\n \n \n def get_aws_region_for_quota_failover() -> Optional[str]:\n@@ -663,6 +701,16 @@ def is_remote_server_test() -> bool:\n return 'PYTEST_SKYPILOT_REMOTE_SERVER_TEST' in os.environ\n \n \n+def pytest_controller_cloud() -> Optional[str]:\n+ return os.environ.get('PYTEST_SKYPILOT_CONTROLLER_CLOUD', None)\n+\n+\n+def override_env_config(config: Dict[str, str]):\n+ \"\"\"Override the environment variable for the test.\"\"\"\n+ for key, value in config.items():\n+ os.environ[key] = value\n+\n+\n def get_api_server_url() -> str:\n \"\"\"Get the API server URL in the test environment.\"\"\"\n if is_remote_server_test():\ndiff --git a/tests/smoke_tests/test_basic.py b/tests/smoke_tests/test_basic.py\nindex 951b35cba00..43150607087 100644\n--- a/tests/smoke_tests/test_basic.py\n+++ b/tests/smoke_tests/test_basic.py\n@@ -33,7 +33,6 @@\n \n import sky\n from sky import skypilot_config\n-from sky.clouds import Lambda\n from sky.skylet import constants\n from sky.skylet import events\n from sky.utils import common_utils\n"
}
|
[
{
"diff_hunk": "@@ -355,6 +356,64 @@ def terminate_gcp_replica(name: str, zone: str, replica_id: int) -> str:\n f' --quiet $({query_cmd})')\n \n \[email protected]\n+def override_sky_config(\n+ test: Test, env_dict: Dict[str, str]\n+) -> Generator[Optional[tempfile.NamedTemporaryFile], None, None]:\n+\n+ def deep_update(base_dict: Dict[str, Any],\n+ update_dict: Dict[str, Any]) -> Dict[str, Any]:\n+ \"\"\"\n+ Recursively update a dictionary with another dictionary.\n+ \"\"\"\n+ for key, value in update_dict.items():\n+ if isinstance(value, dict) and key in base_dict and isinstance(\n+ base_dict[key], dict):\n+ deep_update(base_dict[key], value)\n+ else:\n+ base_dict[key] = value\n+ return base_dict\n+\n+ override_sky_config_dict = dict()\n+ if is_remote_server_test():\n+ override_sky_config_dict['api_server'] = {\n+ 'endpoint': docker_utils.get_api_server_endpoint_inside_docker()\n+ }\n+ test.echo(\n+ f'Overriding API server endpoint: {override_sky_config_dict[\"api_server\"][\"endpoint\"]}'\n+ )\n+\n+ if pytest_controller_cloud():\n+ override_sky_config_dict['jobs'] = {\n+ 'controller': {\n+ 'resources': {\n+ 'cloud': pytest_controller_cloud()\n+ }\n+ }\n+ }\n+ test.echo(\n+ f'Overriding controller cloud: {override_sky_config_dict[\"jobs\"][\"controller\"][\"resources\"][\"cloud\"]}'\n+ )\n+\n+ if not override_sky_config_dict:\n+ yield None\n+ return\n+\n+ temp_config_file = tempfile.NamedTemporaryFile(mode='w', suffix='.yaml')\n+ if skypilot_config.ENV_VAR_SKYPILOT_CONFIG in env_dict:\n+ # Read the original config\n+ with open(env_dict[skypilot_config.ENV_VAR_SKYPILOT_CONFIG], 'r') as f:\n+ original_config = yaml.safe_load(f)",
"line": null,
"original_line": 406,
"original_start_line": 402,
"path": "tests/smoke_tests/smoke_tests_utils.py",
"start_line": null,
"text": "@user1:\nThis is also potentially replaceable with `/sky/skypilot_config.py:_parse_config_file()`"
},
{
"diff_hunk": "@@ -355,6 +356,64 @@ def terminate_gcp_replica(name: str, zone: str, replica_id: int) -> str:\n f' --quiet $({query_cmd})')\n \n \[email protected]\n+def override_sky_config(\n+ test: Test, env_dict: Dict[str, str]\n+) -> Generator[Optional[tempfile.NamedTemporaryFile], None, None]:\n+\n+ def deep_update(base_dict: Dict[str, Any],",
"line": null,
"original_line": 364,
"original_start_line": null,
"path": "tests/smoke_tests/smoke_tests_utils.py",
"start_line": null,
"text": "@user1:\nWe have a similar function at `/sky/skypilot_config.py:overlay_skypilot_config`, since `Dict[str, Any]` can be trivially converted to `config_utils.Config` you may be able to use that function instead of defining a function here."
},
{
"diff_hunk": "@@ -355,6 +356,64 @@ def terminate_gcp_replica(name: str, zone: str, replica_id: int) -> str:\n f' --quiet $({query_cmd})')\n \n \[email protected]\n+def override_sky_config(\n+ test: Test, env_dict: Dict[str, str]\n+) -> Generator[Optional[tempfile.NamedTemporaryFile], None, None]:\n+\n+ def deep_update(base_dict: Dict[str, Any],\n+ update_dict: Dict[str, Any]) -> Dict[str, Any]:\n+ \"\"\"\n+ Recursively update a dictionary with another dictionary.\n+ \"\"\"\n+ for key, value in update_dict.items():\n+ if isinstance(value, dict) and key in base_dict and isinstance(\n+ base_dict[key], dict):\n+ deep_update(base_dict[key], value)\n+ else:\n+ base_dict[key] = value\n+ return base_dict\n+\n+ override_sky_config_dict = dict()\n+ if is_remote_server_test():\n+ override_sky_config_dict['api_server'] = {\n+ 'endpoint': docker_utils.get_api_server_endpoint_inside_docker()\n+ }\n+ test.echo(\n+ f'Overriding API server endpoint: {override_sky_config_dict[\"api_server\"][\"endpoint\"]}'\n+ )\n+\n+ if pytest_controller_cloud():\n+ override_sky_config_dict['jobs'] = {",
"line": null,
"original_line": 387,
"original_start_line": null,
"path": "tests/smoke_tests/smoke_tests_utils.py",
"start_line": null,
"text": "@user1:\nYou may be able to declare `override_sky_config` as `config_utils.Config` type, in which case this below should be a valid replacement:\r\n\r\n`override_sky_config.set_nested(('jobs', 'controller', 'resources', 'cloud'), pytest_controller_cloud())`"
}
] |
fcc3929fd3e9a4d025fb2f9df03b964e04e21dc8
|
diff --git a/.buildkite/generate_pipeline.py b/.buildkite/generate_pipeline.py
index 3ea4d9eb27f..3cce4b17e35 100644
--- a/.buildkite/generate_pipeline.py
+++ b/.buildkite/generate_pipeline.py
@@ -114,6 +114,8 @@ def _parse_args(args: Optional[str] = None):
parser.add_argument('--base-branch')
+ parser.add_argument('--controller-cloud')
+
parsed_args, _ = parser.parse_known_args(args_list)
# Collect chosen clouds from the flags
@@ -142,6 +144,8 @@ def _parse_args(args: Optional[str] = None):
extra_args.append('--remote-server')
if parsed_args.base_branch:
extra_args.append(f'--base-branch {parsed_args.base_branch}')
+ if parsed_args.controller_cloud:
+ extra_args.append(f'--controller-cloud {parsed_args.controller_cloud}')
return default_clouds_to_run, parsed_args.k, extra_args
diff --git a/sky/skypilot_config.py b/sky/skypilot_config.py
index ec0a6534a68..80f02eca30e 100644
--- a/sky/skypilot_config.py
+++ b/sky/skypilot_config.py
@@ -139,7 +139,7 @@ def get_user_config() -> config_utils.Config:
# load the user config file
if os.path.exists(user_config_path):
- user_config = _parse_config_file(user_config_path)
+ user_config = parse_config_file(user_config_path)
_validate_config(user_config, user_config_path)
else:
user_config = config_utils.Config()
@@ -168,7 +168,7 @@ def _get_project_config() -> config_utils.Config:
# load the project config file
if os.path.exists(project_config_path):
- project_config = _parse_config_file(project_config_path)
+ project_config = parse_config_file(project_config_path)
_validate_config(project_config, project_config_path)
else:
project_config = config_utils.Config()
@@ -197,7 +197,7 @@ def get_server_config() -> config_utils.Config:
# load the server config file
if os.path.exists(server_config_path):
- server_config = _parse_config_file(server_config_path)
+ server_config = parse_config_file(server_config_path)
_validate_config(server_config, server_config_path)
else:
server_config = config_utils.Config()
@@ -302,7 +302,7 @@ def _reload_config() -> None:
_reload_config_as_client()
-def _parse_config_file(config_path: str) -> config_utils.Config:
+def parse_config_file(config_path: str) -> config_utils.Config:
config = config_utils.Config()
try:
config_dict = common_utils.read_yaml(config_path)
@@ -359,7 +359,7 @@ def _reload_config_from_internal_file(internal_config_path: str) -> None:
'exist. Please double check the path or unset the env var: '
f'unset {ENV_VAR_SKYPILOT_CONFIG}')
logger.debug(f'Using config path: {config_path}')
- _dict = _parse_config_file(config_path)
+ _dict = parse_config_file(config_path)
_loaded_config_path = config_path
@@ -506,7 +506,7 @@ def _compose_cli_config(cli_config: Optional[List[str]]) -> config_utils.Config:
'Cannot use multiple --config flags with a config file.')
config_source = maybe_config_path
# cli_config is a path to a config file
- parsed_config = _parse_config_file(maybe_config_path)
+ parsed_config = parse_config_file(maybe_config_path)
else: # cli_config is a comma-separated list of key-value pairs
parsed_config = _parse_dotlist(cli_config)
_validate_config(parsed_config, config_source)
diff --git a/tests/conftest.py b/tests/conftest.py
index 6c24bcb0919..79ad4518ca4 100644
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -140,6 +140,12 @@ def pytest_addoption(parser):
default='master',
help='Base branch to test backward compatibility against',
)
+ parser.addoption(
+ '--controller-cloud',
+ type=str,
+ default=None,
+ help='Controller cloud to use for tests',
+ )
def pytest_configure(config):
@@ -437,3 +443,16 @@ def setup_docker_container(request):
# Release the lock and close the file
fcntl.flock(lock_fd, fcntl.LOCK_UN)
lock_fd.close()
+
+
[email protected](scope='session', autouse=True)
+def setup_controller_cloud_env(request):
+ """Setup controller cloud environment variable if --controller-cloud is specified."""
+ if not request.config.getoption('--controller-cloud'):
+ yield
+ return
+
+ # Set environment variable to indicate we're using remote server
+ controller_cloud = request.config.getoption('--controller-cloud')
+ os.environ['PYTEST_SKYPILOT_CONTROLLER_CLOUD'] = controller_cloud
+ yield controller_cloud
diff --git a/tests/smoke_tests/smoke_tests_utils.py b/tests/smoke_tests/smoke_tests_utils.py
index 7300319385d..4396a73b4a9 100644
--- a/tests/smoke_tests/smoke_tests_utils.py
+++ b/tests/smoke_tests/smoke_tests_utils.py
@@ -8,7 +8,8 @@
import subprocess
import sys
import tempfile
-from typing import Any, Dict, List, NamedTuple, Optional, Sequence, Set, Tuple
+from typing import (Any, Dict, Generator, List, NamedTuple, Optional, Sequence,
+ Set, Tuple)
import uuid
import colorama
@@ -355,6 +356,50 @@ def terminate_gcp_replica(name: str, zone: str, replica_id: int) -> str:
f' --quiet $({query_cmd})')
[email protected]
+def override_sky_config(
+ test: Test, env_dict: Dict[str, str]
+) -> Generator[Optional[tempfile.NamedTemporaryFile], None, None]:
+ override_sky_config_dict = skypilot_config.config_utils.Config()
+ if is_remote_server_test():
+ endpoint = docker_utils.get_api_server_endpoint_inside_docker()
+ override_sky_config_dict.set_nested(('api_server', 'endpoint'),
+ endpoint)
+ test.echo(
+ f'Overriding API server endpoint: '
+ f'{override_sky_config_dict.get_nested(("api_server", "endpoint"), "UNKNOWN")}'
+ )
+ if pytest_controller_cloud():
+ cloud = pytest_controller_cloud()
+ override_sky_config_dict.set_nested(
+ ('jobs', 'controller', 'resources', 'cloud'), cloud)
+ override_sky_config_dict.set_nested(
+ ('serve', 'controller', 'resources', 'cloud'), cloud)
+ test.echo(
+ f'Overriding controller cloud: '
+ f'{override_sky_config_dict.get_nested(("jobs", "controller", "resources", "cloud"), "UNKNOWN")}'
+ )
+
+ if not override_sky_config_dict:
+ yield None
+ return
+
+ temp_config_file = tempfile.NamedTemporaryFile(mode='w', suffix='.yaml')
+ if skypilot_config.ENV_VAR_SKYPILOT_CONFIG in env_dict:
+ # Read the original config
+ original_config = skypilot_config.parse_config_file(
+ env_dict[skypilot_config.ENV_VAR_SKYPILOT_CONFIG])
+ else:
+ original_config = skypilot_config.config_utils.Config()
+ overlay_config = skypilot_config.overlay_skypilot_config(
+ original_config, override_sky_config_dict)
+ temp_config_file.write(common_utils.dump_yaml_str(dict(overlay_config)))
+ temp_config_file.flush()
+ # Update the environment variable to use the temporary file
+ env_dict[skypilot_config.ENV_VAR_SKYPILOT_CONFIG] = temp_config_file.name
+ yield temp_config_file
+
+
def run_one_test(test: Test) -> None:
# Fail fast if `sky` CLI somehow errors out.
subprocess.run(['sky', 'status'], stdout=subprocess.DEVNULL, check=True)
@@ -378,86 +423,65 @@ def run_one_test(test: Test) -> None:
if test.env:
env_dict.update(test.env)
- # Create a temporary config file with API server config only if running with remote server
- if is_remote_server_test():
- temp_config = tempfile.NamedTemporaryFile(mode='w',
- suffix='.yaml',
- delete=False)
- if skypilot_config.ENV_VAR_SKYPILOT_CONFIG in env_dict:
- # Read the original config
- with open(env_dict[skypilot_config.ENV_VAR_SKYPILOT_CONFIG],
- 'r') as f:
- config = yaml.safe_load(f)
- else:
- config = {}
- config['api_server'] = {
- 'endpoint': docker_utils.get_api_server_endpoint_inside_docker()
- }
- test.echo(
- f'Overriding API server endpoint: {config["api_server"]["endpoint"]}'
- )
- yaml.dump(config, temp_config)
- temp_config.close()
- # Update the environment variable to use the temporary file
- env_dict[skypilot_config.ENV_VAR_SKYPILOT_CONFIG] = temp_config.name
-
- for command in test.commands:
- write(f'+ {command}\n')
- flush()
- proc = subprocess.Popen(
- command,
- stdout=subprocess_out,
- stderr=subprocess.STDOUT,
- shell=True,
- executable='/bin/bash',
- env=env_dict,
- )
- try:
- proc.wait(timeout=test.timeout)
- except subprocess.TimeoutExpired as e:
+ with override_sky_config(test, env_dict):
+ for command in test.commands:
+ write(f'+ {command}\n')
flush()
- test.echo(f'Timeout after {test.timeout} seconds.')
- test.echo(str(e))
- write(f'Timeout after {test.timeout} seconds.\n')
- flush()
- # Kill the current process.
- proc.terminate()
- proc.returncode = 1 # None if we don't set it.
- break
-
- if proc.returncode:
- break
-
- style = colorama.Style
- fore = colorama.Fore
- outcome = (f'{fore.RED}Failed{style.RESET_ALL} (returned {proc.returncode})'
- if proc.returncode else f'{fore.GREEN}Passed{style.RESET_ALL}')
- reason = f'\nReason: {command}' if proc.returncode else ''
- msg = (f'{outcome}.'
- f'{reason}')
- if log_to_stdout:
- test.echo(msg)
- else:
- msg += f'\nLog: less -r {log_file.name}\n'
- test.echo(msg)
- write(msg)
-
- if (proc.returncode == 0 or
- pytest.terminate_on_failure) and test.teardown is not None:
- subprocess_utils.run(
- test.teardown,
- stdout=subprocess_out,
- stderr=subprocess.STDOUT,
- timeout=10 * 60, # 10 mins
- shell=True,
- env=env_dict,
- )
-
- if proc.returncode:
+ proc = subprocess.Popen(
+ command,
+ stdout=subprocess_out,
+ stderr=subprocess.STDOUT,
+ shell=True,
+ executable='/bin/bash',
+ env=env_dict,
+ )
+ try:
+ proc.wait(timeout=test.timeout)
+ except subprocess.TimeoutExpired as e:
+ flush()
+ test.echo(f'Timeout after {test.timeout} seconds.')
+ test.echo(str(e))
+ write(f'Timeout after {test.timeout} seconds.\n')
+ flush()
+ # Kill the current process.
+ proc.terminate()
+ proc.returncode = 1 # None if we don't set it.
+ break
+
+ if proc.returncode:
+ break
+
+ style = colorama.Style
+ fore = colorama.Fore
+ outcome = (
+ f'{fore.RED}Failed{style.RESET_ALL} (returned {proc.returncode})'
+ if proc.returncode else f'{fore.GREEN}Passed{style.RESET_ALL}')
+ reason = f'\nReason: {command}' if proc.returncode else ''
+ msg = (f'{outcome}.'
+ f'{reason}')
if log_to_stdout:
- raise Exception(f'test failed')
+ test.echo(msg)
else:
- raise Exception(f'test failed: less -r {log_file.name}')
+ msg += f'\nLog: less -r {log_file.name}\n'
+ test.echo(msg)
+ write(msg)
+
+ if (proc.returncode == 0 or
+ pytest.terminate_on_failure) and test.teardown is not None:
+ subprocess_utils.run(
+ test.teardown,
+ stdout=subprocess_out,
+ stderr=subprocess.STDOUT,
+ timeout=10 * 60, # 10 mins
+ shell=True,
+ env=env_dict,
+ )
+
+ if proc.returncode:
+ if log_to_stdout:
+ raise Exception(f'test failed')
+ else:
+ raise Exception(f'test failed: less -r {log_file.name}')
def get_aws_region_for_quota_failover() -> Optional[str]:
@@ -663,6 +687,16 @@ def is_remote_server_test() -> bool:
return 'PYTEST_SKYPILOT_REMOTE_SERVER_TEST' in os.environ
+def pytest_controller_cloud() -> Optional[str]:
+ return os.environ.get('PYTEST_SKYPILOT_CONTROLLER_CLOUD', None)
+
+
+def override_env_config(config: Dict[str, str]):
+ """Override the environment variable for the test."""
+ for key, value in config.items():
+ os.environ[key] = value
+
+
def get_api_server_url() -> str:
"""Get the API server URL in the test environment."""
if is_remote_server_test():
diff --git a/tests/smoke_tests/test_basic.py b/tests/smoke_tests/test_basic.py
index 951b35cba00..43150607087 100644
--- a/tests/smoke_tests/test_basic.py
+++ b/tests/smoke_tests/test_basic.py
@@ -33,7 +33,6 @@
import sky
from sky import skypilot_config
-from sky.clouds import Lambda
from sky.skylet import constants
from sky.skylet import events
from sky.utils import common_utils
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
}
|
skypilot-org__skypilot-5297@d3743c2
|
skypilot-org/skypilot
|
Python
| 5,297
|
improve CLI config flag's parsing
|
<!-- Describe the changes in this PR -->
Closes https://github.com/skypilot-org/skypilot/issues/5296
The CLI config flag currently accepts a comma-separated list of config overrides. For example:
`--config kubernetes.provision_timeout=600,kubernetes.pod_config.spec.priorityClassName=high-priority`
is a valid CLI flag.
The CLI parsing logic currently just splits the input string on commas and feeds the result into a dotlist parser. While this works for the example above, it fails to parse config overrides such as:
`--config kubernetes.allowed_contexts=[context1,context2]`
or
`--config gcp.labels.mylabel="one,two,three"`
because the CLI splits the input at the commas inside the value.
I originally attempted to solve this by having the parsing logic account for curly braces, square brackets, and quotes, but the number of edge cases to handle kept growing as I iterated on that logic.
I think the simplest way to provide robust parsing here is to let users pass each config override in a separate `--config` flag.
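To make the failure mode concrete, here is a small standalone illustration (assuming `omegaconf` is installed, as the existing parsing code already relies on it) of why comma-splitting breaks bracketed values and why handing the parser one complete `key=value` string per `--config` flag avoids the problem. It is a sketch of the idea, not the PR's actual code.

```python
from omegaconf import OmegaConf

raw = 'kubernetes.allowed_contexts=[context1,context2]'

# Old behaviour: a single --config string was split on every comma, which
# tears the bracketed value apart before it ever reaches the parser.
print(raw.split(','))
# ['kubernetes.allowed_contexts=[context1', 'context2]'] -> not a valid dotlist

# New behaviour: each --config flag is already one complete key=value entry,
# so the list of flags goes straight to the dotlist parser and the bracketed
# value reaches it intact.
overrides = [
    'kubernetes.provision_timeout=600',
    'kubernetes.allowed_contexts=[context1,context2]',
]
print(OmegaConf.to_yaml(OmegaConf.from_dotlist(overrides)))
```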
<!-- Describe the tests ran -->
<!-- Unit tests (tests/test_*.py) are part of GitHub CI; below are tests that launch on the cloud. -->
Tested (run the relevant ones):
- [ ] Code formatting: install pre-commit (auto-check on commit) or `bash format.sh`
- [ ] Any manual or new tests for this PR (please specify below)
- [ ] All smoke tests: `/smoke-test` (CI) or `pytest tests/test_smoke.py` (local)
- [ ] Relevant individual tests: `/smoke-test -k test_name` (CI) or `pytest tests/test_smoke.py::test_name` (local)
- [ ] Backward compatibility: `/quicktest-core` (CI) or `pytest tests/smoke_tests/test_backward_compat.py` (local)
<!-- CI commands (/-prefixed) can only be triggered by repo members -->
|
2025-04-21T17:35:33Z
|
[config] CLI "--config" flag does not properly parse commas passed as values
The CLI config flag accepts a comma-separated list of config overrides. For example:
`--config kubernetes.provision_timeout=600,kubernetes.pod_config.spec.priorityClassName=high-priority`
is a valid CLI flag.
The CLI parsing logic currently just splits the input string on commas and feeds the result into a dotlist parser. While this works for the example above, it fails to parse config overrides such as:
`--config kubernetes.allowed_contexts=[context1,context2]`
or
`--config gcp.labels.mylabel="one,two,three"`
because the CLI splits the input at the commas inside the value.
|
[
{
"body": "The CLI config flag accepts a comma separated list of config overrides. For example:\n`--config kubernetes.provision_timeout=600,kubernetes.pod_config.spec.priorityClassName=high-priority`\nis a valid CLI flag.\n\nThe CLI parsing logic currently simply splits the input string by commas and feed the result into a dotlist parser. While this logic works for example above, the logic fails at parsing config overrides such as:\n`--config kubernetes.allowed_contexts=[context1,context2]`\nor\n`--config gcp.labels.mylabel=\"one,two,three\"`\nbecause CLI splits the input by the commas within the value designation.\n\n",
"number": 5296,
"title": "[config] CLI \"--config\" flag does not properly parse commas passed as values"
}
] |
c975eab356a7b927a63320cf45452c5d3562db87
|
{
"head_commit": "d3743c2baf8547b77a03a6aa1006fa8f2238373a",
"head_commit_message": "improve CLI config flag's parsing",
"patch_to_review": "diff --git a/docs/source/reference/config-sources.rst b/docs/source/reference/config-sources.rst\nindex d09445146d4..8a213602686 100644\n--- a/docs/source/reference/config-sources.rst\n+++ b/docs/source/reference/config-sources.rst\n@@ -138,7 +138,8 @@ CLI flag\n \n You can pass configuration arguments to the CLI using the ``--config`` flag.\n \n-The ``--config`` flag can either be a path to a config YAML file, or a dotlist of key-value pairs. Only one ``--config`` flag can be provided.\n+The ``--config`` flag can either be a path to a config YAML file, or a dotlist of key-value pairs.\n+If passing in a config file, only one ``--config`` flag can be provided.\n \n Example:\n \n@@ -147,8 +148,8 @@ Example:\n # pass a config file\n sky launch --config my_config.yaml ...\n # pass individual config options\n- sky launch --config 'kubernetes.provision_timeout=600,kubernetes.pod_config.spec.priorityClassName=high-priority' ...\n- sky launch --config 'kubernetes.custom_metadata.annotations.myannotation1=myvalue1,kubernetes.custom_metadata.annotations.myannotation2=myvalue2' ...\n+ sky launch --config 'kubernetes.provision_timeout=600' --config 'kubernetes.pod_config.spec.priorityClassName=high-priority' ...\n+ sky launch --config 'kubernetes.custom_metadata.annotations.myannotation1=myvalue1' --config 'kubernetes.custom_metadata.annotations.myannotation2=myvalue2' ...\n \n \n .. _config-overrides:\ndiff --git a/sky/cli.py b/sky/cli.py\nindex 5d7b7854348..cc873d184fd 100644\n--- a/sky/cli.py\n+++ b/sky/cli.py\n@@ -302,13 +302,9 @@ def preprocess_config_options(ctx, param, value):\n try:\n if len(value) == 0:\n return None\n- elif len(value) > 1:\n- raise ValueError('argument specified multiple times. '\n- 'To specify multiple configs, use '\n- '--config nested.key1=val1,another.key2=val2')\n else:\n # Apply the config overrides to the skypilot config.\n- return skypilot_config.apply_cli_config(value[0])\n+ return skypilot_config.apply_cli_config(value)\n except ValueError as e:\n raise click.BadParameter(f'{str(e)}') from e\n \ndiff --git a/sky/skypilot_config.py b/sky/skypilot_config.py\nindex cfe1854a66e..0a5255518ea 100644\n--- a/sky/skypilot_config.py\n+++ b/sky/skypilot_config.py\n@@ -464,7 +464,7 @@ def override_skypilot_config(\n _config_overridden = False\n \n \n-def _compose_cli_config(cli_config: Optional[str],) -> config_utils.Config:\n+def _compose_cli_config(cli_config: Optional[List[str]]) -> config_utils.Config:\n \"\"\"Composes the skypilot CLI config.\n CLI config can either be:\n - A path to a config file\n@@ -475,18 +475,16 @@ def _compose_cli_config(cli_config: Optional[str],) -> config_utils.Config:\n return config_utils.Config()\n \n config_source = 'CLI'\n- maybe_config_path = os.path.expanduser(cli_config)\n try:\n- if os.path.isfile(maybe_config_path):\n+ maybe_config_path = os.path.expanduser(cli_config[0])\n+ if len(cli_config) == 1 and os.path.isfile(maybe_config_path):\n config_source = maybe_config_path\n # cli_config is a path to a config file\n parsed_config = OmegaConf.to_object(\n OmegaConf.load(maybe_config_path))\n else: # cli_config is a comma-separated list of key-value pairs\n- variables: List[str] = []\n- variables = cli_config.split(',')\n parsed_config = OmegaConf.to_object(\n- OmegaConf.from_dotlist(variables))\n+ OmegaConf.from_dotlist(cli_config))\n _validate_config(parsed_config, config_source)\n except ValueError as e:\n raise ValueError(f'Invalid config override: {cli_config}. 
'\n@@ -497,7 +495,7 @@ def _compose_cli_config(cli_config: Optional[str],) -> config_utils.Config:\n return parsed_config\n \n \n-def apply_cli_config(cli_config: Optional[str]) -> Dict[str, Any]:\n+def apply_cli_config(cli_config: Optional[List[str]]) -> Dict[str, Any]:\n \"\"\"Applies the CLI provided config.\n SAFETY:\n This function directly modifies the global _dict variable.\n"
}
|
[
{
"diff_hunk": "@@ -475,18 +475,16 @@ def _compose_cli_config(cli_config: Optional[str],) -> config_utils.Config:\n return config_utils.Config()\n \n config_source = 'CLI'\n- maybe_config_path = os.path.expanduser(cli_config)\n try:\n- if os.path.isfile(maybe_config_path):\n+ maybe_config_path = os.path.expanduser(cli_config[0])\n+ if len(cli_config) == 1 and os.path.isfile(maybe_config_path):",
"line": null,
"original_line": 480,
"original_start_line": null,
"path": "sky/skypilot_config.py",
"start_line": null,
"text": "@user1:\n```suggestion\r\n if os.path.isfile(maybe_config_path):\r\n if len(cli_config) != 1:\r\n raise ValueError('Cannot use multiple --config flags with a config file.')\r\n```\r\nJust to make sure the error is more clear in this case."
}
] |
830ca817e6327e85bfd63b8c57c660bdfd88b257
|
diff --git a/docs/source/reference/config-sources.rst b/docs/source/reference/config-sources.rst
index d09445146d4..8a213602686 100644
--- a/docs/source/reference/config-sources.rst
+++ b/docs/source/reference/config-sources.rst
@@ -138,7 +138,8 @@ CLI flag
You can pass configuration arguments to the CLI using the ``--config`` flag.
-The ``--config`` flag can either be a path to a config YAML file, or a dotlist of key-value pairs. Only one ``--config`` flag can be provided.
+The ``--config`` flag can either be a path to a config YAML file, or a dotlist of key-value pairs.
+If passing in a config file, only one ``--config`` flag can be provided.
Example:
@@ -147,8 +148,8 @@ Example:
# pass a config file
sky launch --config my_config.yaml ...
# pass individual config options
- sky launch --config 'kubernetes.provision_timeout=600,kubernetes.pod_config.spec.priorityClassName=high-priority' ...
- sky launch --config 'kubernetes.custom_metadata.annotations.myannotation1=myvalue1,kubernetes.custom_metadata.annotations.myannotation2=myvalue2' ...
+ sky launch --config 'kubernetes.provision_timeout=600' --config 'kubernetes.pod_config.spec.priorityClassName=high-priority' ...
+ sky launch --config 'kubernetes.custom_metadata.annotations.myannotation1=myvalue1' --config 'kubernetes.custom_metadata.annotations.myannotation2=myvalue2' ...
.. _config-overrides:
diff --git a/sky/cli.py b/sky/cli.py
index 5d7b7854348..cc873d184fd 100644
--- a/sky/cli.py
+++ b/sky/cli.py
@@ -302,13 +302,9 @@ def preprocess_config_options(ctx, param, value):
try:
if len(value) == 0:
return None
- elif len(value) > 1:
- raise ValueError('argument specified multiple times. '
- 'To specify multiple configs, use '
- '--config nested.key1=val1,another.key2=val2')
else:
# Apply the config overrides to the skypilot config.
- return skypilot_config.apply_cli_config(value[0])
+ return skypilot_config.apply_cli_config(value)
except ValueError as e:
raise click.BadParameter(f'{str(e)}') from e
diff --git a/sky/skypilot_config.py b/sky/skypilot_config.py
index cfe1854a66e..75653cb01b1 100644
--- a/sky/skypilot_config.py
+++ b/sky/skypilot_config.py
@@ -464,7 +464,7 @@ def override_skypilot_config(
_config_overridden = False
-def _compose_cli_config(cli_config: Optional[str],) -> config_utils.Config:
+def _compose_cli_config(cli_config: Optional[List[str]]) -> config_utils.Config:
"""Composes the skypilot CLI config.
CLI config can either be:
- A path to a config file
@@ -475,18 +475,19 @@ def _compose_cli_config(cli_config: Optional[str],) -> config_utils.Config:
return config_utils.Config()
config_source = 'CLI'
- maybe_config_path = os.path.expanduser(cli_config)
try:
+ maybe_config_path = os.path.expanduser(cli_config[0])
if os.path.isfile(maybe_config_path):
+ if len(cli_config) != 1:
+ raise ValueError(
+ 'Cannot use multiple --config flags with a config file.')
config_source = maybe_config_path
# cli_config is a path to a config file
parsed_config = OmegaConf.to_object(
OmegaConf.load(maybe_config_path))
else: # cli_config is a comma-separated list of key-value pairs
- variables: List[str] = []
- variables = cli_config.split(',')
parsed_config = OmegaConf.to_object(
- OmegaConf.from_dotlist(variables))
+ OmegaConf.from_dotlist(cli_config))
_validate_config(parsed_config, config_source)
except ValueError as e:
raise ValueError(f'Invalid config override: {cli_config}. '
@@ -497,7 +498,7 @@ def _compose_cli_config(cli_config: Optional[str],) -> config_utils.Config:
return parsed_config
-def apply_cli_config(cli_config: Optional[str]) -> Dict[str, Any]:
+def apply_cli_config(cli_config: Optional[List[str]]) -> Dict[str, Any]:
"""Applies the CLI provided config.
SAFETY:
This function directly modifies the global _dict variable.
|
{
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
}
|
|
skypilot-org__skypilot-5368@de97ecb
|
skypilot-org/skypilot
|
Python
| 5,368
|
[Core][RunPod] Show error for RunPod multi-node
|
<!-- Describe the changes in this PR -->
Fixes #5344
<img width="976" alt="dimmed" src="https://github.com/user-attachments/assets/184b669d-2bfa-4e5c-ad4a-076e17f49f07" />
This is a general fix for this class of issues. First, we propagate resource-infeasibility hints caused by unsupported features to the optimizer, letting users know why specific resources were not considered. We also gather all hints along the way and display them in the final error message to the user. This fix covers all `CloudImplementationFeatures`-related exceptions that previously went unreported during the optimization process.
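As a rough, self-contained sketch of the idea (stand-in classes, not the actual `sky` types), the key change is to carry the `NotSupportedError` reason along as a hint instead of silently dropping the resource:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


class NotSupportedError(Exception):
    """Stand-in for sky.exceptions.NotSupportedError."""


@dataclass
class FeasibleResources:  # stand-in for resources_utils.FeasibleResources
    resources_list: List[str] = field(default_factory=list)
    fuzzy_candidate_list: List[str] = field(default_factory=list)
    hint: Optional[str] = None


def check_features(cloud: str, num_nodes: int) -> None:
    # Simplified stand-in for Cloud.check_features_are_supported().
    if cloud == 'RunPod' and num_nodes > 1:
        raise NotSupportedError(f'Multi-node is not supported on {cloud}.')


def get_feasible(cloud: str, num_nodes: int) -> FeasibleResources:
    try:
        check_features(cloud, num_nodes)
    except NotSupportedError as e:
        # Before: silently return an empty list. After: keep the reason so the
        # optimizer can surface it as a per-resource hint in the error message.
        return FeasibleResources(hint=str(e))
    return FeasibleResources(resources_list=[f'{num_nodes}x {cloud}'])


resource_hints: Dict[str, List[str]] = {}
result = get_feasible('RunPod', 2)
if result.hint is not None:
    resource_hints.setdefault('2x RunPod()', []).append(result.hint)
print(resource_hints)
# {'2x RunPod()': ['Multi-node is not supported on RunPod.']}
```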
<!-- Describe the tests ran -->
<!-- Unit tests (tests/test_*.py) are part of GitHub CI; below are tests that launch on the cloud. -->
Tested (run the relevant ones):
- [X] Code formatting: install pre-commit (auto-check on commit) or `bash format.sh`
- [X] Any manual or new tests for this PR (please specify below)
- [ ] All smoke tests: `/smoke-test` (CI) or `pytest tests/test_smoke.py` (local)
- [ ] Relevant individual tests: `/smoke-test -k test_name` (CI) or `pytest tests/test_smoke.py::test_name` (local)
- [ ] Backward compatibility: `/quicktest-core` (CI) or `pytest tests/smoke_tests/test_backward_compat.py` (local)
<!-- CI commands (/-prefixed) can only be triggered by repo members -->
|
2025-04-25T13:28:02Z
|
[Core] Multi-node support error message is not shown
```
(base) ➜ ~ sky launch --cloud runpod --num-nodes 2
No resource satisfying 2x RunPod() on RunPod.
sky.exceptions.ResourcesUnavailableError: Catalog does not contain any instances satisfying the request: 2x RunPod().
To fix: relax or change the resource requirements.
Hint: sky show-gpus to list available accelerators.
sky check to check the enabled clouds.
```
The error message should be better; we should check the `CloudImplementationFeatures.MULTI_NODE` feature during optimization.
User ran into this.
|
Will look into this!
|
[
{
"body": "```\n(base) ➜ ~ sky launch --cloud runpod --num-nodes 2\nNo resource satisfying 2x RunPod() on RunPod.\nsky.exceptions.ResourcesUnavailableError: Catalog does not contain any instances satisfying the request: 2x RunPod().\nTo fix: relax or change the resource requirements.\n\nHint: sky show-gpus to list available accelerators.\n sky check to check the enabled clouds.\n```\n\nError should be better; should check `CloudImplementationFeatures.MULTI_NODE` feature during optimization. \n\nUser ran into this.",
"number": 5344,
"title": "[Core] Multi-node support error message is not shown"
}
] |
957a48596a36157861e09df6c15641e513308f2d
|
{
"head_commit": "de97ecba9a02373c8c1a96f37b6971f256da5def",
"head_commit_message": "update test for changed return param count",
"patch_to_review": "diff --git a/sky/clouds/cloud.py b/sky/clouds/cloud.py\nindex a5777f58a86..d4f4163ebf4 100644\n--- a/sky/clouds/cloud.py\n+++ b/sky/clouds/cloud.py\n@@ -415,13 +415,16 @@ def get_feasible_launchable_resources(\n try:\n self.check_features_are_supported(resources,\n resources_required_features)\n- except exceptions.NotSupportedError:\n+ except exceptions.NotSupportedError as e:\n # TODO(zhwu): The resources are now silently filtered out. We\n # should have some logging telling the user why the resources\n # are not considered.\n+ # UPDATE(kyuds): passing in NotSupportedError reason string\n+ # to hint for issue #5344. Did not remove above comment as\n+ # reason is not displayed when other resources are valid.\n return resources_utils.FeasibleResources(resources_list=[],\n fuzzy_candidate_list=[],\n- hint=None)\n+ hint=str(e))\n return self._get_feasible_launchable_resources(resources)\n \n def _get_feasible_launchable_resources(\ndiff --git a/sky/optimizer.py b/sky/optimizer.py\nindex ea8d50e464e..aac9b5820a2 100644\n--- a/sky/optimizer.py\n+++ b/sky/optimizer.py\n@@ -290,11 +290,11 @@ def get_reservations_available_resources(\n fuzzy_candidates: List[str] = []\n if node_i < len(topo_order) - 1:\n # Convert partial resource labels to launchable resources.\n- launchable_resources, cloud_candidates, fuzzy_candidates = (\n- _fill_in_launchable_resources(\n- task=node,\n- blocked_resources=blocked_resources,\n- quiet=quiet))\n+ (launchable_resources, cloud_candidates, fuzzy_candidates,\n+ resource_hints) = (_fill_in_launchable_resources(\n+ task=node,\n+ blocked_resources=blocked_resources,\n+ quiet=quiet))\n node_to_candidate_map[node] = cloud_candidates\n # Has to call the printing after the launchable resources are\n # computed, because the missing fields of the resources are\n@@ -390,16 +390,27 @@ def get_reservations_available_resources(\n node_resources_reprs = ', '.join(f'{node.num_nodes}x ' +\n r.repr_with_region_zone\n for r in node.resources)\n+ hints_concat = '\\n'.join([\n+ f'{bold}Resource: {repr(resource)}{reset}\\n' +\n+ '\\n'.join(hint_list)\n+ for resource, hint_list in resource_hints.items()\n+ if hint_list\n+ ])\n+ hints_formatted = '\\n'.join(\n+ map(lambda r: f' {r}', hints_concat.split('\\n')))\n+ resource_hints_string = (\n+ f'Hint 2: Check Per Resource Hint\\n{hints_formatted}'\n+ if hints_formatted else '')\n error_msg = (\n f'{source_hint.capitalize()} does not contain any '\n f'instances satisfying the request: '\n f'{node_resources_reprs}.'\n f'\\nTo fix: relax or change the '\n f'resource requirements.{fuzzy_candidates_str}\\n\\n'\n- f'Hint: {bold}sky show-gpus{reset} '\n+ f'Hint 1: {bold}sky show-gpus{reset} '\n 'to list available accelerators.\\n'\n- f' {bold}sky check{reset} to check the enabled '\n- 'clouds.')\n+ f' {bold}sky check{reset} to check the enabled '\n+ f'clouds.\\n{resource_hints_string}')\n with ux_utils.print_exception_no_traceback():\n raise exceptions.ResourcesUnavailableError(error_msg)\n return node_to_cost_map, node_to_candidate_map\n@@ -1047,7 +1058,7 @@ def ordinal_number(n):\n for resources in task.resources:\n # Check if there exists launchable resources\n local_task.set_resources(resources)\n- launchable_resources_map, _, _ = (\n+ launchable_resources_map, _, _, _ = (\n _fill_in_launchable_resources(\n task=local_task,\n blocked_resources=blocked_resources,\n@@ -1213,7 +1224,8 @@ def _fill_in_launchable_resources(\n blocked_resources: Optional[Iterable[resources_lib.Resources]],\n quiet: bool = False\n ) -> 
Tuple[Dict[resources_lib.Resources, List[resources_lib.Resources]],\n- _PerCloudCandidates, List[str]]:\n+ _PerCloudCandidates, List[str], Dict[resources_lib.Resources,\n+ List[str]]]:\n \"\"\"Fills in the launchable resources for the task.\n \n Returns:\n@@ -1235,6 +1247,8 @@ def _fill_in_launchable_resources(\n all_fuzzy_candidates = set()\n cloud_candidates: _PerCloudCandidates = collections.defaultdict(\n List[resources_lib.Resources])\n+ resource_hints: Dict[resources_lib.Resources,\n+ List[str]] = collections.defaultdict(list)\n if blocked_resources is None:\n blocked_resources = []\n for resources in task.resources:\n@@ -1259,6 +1273,7 @@ def _fill_in_launchable_resources(\n for cloud, feasible_resources in feasible_list:\n if feasible_resources.hint is not None:\n hints[cloud] = feasible_resources.hint\n+ resource_hints[resources].append(feasible_resources.hint)\n if feasible_resources.resources_list:\n # Assume feasible_resources is sorted by prices. Guaranteed by\n # the implementation of get_feasible_launchable_resources and\n@@ -1301,8 +1316,11 @@ def _fill_in_launchable_resources(\n 'to allow for larger instances.'\n f'{colorama.Style.RESET_ALL}')\n for cloud, hint in hints.items():\n- logger.info(f'{repr(cloud)}: {hint}')\n+ logger.info(f'{colorama.Fore.LIGHTBLACK_EX}'\n+ f'{repr(cloud)}: {hint}'\n+ f'{colorama.Style.RESET_ALL}')\n \n launchable[resources] = _filter_out_blocked_launchable_resources(\n launchable[resources], blocked_resources)\n- return launchable, cloud_candidates, list(sorted(all_fuzzy_candidates))\n+ return launchable, cloud_candidates, list(\n+ sorted(all_fuzzy_candidates)), resource_hints\ndiff --git a/tests/test_optimizer_dryruns.py b/tests/test_optimizer_dryruns.py\nindex f3f40352d01..2de21695bd9 100644\n--- a/tests/test_optimizer_dryruns.py\n+++ b/tests/test_optimizer_dryruns.py\n@@ -702,7 +702,7 @@ def test_optimize_disk_tier(enable_all_clouds):\n def _get_all_candidate_cloud(r: sky.Resources) -> Set[clouds.Cloud]:\n task = sky.Task()\n task.set_resources(r)\n- _, per_cloud_candidates, _ = optimizer._fill_in_launchable_resources(\n+ _, per_cloud_candidates, _, _ = optimizer._fill_in_launchable_resources(\n task, blocked_resources=None)\n return set(per_cloud_candidates.keys())\n \n"
}
|
[
{
"diff_hunk": "@@ -390,16 +390,27 @@ def get_reservations_available_resources(\n node_resources_reprs = ', '.join(f'{node.num_nodes}x ' +\n r.repr_with_region_zone\n for r in node.resources)\n+ hints_concat = '\\n'.join([\n+ f'{bold}Resource: {repr(resource)}{reset}\\n' +\n+ '\\n'.join(hint_list)\n+ for resource, hint_list in resource_hints.items()\n+ if hint_list\n+ ])\n+ hints_formatted = '\\n'.join(\n+ map(lambda r: f' {r}', hints_concat.split('\\n')))\n+ resource_hints_string = (\n+ f'Hint 2: Check Per Resource Hint\\n{hints_formatted}'\n+ if hints_formatted else '')\n error_msg = (\n f'{source_hint.capitalize()} does not contain any '\n f'instances satisfying the request: '\n f'{node_resources_reprs}.'\n f'\\nTo fix: relax or change the '\n f'resource requirements.{fuzzy_candidates_str}\\n\\n'\n- f'Hint: {bold}sky show-gpus{reset} '\n+ f'Hint 1: {bold}sky show-gpus{reset} '",
"line": null,
"original_line": 410,
"original_start_line": null,
"path": "sky/optimizer.py",
"start_line": null,
"text": "@user1:\nI don't think numbering hints is necessary, just having multiple unnumbered hints should be fine"
},
{
"diff_hunk": "@@ -390,16 +390,27 @@ def get_reservations_available_resources(\n node_resources_reprs = ', '.join(f'{node.num_nodes}x ' +\n r.repr_with_region_zone\n for r in node.resources)\n+ hints_concat = '\\n'.join([\n+ f'{bold}Resource: {repr(resource)}{reset}\\n' +\n+ '\\n'.join(hint_list)\n+ for resource, hint_list in resource_hints.items()\n+ if hint_list\n+ ])\n+ hints_formatted = '\\n'.join(\n+ map(lambda r: f' {r}', hints_concat.split('\\n')))",
"line": null,
"original_line": 400,
"original_start_line": 399,
"path": "sky/optimizer.py",
"start_line": null,
"text": "@user1:\nI'm not a fan for the hardcoded spaces here, we should try to make it more obvious why the number of spaces are the way there are. What do you think about something like\r\n```suggestion\r\n indent_prefix = ' '*len('Hint: ')\r\n hints_formatted = '\\n'.join(\r\n map(lambda r: f'{indent_prefix}{r}', hints_concat.split('\\n')))\r\n```"
}
] |
e8da8f72cf55b9f9b857166430aadd81c7f1c107
|
diff --git a/sky/clouds/cloud.py b/sky/clouds/cloud.py
index df52fbf292d..0e391113c21 100644
--- a/sky/clouds/cloud.py
+++ b/sky/clouds/cloud.py
@@ -418,13 +418,16 @@ def get_feasible_launchable_resources(
try:
self.check_features_are_supported(resources,
resources_required_features)
- except exceptions.NotSupportedError:
+ except exceptions.NotSupportedError as e:
# TODO(zhwu): The resources are now silently filtered out. We
# should have some logging telling the user why the resources
# are not considered.
+ # UPDATE(kyuds): passing in NotSupportedError reason string
+ # to hint for issue #5344. Did not remove above comment as
+ # reason is not displayed when other resources are valid.
return resources_utils.FeasibleResources(resources_list=[],
fuzzy_candidate_list=[],
- hint=None)
+ hint=str(e))
return self._get_feasible_launchable_resources(resources)
def _get_feasible_launchable_resources(
diff --git a/sky/optimizer.py b/sky/optimizer.py
index ea8d50e464e..3c2f2c320ce 100644
--- a/sky/optimizer.py
+++ b/sky/optimizer.py
@@ -277,6 +277,8 @@ def get_reservations_available_resources(
launchable_resources_list)
return num_available_reserved_nodes_per_resource
+ indent_prefix = ' ' * len('Hint: ')
+
# Compute the estimated cost/time for each node.
for node_i, node in enumerate(topo_order):
if node_i == 0:
@@ -290,11 +292,11 @@ def get_reservations_available_resources(
fuzzy_candidates: List[str] = []
if node_i < len(topo_order) - 1:
# Convert partial resource labels to launchable resources.
- launchable_resources, cloud_candidates, fuzzy_candidates = (
- _fill_in_launchable_resources(
- task=node,
- blocked_resources=blocked_resources,
- quiet=quiet))
+ (launchable_resources, cloud_candidates, fuzzy_candidates,
+ resource_hints) = (_fill_in_launchable_resources(
+ task=node,
+ blocked_resources=blocked_resources,
+ quiet=quiet))
node_to_candidate_map[node] = cloud_candidates
# Has to call the printing after the launchable resources are
# computed, because the missing fields of the resources are
@@ -390,6 +392,18 @@ def get_reservations_available_resources(
node_resources_reprs = ', '.join(f'{node.num_nodes}x ' +
r.repr_with_region_zone
for r in node.resources)
+ hints_concat = '\n'.join([
+ f'{bold}Resource: {repr(resource)}{reset}\n' +
+ '\n'.join(hint_list)
+ for resource, hint_list in resource_hints.items()
+ if hint_list
+ ])
+ hints_formatted = '\n'.join(
+ map(lambda r: f'{indent_prefix}{r}',
+ hints_concat.split('\n')))
+ resource_hints_string = (
+ f'Hint: Check Per Resource Hint\n{hints_formatted}'
+ if hints_formatted else '')
error_msg = (
f'{source_hint.capitalize()} does not contain any '
f'instances satisfying the request: '
@@ -398,8 +412,9 @@ def get_reservations_available_resources(
f'resource requirements.{fuzzy_candidates_str}\n\n'
f'Hint: {bold}sky show-gpus{reset} '
'to list available accelerators.\n'
- f' {bold}sky check{reset} to check the enabled '
- 'clouds.')
+ f'{indent_prefix}{bold}sky check{reset} to check the '
+ 'enabled clouds.\n'
+ f'{resource_hints_string}')
with ux_utils.print_exception_no_traceback():
raise exceptions.ResourcesUnavailableError(error_msg)
return node_to_cost_map, node_to_candidate_map
@@ -1047,7 +1062,7 @@ def ordinal_number(n):
for resources in task.resources:
# Check if there exists launchable resources
local_task.set_resources(resources)
- launchable_resources_map, _, _ = (
+ launchable_resources_map, _, _, _ = (
_fill_in_launchable_resources(
task=local_task,
blocked_resources=blocked_resources,
@@ -1213,7 +1228,8 @@ def _fill_in_launchable_resources(
blocked_resources: Optional[Iterable[resources_lib.Resources]],
quiet: bool = False
) -> Tuple[Dict[resources_lib.Resources, List[resources_lib.Resources]],
- _PerCloudCandidates, List[str]]:
+ _PerCloudCandidates, List[str], Dict[resources_lib.Resources,
+ List[str]]]:
"""Fills in the launchable resources for the task.
Returns:
@@ -1222,6 +1238,8 @@ def _fill_in_launchable_resources(
Resources,
Dict mapping Cloud to a list of feasible Resources (for printing),
Sorted list of fuzzy candidates (alternative GPU names).
+ Dict mapping requested Resources and a list of hints for why the
+ resource is unavailable if so.
Raises:
ResourcesUnavailableError: if all resources required by the task are on
a cloud that is not enabled.
@@ -1235,6 +1253,8 @@ def _fill_in_launchable_resources(
all_fuzzy_candidates = set()
cloud_candidates: _PerCloudCandidates = collections.defaultdict(
List[resources_lib.Resources])
+ resource_hints: Dict[resources_lib.Resources,
+ List[str]] = collections.defaultdict(list)
if blocked_resources is None:
blocked_resources = []
for resources in task.resources:
@@ -1259,6 +1279,7 @@ def _fill_in_launchable_resources(
for cloud, feasible_resources in feasible_list:
if feasible_resources.hint is not None:
hints[cloud] = feasible_resources.hint
+ resource_hints[resources].append(feasible_resources.hint)
if feasible_resources.resources_list:
# Assume feasible_resources is sorted by prices. Guaranteed by
# the implementation of get_feasible_launchable_resources and
@@ -1301,8 +1322,11 @@ def _fill_in_launchable_resources(
'to allow for larger instances.'
f'{colorama.Style.RESET_ALL}')
for cloud, hint in hints.items():
- logger.info(f'{repr(cloud)}: {hint}')
+ logger.info(f'{colorama.Fore.LIGHTBLACK_EX}'
+ f'{repr(cloud)}: {hint}'
+ f'{colorama.Style.RESET_ALL}')
launchable[resources] = _filter_out_blocked_launchable_resources(
launchable[resources], blocked_resources)
- return launchable, cloud_candidates, list(sorted(all_fuzzy_candidates))
+ return launchable, cloud_candidates, list(
+ sorted(all_fuzzy_candidates)), resource_hints
diff --git a/tests/test_optimizer_dryruns.py b/tests/test_optimizer_dryruns.py
index f3f40352d01..2de21695bd9 100644
--- a/tests/test_optimizer_dryruns.py
+++ b/tests/test_optimizer_dryruns.py
@@ -702,7 +702,7 @@ def test_optimize_disk_tier(enable_all_clouds):
def _get_all_candidate_cloud(r: sky.Resources) -> Set[clouds.Cloud]:
task = sky.Task()
task.set_resources(r)
- _, per_cloud_candidates, _ = optimizer._fill_in_launchable_resources(
+ _, per_cloud_candidates, _, _ = optimizer._fill_in_launchable_resources(
task, blocked_resources=None)
return set(per_cloud_candidates.keys())
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
}
|
skypilot-org__skypilot-5206@4201726
|
skypilot-org/skypilot
|
Python
| 5,206
|
[controller] fix kubectl installation
|
<!-- Describe the changes in this PR -->
This regressed in #4835.
Fixes #5186.
<!-- Describe the tests ran -->
<!-- Unit tests (tests/test_*.py) are part of GitHub CI; below are tests that launch on the cloud. -->
Tested (run the relevant ones):
- [ ] Code formatting: install pre-commit (auto-check on commit) or `bash format.sh`
- [ ] Any manual or new tests for this PR (please specify below)
- [ ] All smoke tests: `/smoke-test` (CI) or `pytest tests/test_smoke.py` (local)
- [ ] Relevant individual tests: `/smoke-test -k test_name` (CI) or `pytest tests/test_smoke.py::test_name` (local)
- [ ] Backward compatibility: `/quicktest-core` (CI) or `pytest tests/smoke_tests/test_backward_compat.py` (local)
<!-- CI commands (/-prefixed) can only be triggered by repo members -->
|
2025-04-14T19:10:02Z
|
[Jobs] `kubectl` not found on jobs controller
To reproduce:
1. set up a k8s cluster with nodes that do not have enough CPUs for the jobs controller
2. When running `sky jobs launch --cpus 2`
3. The controller starts on a cloud, but `kubectl` is not installed
|
https://github.com/skypilot-org/skypilot/pull/4835/files#diff-8acc88bd4d951508e5ea2f822f65dbc0bfaaed24fd6f8c18027affe8fda7ce1fL250-R257 is broken
|
[
{
"body": "To reproduce:\n1. set up a k8s cluster with nodes that does not have enough CPUs for jobs controller\n2. When running `sky jobs launch --cpus 2`\n3. The controller starts on a cloud, but `kubectl` is not installed",
"number": 5186,
"title": "[Jobs] `kubectl` not found on jobs controller"
}
] |
e536958f1e9cd2bf91e379bd8381dbb8c084c0b1
|
{
"head_commit": "4201726ac002c86c3fc2b4b420aaadd05cca0a0c",
"head_commit_message": "[controller] fix kubectl installation",
"patch_to_review": "diff --git a/sky/utils/controller_utils.py b/sky/utils/controller_utils.py\nindex 4267e4eed21..8fc3bfce344 100644\n--- a/sky/utils/controller_utils.py\n+++ b/sky/utils/controller_utils.py\n@@ -262,8 +262,10 @@ def _get_cloud_dependencies_installation_commands(\n ' ARCH=\"amd64\"; '\n 'fi && '\n '(command -v kubectl &>/dev/null || '\n- '(\"https://dl.k8s.io/release/v1.31.6/bin/linux/$ARCH/kubectl\" '\n- '&& sudo install -o root -g root -m 0755 '\n+ '(curl -s -LO \"https://dl.k8s.io/release/'\n+ '$(curl -L -s https://dl.k8s.io/release/stable.txt)'\n+ '/bin/linux/$ARCH/kubectl\" && '\n+ 'sudo install -o root -g root -m 0755 '\n 'kubectl /usr/local/bin/kubectl))')\n elif isinstance(cloud, clouds.Cudo):\n step_prefix = prefix_str.replace('<step>', str(len(commands) + 1))\n"
}
|
[
{
"diff_hunk": "@@ -262,8 +262,10 @@ def _get_cloud_dependencies_installation_commands(\n ' ARCH=\"amd64\"; '\n 'fi && '\n '(command -v kubectl &>/dev/null || '\n- '(\"https://dl.k8s.io/release/v1.31.6/bin/linux/$ARCH/kubectl\" '\n- '&& sudo install -o root -g root -m 0755 '\n+ '(curl -s -LO \"https://dl.k8s.io/release/'\n+ '$(curl -L -s https://dl.k8s.io/release/stable.txt)'",
"line": null,
"original_line": 266,
"original_start_line": null,
"path": "sky/utils/controller_utils.py",
"start_line": null,
"text": "@user1:\nShould we stick with the fixed version as our other places did?\r\n\r\nAlso, can we add a test for this?\n\n@author:\ntbh, I'm not sure how this isn't caught in our existing tests - maybe only affects cross-cloud use (e.g. AWS controller -> k8s job)\n\n@user2:\nOur k8s base image has kubectl already baked into it, so a bad kubectl install attempt won't break anything. If you used a different base image for the controller (or ran on a different cloud, like @author said), you'll run into this\n\n@author:\nI think adding a test will be more difficult. Prefer to just merge this and follow up with cross-cloud tests later."
}
] |
39f30630e8a3e4d0ca40435446800b7e90f56fae
|
diff --git a/sky/utils/controller_utils.py b/sky/utils/controller_utils.py
index 4267e4eed21..398ddeb96b4 100644
--- a/sky/utils/controller_utils.py
+++ b/sky/utils/controller_utils.py
@@ -262,8 +262,9 @@ def _get_cloud_dependencies_installation_commands(
' ARCH="amd64"; '
'fi && '
'(command -v kubectl &>/dev/null || '
- '("https://dl.k8s.io/release/v1.31.6/bin/linux/$ARCH/kubectl" '
- '&& sudo install -o root -g root -m 0755 '
+ '(curl -s -LO "https://dl.k8s.io/release/v1.31.6'
+ '/bin/linux/$ARCH/kubectl" && '
+ 'sudo install -o root -g root -m 0755 '
'kubectl /usr/local/bin/kubectl))')
elif isinstance(cloud, clouds.Cudo):
step_prefix = prefix_str.replace('<step>', str(len(commands) + 1))
|
{
"difficulty": "low",
"estimated_review_effort": 2,
"problem_domain": "Bug Fixes"
}
|
skypilot-org__skypilot-5273@b32e9f7
|
skypilot-org/skypilot
|
Python
| 5,273
|
[k8s] Hints for querying stale kube current context
|
<!-- Describe the changes in this PR -->
Fixes #5258

<!-- Describe the tests ran -->
<!-- Unit tests (tests/test_*.py) are part of GitHub CI; below are tests that launch on the cloud. -->
Tested (run the relevant ones):
- [X] Code formatting: install pre-commit (auto-check on commit) or `bash format.sh`
- [X] Any manual or new tests for this PR (please specify below)
- [ ] All smoke tests: `/smoke-test` (CI) or `pytest tests/test_smoke.py` (local)
- [ ] Relevant individual tests: `/smoke-test -k test_name` (CI) or `pytest tests/test_smoke.py::test_name` (local)
- [ ] Backward compatibility: `/quicktest-core` (CI) or `pytest tests/smoke_tests/test_backward_compat.py` (local)
<!-- CI commands (/-prefixed) can only be triggered by repo members -->
|
2025-04-18T01:07:23Z
|
[k8s] Confusing Error for Stale Current-Context in KubeConfig File
```
E 04-17 13:02:39 sdk.py:1496] === Traceback on SkyPilot API Server ===
...
E 04-17 13:02:39 sdk.py:1496] _load_config(context)
E 04-17 13:02:39 sdk.py:1496] File "/Users/kyuds/dev/skypilot/sky/adaptors/kubernetes.py", line 102, in _load_config
E 04-17 13:02:39 sdk.py:1496] _load_config_from_kubeconfig()
E 04-17 13:02:39 sdk.py:1496] File "/Users/kyuds/dev/skypilot/sky/adaptors/kubernetes.py", line 89, in _load_config_from_kubeconfig
E 04-17 13:02:39 sdk.py:1496] raise ValueError(err_str) from None
E 04-17 13:02:39 sdk.py:1496] ValueError: Failed to load Kubernetes configuration for None. Please check if your kubeconfig file exists at ~/.kube/config and is valid.
E 04-17 13:02:39 sdk.py:1496] [kubernetes.config.config_exception.ConfigException] Invalid kube-config file. Expected object with name kyuds@<redacted> in /Users/kyuds/.kube/config/contexts list
E 04-17 13:02:39 sdk.py:1496] To disable Kubernetes for SkyPilot: run `sky check`.
E 04-17 13:02:39 sdk.py:1496]
D 04-17 13:02:39 sdk.py:82] To stream request logs: sky api logs <redacted>
ValueError: Failed to load Kubernetes configuration for None. Please check if your kubeconfig file exists at ~/.kube/config and is valid.
[kubernetes.config.config_exception.ConfigException] Invalid kube-config file. Expected object with name kyuds@<redacted> in /Users/kyuds/.kube/config/contexts list
To disable Kubernetes for SkyPilot: run `sky check`.
```
when we remove the context that `current-context` points to from the kubeconfig using `kubectl`, the `current-context` field still records the now-deleted context. If the kubeconfig file contains another context and we run `sky jobs launch`, then SkyPilot will automatically try to check the "current-context" (now non-existent) first, hence causing this error.
While this is more of a problem with `kubectl` than skypilot, we should nevertheless give users a hint to check their current-context value.
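For illustration only (not part of the issue report or the fix), here is a minimal Python sketch of the kind of check the hint points at: it compares the kubeconfig's `current-context` against the names in its `contexts` list. The helper name and standalone usage are assumptions made for this sketch; SkyPilot's actual handling lives in `sky/adaptors/kubernetes.py`.
```python
"""Hypothetical helper: detect a stale `current-context` in a kubeconfig."""
import os
from typing import Optional

import yaml  # assumes PyYAML is installed


def find_stale_current_context(
        kubeconfig_path: str = '~/.kube/config') -> Optional[str]:
    """Return the stale current-context name, or None if it is valid/unset.

    Only handles a single kubeconfig file (a KUBECONFIG with multiple
    colon-separated paths is out of scope for this sketch).
    """
    path = os.path.expanduser(os.environ.get('KUBECONFIG', kubeconfig_path))
    if not os.path.exists(path):
        return None
    with open(path, 'r', encoding='utf-8') as f:
        config = yaml.safe_load(f) or {}
    current = config.get('current-context')
    context_names = {c.get('name') for c in (config.get('contexts') or [])}
    if current and current not in context_names:
        return current
    return None


if __name__ == '__main__':
    stale = find_stale_current_context()
    if stale is not None:
        print(f'Hint: current-context {stale!r} is not in the contexts list; '
              'run `kubectl config use-context <existing-context>` or edit '
              'the kubeconfig.')
```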
|
[
{
"body": "```\nE 04-17 13:02:39 sdk.py:1496] === Traceback on SkyPilot API Server ===\n...\nE 04-17 13:02:39 sdk.py:1496] _load_config(context)\nE 04-17 13:02:39 sdk.py:1496] File \"/Users/kyuds/dev/skypilot/sky/adaptors/kubernetes.py\", line 102, in _load_config\nE 04-17 13:02:39 sdk.py:1496] _load_config_from_kubeconfig()\nE 04-17 13:02:39 sdk.py:1496] File \"/Users/kyuds/dev/skypilot/sky/adaptors/kubernetes.py\", line 89, in _load_config_from_kubeconfig\nE 04-17 13:02:39 sdk.py:1496] raise ValueError(err_str) from None\nE 04-17 13:02:39 sdk.py:1496] ValueError: Failed to load Kubernetes configuration for None. Please check if your kubeconfig file exists at ~/.kube/config and is valid.\nE 04-17 13:02:39 sdk.py:1496] [kubernetes.config.config_exception.ConfigException] Invalid kube-config file. Expected object with name kyuds@<redacted> in /Users/kyuds/.kube/config/contexts list\nE 04-17 13:02:39 sdk.py:1496] To disable Kubernetes for SkyPilot: run `sky check`.\nE 04-17 13:02:39 sdk.py:1496]\nD 04-17 13:02:39 sdk.py:82] To stream request logs: sky api logs <redacted>\nValueError: Failed to load Kubernetes configuration for None. Please check if your kubeconfig file exists at ~/.kube/config and is valid.\n[kubernetes.config.config_exception.ConfigException] Invalid kube-config file. Expected object with name kyuds@<redacted> in /Users/kyuds/.kube/config/contexts list\nTo disable Kubernetes for SkyPilot: run `sky check`.\n```\nwhen we remove the current-context from kubeconfig using `kubectl`, the current-context still records the now deleted context. If the kubeconfig file contains another context and we run `sky jobs launch`, then skypilot will automatically try to check the \"current-context\" (now non-existent\" first hence causing this error.\n\nWhile this is more of a problem with `kubectl` than skypilot, we should nevertheless give users a hint to check their current-context value. ",
"number": 5258,
"title": "[k8s] Confusing Error for Stale Current-Context in KubeConfig File"
}
] |
b55219e02a689021f3dee72bcc3c993fbc4c124a
|
{
"head_commit": "b32e9f7a0116ec239b85d5eb2b02632c62c099f0",
"head_commit_message": "styling",
"patch_to_review": "diff --git a/sky/adaptors/kubernetes.py b/sky/adaptors/kubernetes.py\nindex 1e8607870c3..e8a098c166b 100644\n--- a/sky/adaptors/kubernetes.py\n+++ b/sky/adaptors/kubernetes.py\n@@ -3,6 +3,8 @@\n import os\n from typing import Any, Callable, Optional, Set\n \n+import colorama\n+\n from sky.adaptors import common\n from sky.sky_logging import set_logging_level\n from sky.utils import annotations\n@@ -85,6 +87,11 @@ def _load_config_from_kubeconfig(context: Optional[str] = None):\n 'Please check if your kubeconfig file exists at '\n f'{kubeconfig_path} and is valid.\\n{suffix}')\n err_str += '\\nTo disable Kubernetes for SkyPilot: run `sky check`.'\n+ if context is None: # kubernetes defaults to current-context.\n+ err_str += (\n+ f'\\n{colorama.Fore.YELLOW}Hint: Kubernetes attempted '\n+ 'to query the current-context set in kubeconfig. Check if '\n+ f'the current-context is valid.{colorama.Style.RESET_ALL}')\n with ux_utils.print_exception_no_traceback():\n raise ValueError(err_str) from None\n \n"
}
|
[
{
"diff_hunk": "@@ -85,6 +87,11 @@ def _load_config_from_kubeconfig(context: Optional[str] = None):\n 'Please check if your kubeconfig file exists at '\n f'{kubeconfig_path} and is valid.\\n{suffix}')\n err_str += '\\nTo disable Kubernetes for SkyPilot: run `sky check`.'\n+ if context is None: # kubernetes defaults to current-context.\n+ err_str += (\n+ f'\\n{colorama.Fore.YELLOW}Hint: Kubernetes attempted '",
"line": null,
"original_line": 92,
"original_start_line": null,
"path": "sky/adaptors/kubernetes.py",
"start_line": null,
"text": "@user1:\nBest practice is to have simple error messages in strs without any ascii codes. Remove colorama from here and the import?\n\n@author:\ndone"
}
] |
1346e7766c5bf796d3e99ca5994e970f9a50da38
|
diff --git a/sky/adaptors/kubernetes.py b/sky/adaptors/kubernetes.py
index 1e8607870c3..27343bc15c2 100644
--- a/sky/adaptors/kubernetes.py
+++ b/sky/adaptors/kubernetes.py
@@ -70,21 +70,26 @@ def _load_config_from_kubeconfig(context: Optional[str] = None):
kubernetes.config.load_kube_config(context=context)
except kubernetes.config.config_exception.ConfigException as e:
suffix = common_utils.format_exception(e, use_bracket=True)
+ context_name = '(current-context)' if context is None else context
# Check if exception was due to no current-context
if 'Expected key current-context' in str(e):
- err_str = (
- f'Failed to load Kubernetes configuration for {context!r}. '
- 'Kubeconfig does not contain any valid context(s).'
- f'\n{suffix}\n'
- ' If you were running a local Kubernetes '
- 'cluster, run `sky local up` to start the cluster.')
+ err_str = ('Failed to load Kubernetes configuration for '
+ f'{context_name!r}. '
+ 'Kubeconfig does not contain any valid context(s).'
+ f'\n{suffix}\n'
+ ' If you were running a local Kubernetes '
+ 'cluster, run `sky local up` to start the cluster.')
else:
kubeconfig_path = os.environ.get('KUBECONFIG', '~/.kube/config')
err_str = (
- f'Failed to load Kubernetes configuration for {context!r}. '
- 'Please check if your kubeconfig file exists at '
- f'{kubeconfig_path} and is valid.\n{suffix}')
+ f'Failed to load Kubernetes configuration for '
+ f'{context_name!r}. Please check if your kubeconfig file '
+ f'exists at {kubeconfig_path} and is valid.\n{suffix}')
err_str += '\nTo disable Kubernetes for SkyPilot: run `sky check`.'
+ if context is None: # kubernetes defaults to current-context.
+ err_str += (
+ '\nHint: Kubernetes attempted to query the current-context '
+ 'set in kubeconfig. Check if the current-context is valid.')
with ux_utils.print_exception_no_traceback():
raise ValueError(err_str) from None
|
{
"difficulty": "low",
"estimated_review_effort": 2,
"problem_domain": "Bug Fixes"
}
|
|
skypilot-org__skypilot-5835@8599d6b
|
skypilot-org/skypilot
|
Python
| 5,835
|
[ManagedJobs] Support failure recovery for HA controller
|
<!-- Describe the changes in this PR -->
Support recovery logic for managed jobs.
Fixes #3970
<!-- Describe the tests ran -->
<!-- Unit tests (tests/test_*.py) are part of GitHub CI; below are tests that launch on the cloud. -->
Tested (run the relevant ones):
- [ ] Code formatting: install pre-commit (auto-check on commit) or `bash format.sh`
- [ ] Any manual or new tests for this PR (please specify below)
- [ ] All smoke tests: `/smoke-test` (CI) or `pytest tests/test_smoke.py` (local)
- [ ] Relevant individual tests: `/smoke-test -k test_name` (CI) or `pytest tests/test_smoke.py::test_name` (local)
- [ ] Backward compatibility: `/quicktest-core` (CI) or `pytest tests/smoke_tests/test_backward_compat.py` (local)
<!-- CI commands (/-prefixed) can only be triggered by repo members -->
|
2025-06-02T21:51:58Z
|
How to make the Controller highly available?
I have a query related to making the controller highly available for production deployment to remove the single point of failure. I have checked the doc but didn't find anything related. Is there any guide available for this?
|
This issue is stale because it has been open 120 days with no activity. Remove stale label or comment or this will be closed in 10 days.
This issue was closed because it has been stalled for 10 days with no activity.
Reopening for jobs controller. While HA is addressed for Serve controller in #4564, jobs controller state is still stored on the controller VM/pod. If the pod/VM crashes, all jobs state is lost.
|
[
{
"body": "I have a query related to making the controller highly available for production deployment to remove the single point of failure. I have checked the doc but didn't find anything related. Is there any guide available for this?",
"number": 3970,
"title": "How to make the Controller highly available?"
}
] |
aeba5fdd1b113ba9118bdb629e9faf393a79ddef
|
{
"head_commit": "8599d6b0bb358ca0a023409e3c21c178ca1e60c8",
"head_commit_message": "add file",
"patch_to_review": "diff --git a/charts/skypilot/values.yaml b/charts/skypilot/values.yaml\nindex 21c7438d801..b2c50b02ad5 100644\n--- a/charts/skypilot/values.yaml\n+++ b/charts/skypilot/values.yaml\n@@ -196,6 +196,13 @@ rbac:\n - apiGroups: [ \"\" ]\n resources: [ \"configmaps\" ]\n verbs: [ \"get\", \"patch\" ]\n+ # Required for high-availability controller\n+ - apiGroups: [\"apps\"]\n+ resources: [\"deployments\", \"deployments/status\"]\n+ verbs: [\"*\"]\n+ - apiGroups: [\"\"]\n+ resources: [\"persistentvolumeclaims\"]\n+ verbs: [\"*\"]\n # Cluster-scoped rules for API server.\n clusterRules:\n # Required for getting node resources.\ndiff --git a/sky/clouds/kubernetes.py b/sky/clouds/kubernetes.py\nindex 5fe96e5478f..d4510280952 100644\n--- a/sky/clouds/kubernetes.py\n+++ b/sky/clouds/kubernetes.py\n@@ -618,6 +618,8 @@ def _get_image_id(resources: 'resources_lib.Resources') -> str:\n (constants.PERSISTENT_SETUP_SCRIPT_PATH),\n 'k8s_high_availability_deployment_run_script_dir':\n (constants.PERSISTENT_RUN_SCRIPT_DIR),\n+ 'k8s_high_availability_restarting_signal_file':\n+ (constants.PERSISTENT_RUN_RESTARTING_SIGNAL_FILE),\n 'k8s_high_availability_storage_class_name':\n (k8s_ha_storage_class_name),\n 'avoid_label_keys': avoid_label_keys,\ndiff --git a/sky/execution.py b/sky/execution.py\nindex b0e795a7c34..2b3659c8210 100644\n--- a/sky/execution.py\n+++ b/sky/execution.py\n@@ -263,8 +263,7 @@ def _execute_dag(\n if controller is not None:\n requested_features.add(\n clouds.CloudImplementationFeatures.HOST_CONTROLLERS)\n- if controller_utils.high_availability_specified(cluster_name,\n- skip_warning=False):\n+ if controller_utils.high_availability_specified(cluster_name):\n requested_features.add(clouds.CloudImplementationFeatures.\n HIGH_AVAILABILITY_CONTROLLERS)\n # If we provision a cluster that supports high availability\ndiff --git a/sky/jobs/controller.py b/sky/jobs/controller.py\nindex 7204fd4c516..8a2e8fdaeab 100644\n--- a/sky/jobs/controller.py\n+++ b/sky/jobs/controller.py\n@@ -152,6 +152,20 @@ def _run_one_task(self, task_id: int, task: 'sky.Task') -> bool:\n Other exceptions may be raised depending on the backend.\n \"\"\"\n \n+ latest_task_id, status = managed_job_state.get_latest_task_id_status(\n+ self._job_id)\n+ is_recovery = False\n+ if (latest_task_id is not None and\n+ status != managed_job_state.ManagedJobStatus.PENDING):\n+ assert latest_task_id >= task_id, (latest_task_id, task_id)\n+ if latest_task_id > task_id:\n+ logger.info(f'Task {task_id} ({task.name}) has already '\n+ 'been executed. 
Skipping...')\n+ return True\n+ if latest_task_id == task_id:\n+ # Start recovery.\n+ is_recovery = True\n+\n callback_func = managed_job_utils.event_callback_func(\n job_id=self._job_id, task_id=task_id, task=task)\n if task.run is None:\n@@ -171,42 +185,48 @@ def _run_one_task(self, task_id: int, task: 'sky.Task') -> bool:\n return True\n usage_lib.messages.usage.update_task_id(task_id)\n task_id_env_var = task.envs[constants.TASK_ID_ENV_VAR]\n- submitted_at = time.time()\n- if task_id == 0:\n- submitted_at = backend_utils.get_timestamp_from_run_timestamp(\n- self._backend.run_timestamp)\n assert task.name is not None, task\n cluster_name = managed_job_utils.generate_managed_job_cluster_name(\n task.name, self._job_id)\n self._strategy_executor = recovery_strategy.StrategyExecutor.make(\n cluster_name, self._backend, task, self._job_id, task_id)\n- managed_job_state.set_starting(\n- self._job_id,\n- task_id,\n- self._backend.run_timestamp,\n- submitted_at,\n- resources_str=backend_utils.get_task_resources_str(\n- task, is_managed_job=True),\n- specs={\n- 'max_restarts_on_errors':\n- self._strategy_executor.max_restarts_on_errors\n- },\n- callback_func=callback_func)\n- logger.info(\n- f'Submitted managed job {self._job_id} (task: {task_id}, name: '\n- f'{task.name!r}); {constants.TASK_ID_ENV_VAR}: {task_id_env_var}')\n-\n- logger.info('Started monitoring.')\n-\n- remote_job_submitted_at = self._strategy_executor.launch()\n- assert remote_job_submitted_at is not None, remote_job_submitted_at\n+ if not is_recovery:\n+ submitted_at = time.time()\n+ if task_id == 0:\n+ submitted_at = backend_utils.get_timestamp_from_run_timestamp(\n+ self._backend.run_timestamp)\n+ managed_job_state.set_starting(\n+ self._job_id,\n+ task_id,\n+ self._backend.run_timestamp,\n+ submitted_at,\n+ resources_str=backend_utils.get_task_resources_str(\n+ task, is_managed_job=True),\n+ specs={\n+ 'max_restarts_on_errors':\n+ self._strategy_executor.max_restarts_on_errors\n+ },\n+ callback_func=callback_func)\n+ logger.info(f'Submitted managed job {self._job_id} '\n+ f'(task: {task_id}, name: {task.name!r}); '\n+ f'{constants.TASK_ID_ENV_VAR}: {task_id_env_var}')\n+\n+ logger.info('Started monitoring.')\n+\n+ remote_job_submitted_at = self._strategy_executor.launch()\n+ assert remote_job_submitted_at is not None, remote_job_submitted_at\n \n- managed_job_state.set_started(job_id=self._job_id,\n- task_id=task_id,\n- start_time=remote_job_submitted_at,\n- callback_func=callback_func)\n+ managed_job_state.set_started(job_id=self._job_id,\n+ task_id=task_id,\n+ start_time=remote_job_submitted_at,\n+ callback_func=callback_func)\n \n while True:\n+ if is_recovery:\n+ last_status = managed_job_state.get_job_status_with_task_id(\n+ job_id=self._job_id, task_id=task_id)\n+ if last_status is not None and last_status.is_terminal():\n+ return True\n time.sleep(managed_job_utils.JOB_STATUS_CHECK_GAP_SECONDS)\n \n # Check the network connection to avoid false alarm for job failure.\n@@ -221,8 +241,12 @@ def _run_one_task(self, task_id: int, task: 'sky.Task') -> bool:\n \n # NOTE: we do not check cluster status first because race condition\n # can occur, i.e. cluster can be down during the job status check.\n- job_status = managed_job_utils.get_job_status(\n- self._backend, cluster_name)\n+ try:\n+ job_status = managed_job_utils.get_job_status(\n+ self._backend, cluster_name)\n+ except (exceptions.FetchClusterInfoError, exceptions.CommandError):\n+ logger.info('Failed to fetch the job status. 
Start recovery...')\n+ job_status = None\n \n if job_status == job_lib.JobStatus.SUCCEEDED:\n success_end_time = managed_job_utils.try_to_get_job_end_time(\ndiff --git a/sky/jobs/scheduler.py b/sky/jobs/scheduler.py\nindex 617c5b790b3..199c4348bf3 100644\n--- a/sky/jobs/scheduler.py\n+++ b/sky/jobs/scheduler.py\n@@ -84,6 +84,32 @@ def _get_lock_path() -> str:\n return path\n \n \n+def _start_controller(job_id: int, dag_yaml_path: str,\n+ env_file_path: str) -> None:\n+ activate_python_env_cmd = (f'{constants.ACTIVATE_SKY_REMOTE_PYTHON_ENV};')\n+ source_environment_cmd = (f'source {env_file_path};'\n+ if env_file_path else '')\n+ run_controller_cmd = ('python -u -m sky.jobs.controller '\n+ f'{dag_yaml_path} --job-id {job_id};')\n+\n+ # If the command line here is changed, please also update\n+ # utils._controller_process_alive. `--job-id X` should be at\n+ # the end.\n+ run_cmd = (f'{activate_python_env_cmd}'\n+ f'{source_environment_cmd}'\n+ f'{run_controller_cmd}')\n+\n+ logs_dir = os.path.expanduser(\n+ managed_job_constants.JOBS_CONTROLLER_LOGS_DIR)\n+ os.makedirs(logs_dir, exist_ok=True)\n+ log_path = os.path.join(logs_dir, f'{job_id}.log')\n+\n+ pid = subprocess_utils.launch_new_process_tree(run_cmd, log_output=log_path)\n+ state.set_job_controller_pid(job_id, pid)\n+\n+ logger.debug(f'Job {job_id} started with pid {pid}')\n+\n+\n def maybe_schedule_next_jobs() -> None:\n \"\"\"Determine if any managed jobs can be scheduled, and if so, schedule them.\n \n@@ -158,32 +184,9 @@ def maybe_schedule_next_jobs() -> None:\n \n job_id = maybe_next_job['job_id']\n dag_yaml_path = maybe_next_job['dag_yaml_path']\n+ env_file_path = maybe_next_job['env_file_path']\n \n- activate_python_env_cmd = (\n- f'{constants.ACTIVATE_SKY_REMOTE_PYTHON_ENV};')\n- env_file = maybe_next_job['env_file_path']\n- source_environment_cmd = (f'source {env_file};'\n- if env_file else '')\n- run_controller_cmd = ('python -u -m sky.jobs.controller '\n- f'{dag_yaml_path} --job-id {job_id};')\n-\n- # If the command line here is changed, please also update\n- # utils._controller_process_alive. `--job-id X` should be at\n- # the end.\n- run_cmd = (f'{activate_python_env_cmd}'\n- f'{source_environment_cmd}'\n- f'{run_controller_cmd}')\n-\n- logs_dir = os.path.expanduser(\n- managed_job_constants.JOBS_CONTROLLER_LOGS_DIR)\n- os.makedirs(logs_dir, exist_ok=True)\n- log_path = os.path.join(logs_dir, f'{job_id}.log')\n-\n- pid = subprocess_utils.launch_new_process_tree(\n- run_cmd, log_output=log_path)\n- state.set_job_controller_pid(job_id, pid)\n-\n- logger.debug(f'Job {job_id} started with pid {pid}')\n+ _start_controller(job_id, dag_yaml_path, env_file_path)\n \n except filelock.Timeout:\n # If we can't get the lock, just exit. The process holding the lock\n@@ -203,9 +206,14 @@ def submit_job(job_id: int, dag_yaml_path: str, env_file_path: str,\n The user hash should be set (e.g. 
via SKYPILOT_USER_ID) before calling this.\n \"\"\"\n with filelock.FileLock(_get_lock_path()):\n- state.scheduler_set_waiting(job_id, dag_yaml_path, env_file_path,\n- common_utils.get_user_hash(), priority)\n- maybe_schedule_next_jobs()\n+ is_recovery = state.scheduler_set_waiting(job_id, dag_yaml_path,\n+ env_file_path,\n+ common_utils.get_user_hash(),\n+ priority)\n+ if is_recovery:\n+ _start_controller(job_id, dag_yaml_path, env_file_path)\n+ else:\n+ maybe_schedule_next_jobs()\n \n \n @contextlib.contextmanager\ndiff --git a/sky/jobs/state.py b/sky/jobs/state.py\nindex 2a030d93bd0..da4b4e79764 100644\n--- a/sky/jobs/state.py\n+++ b/sky/jobs/state.py\n@@ -573,6 +573,9 @@ def set_started(job_id: int, task_id: int, start_time: float,\n def set_recovering(job_id: int, task_id: int, callback_func: CallbackType):\n \"\"\"Set the task to recovering state, and update the job duration.\"\"\"\n logger.info('=== Recovering... ===')\n+ # Originally, we force the status to be RUNNING before setting to RECOVERING\n+ # After adding the HA job controller, it is possible that the jobs came from\n+ # any status to recovering. So we skip the status check here.\n with db_utils.safe_cursor(_DB_PATH) as cursor:\n cursor.execute(\n \"\"\"\\\n@@ -580,10 +583,8 @@ def set_recovering(job_id: int, task_id: int, callback_func: CallbackType):\n status=(?), job_duration=job_duration+(?)-last_recovered_at\n WHERE spot_job_id=(?) AND\n task_id=(?) AND\n- status=(?) AND\n end_at IS null\"\"\",\n- (ManagedJobStatus.RECOVERING.value, time.time(), job_id, task_id,\n- ManagedJobStatus.RUNNING.value))\n+ (ManagedJobStatus.RECOVERING.value, time.time(), job_id, task_id))\n if cursor.rowcount != 1:\n raise exceptions.ManagedJobStatusError(\n f'Failed to set the task to recovering. '\n@@ -935,6 +936,17 @@ def _get_all_task_ids_statuses(\n return [(row[0], ManagedJobStatus(row[1])) for row in id_statuses]\n \n \n+def get_job_status_with_task_id(job_id: int,\n+ task_id: int) -> Optional[ManagedJobStatus]:\n+ with db_utils.safe_cursor(_DB_PATH) as cursor:\n+ status = cursor.execute(\n+ \"\"\"\\\n+ SELECT status FROM spot\n+ WHERE spot_job_id=(?) AND task_id=(?)\"\"\",\n+ (job_id, task_id)).fetchone()\n+ return ManagedJobStatus(status[0]) if status else None\n+\n+\n def get_num_tasks(job_id: int) -> int:\n return len(_get_all_task_ids_statuses(job_id))\n \n@@ -1084,8 +1096,15 @@ def get_local_log_file(job_id: int, task_id: Optional[int]) -> Optional[str]:\n \n \n def scheduler_set_waiting(job_id: int, dag_yaml_path: str, env_file_path: str,\n- user_hash: str, priority: int) -> None:\n- \"\"\"Do not call without holding the scheduler lock.\"\"\"\n+ user_hash: str, priority: int) -> bool:\n+ \"\"\"Do not call without holding the scheduler lock.\n+\n+ Returns: Whether this is a recovery run or not.\n+ If this is a recovery run, the job may already be in the WAITING state\n+ and the update will not change the schedule_state (hence the\n+ updated_count will be 0). 
In this case, we return True.\n+ Otherwise, we return False.\n+ \"\"\"\n with db_utils.safe_cursor(_DB_PATH) as cursor:\n updated_count = cursor.execute(\n 'UPDATE job_info SET '\n@@ -1095,7 +1114,9 @@ def scheduler_set_waiting(job_id: int, dag_yaml_path: str, env_file_path: str,\n (ManagedJobScheduleState.WAITING.value, dag_yaml_path,\n env_file_path, user_hash, priority, job_id,\n ManagedJobScheduleState.INACTIVE.value)).rowcount\n- assert updated_count == 1, (job_id, updated_count)\n+ # For a recovery run, the job may already be in the WAITING state.\n+ assert updated_count <= 1, (job_id, updated_count)\n+ return updated_count == 0\n \n \n def scheduler_set_launching(job_id: int,\ndiff --git a/sky/jobs/utils.py b/sky/jobs/utils.py\nindex 375217f0503..3cb232e27ac 100644\n--- a/sky/jobs/utils.py\n+++ b/sky/jobs/utils.py\n@@ -176,6 +176,17 @@ def update_managed_jobs_statuses(job_id: Optional[int] = None):\n Note: we expect that job_id, if provided, refers to a nonterminal job or a\n job that has not completed its cleanup (schedule state not DONE).\n \"\"\"\n+ # This signal file suggests that the controller is recovering from a\n+ # failure. See sky/templates/kubernetes-ray.yml.j2 for more details.\n+ # When restarting the controller processes, we don't want this event to\n+ # set the job status to FAILED_CONTROLLER.\n+ # TODO(tian): Change this to restart the controller process. For now we\n+ # disabled it when recovering because we want to avoid caveats of infinite\n+ # restart of last controller process that fully occupied the controller VM.\n+ if os.path.exists(\n+ os.path.expanduser(\n+ constants.PERSISTENT_RUN_RESTARTING_SIGNAL_FILE)):\n+ return\n \n def _cleanup_job_clusters(job_id: int) -> Optional[str]:\n \"\"\"Clean up clusters for a job. Returns error message if any.\ndiff --git a/sky/skylet/constants.py b/sky/skylet/constants.py\nindex 38b7a2898f3..7b7a1293e29 100644\n--- a/sky/skylet/constants.py\n+++ b/sky/skylet/constants.py\n@@ -396,6 +396,10 @@\n # persistent through PVC. See kubernetes-ray.yml.j2.\n PERSISTENT_SETUP_SCRIPT_PATH = '~/.sky/.controller_recovery_setup_commands.sh'\n PERSISTENT_RUN_SCRIPT_DIR = '~/.sky/.controller_recovery_task_run'\n+# Signal file to indicate that the controller is recovering from a failure.\n+# See sky/jobs/utils.py::update_managed_jobs_statuses for more details.\n+PERSISTENT_RUN_RESTARTING_SIGNAL_FILE = (\n+ '~/.sky/.controller_recovery_restarting_signal')\n \n # The placeholder for the local skypilot config path in file mounts for\n # controllers.\ndiff --git a/sky/templates/kubernetes-ray.yml.j2 b/sky/templates/kubernetes-ray.yml.j2\nindex 70fa6db2c30..06049ce94a1 100644\n--- a/sky/templates/kubernetes-ray.yml.j2\n+++ b/sky/templates/kubernetes-ray.yml.j2\n@@ -639,12 +639,14 @@ available_node_types:\n /bin/bash --login -c \"true && export OMP_NUM_THREADS=1 PYTHONWARNINGS='ignore' && {{k8s_high_availability_deployment_setup_script_path}} > /tmp/controller_recovery_setup_commands.log 2>&1\"\n echo \"=== Controller setup commands completed for recovery ===\"\n \n+ touch {{k8s_high_availability_restarting_signal_file}}\n for file in {{k8s_high_availability_deployment_run_script_dir}}/*; do\n # ! 
Keep this aligned with `CloudVmRayBackend._execute()`\n chmod +x $file\n /bin/bash --login -c \"true && export OMP_NUM_THREADS=1 PYTHONWARNINGS='ignore' && $file > /tmp/task_run_$(basename $file).log 2>&1\"\n echo \"=== Controller task run for service (file: $file) completed for recovery ===\"\n done\n+ rm {{k8s_high_availability_restarting_signal_file}}\n fi\n \n touch {{k8s_high_availability_deployment_volume_mount_path}}/k8s_container_ready\ndiff --git a/sky/utils/controller_utils.py b/sky/utils/controller_utils.py\nindex 50db197ae64..92c61686d4d 100644\n--- a/sky/utils/controller_utils.py\n+++ b/sky/utils/controller_utils.py\n@@ -206,8 +206,7 @@ def from_type(cls, controller_type: str) -> Optional['Controllers']:\n return None\n \n \n-def high_availability_specified(cluster_name: Optional[str],\n- skip_warning: bool = True) -> bool:\n+def high_availability_specified(cluster_name: Optional[str]) -> bool:\n \"\"\"Check if the controller high availability is specified in user config.\n \"\"\"\n controller = Controllers.from_name(cluster_name)\n@@ -215,18 +214,9 @@ def high_availability_specified(cluster_name: Optional[str],\n return False\n \n if skypilot_config.loaded():\n- high_availability = skypilot_config.get_nested(\n- (controller.value.controller_type, 'controller',\n- 'high_availability'), False)\n- if high_availability:\n- if controller.value.controller_type != 'serve':\n- if not skip_warning:\n- print(f'{colorama.Fore.RED}High availability controller is'\n- 'only supported for SkyServe controller. It cannot'\n- f'be enabled for {controller.value.name}.'\n- f'Skipping this flag.{colorama.Style.RESET_ALL}')\n- else:\n- return True\n+ return skypilot_config.get_nested((controller.value.controller_type,\n+ 'controller', 'high_availability'),\n+ False)\n return False\n \n \ndiff --git a/tests/smoke_tests/smoke_tests_utils.py b/tests/smoke_tests/smoke_tests_utils.py\nindex 4efe72e8aa8..0d911fb18fb 100644\n--- a/tests/smoke_tests/smoke_tests_utils.py\n+++ b/tests/smoke_tests/smoke_tests_utils.py\n@@ -783,3 +783,27 @@ def get_response_from_request_id(request_id: str) -> Any:\n return request_task.get_return_value()\n raise RuntimeError(f'Failed to get request {request_id}: '\n f'{response.status_code} {response.text}')\n+\n+\n+def _get_controller_pod_name(controller_name: str) -> str:\n+ return (\n+ 'kubectl get pods -l app -o custom-columns=NAME:.metadata.name,'\n+ 'APP:.metadata.labels.app --no-headers | '\n+ f'awk \\'$2 ~ /sky-{controller_name}-controller/ {{print $1; exit}}\\'')\n+\n+\n+def kill_and_wait_controller(controller_name: str) -> str:\n+ \"\"\"Kill the controller pod and wait for a new one to be ready.\"\"\"\n+ assert controller_name in ['serve', 'jobs'\n+ ], (f'Invalid controller name: {controller_name}')\n+ return (\n+ f'initial_controller_pod=$({_get_controller_pod_name(controller_name)}); '\n+ f'echo \"Killing {controller_name} controller pod: $initial_controller_pod\"; '\n+ 'kubectl delete pod $initial_controller_pod; '\n+ f'until new_controller_pod=$({_get_controller_pod_name(controller_name)}) && '\n+ '[ \"$new_controller_pod\" != \"$initial_controller_pod\" ] && '\n+ 'kubectl get pod $new_controller_pod | grep \"1/1\"; do '\n+ f' echo \"Waiting for new {controller_name} controller pod...\"; sleep 5; '\n+ 'done; '\n+ f'echo \"New {controller_name} controller pod ready: $new_controller_pod\"'\n+ )\ndiff --git a/tests/smoke_tests/test_managed_job.py b/tests/smoke_tests/test_managed_job.py\nindex 02ac94e3360..7daeadf5066 100644\n--- 
a/tests/smoke_tests/test_managed_job.py\n+++ b/tests/smoke_tests/test_managed_job.py\n@@ -1163,3 +1163,46 @@ def test_managed_jobs_logs_sync_down(generic_cloud: str):\n timeout=20 * 60,\n )\n smoke_tests_utils.run_one_test(test)\n+\n+\n+def _get_ha_kill_test(name: str, generic_cloud: str,\n+ status: sky.ManagedJobStatus) -> smoke_tests_utils.Test:\n+ return smoke_tests_utils.Test(\n+ f'test-managed-jobs-ha-kill-{status.value.lower()}',\n+ [\n+ f'sky jobs launch -n {name} --infra {generic_cloud} '\n+ f'{smoke_tests_utils.LOW_RESOURCE_ARG} -y examples/managed_job.yaml -d',\n+ smoke_tests_utils.\n+ get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n+ job_name=f'{name}', job_status=[status], timeout=95),\n+ smoke_tests_utils.kill_and_wait_controller('jobs'),\n+ smoke_tests_utils.\n+ get_cmd_wait_until_managed_job_status_contains_matching_job_name(\n+ job_name=f'{name}',\n+ job_status=[sky.ManagedJobStatus.SUCCEEDED],\n+ timeout=335),\n+ f's=$(sky jobs logs --controller -n {name} --no-follow); echo \"$s\"; echo \"$s\" | grep \"Cluster launched:\"',\n+ rf'{smoke_tests_utils.GET_JOB_QUEUE} | grep {name} | head -n1 | grep \"SUCCEEDED\"',\n+ ],\n+ f'sky jobs cancel -y -n {name}',\n+ env={\n+ skypilot_config.ENV_VAR_SKYPILOT_CONFIG: 'tests/test_yamls/managed_jobs_ha_config.yaml'\n+ },\n+ timeout=20 * 60,\n+ )\n+\n+\[email protected]\[email protected]_jobs\n+def test_managed_jobs_ha_kill_running(generic_cloud: str):\n+ name = smoke_tests_utils.get_cluster_name()\n+ test = _get_ha_kill_test(name, generic_cloud, sky.ManagedJobStatus.RUNNING)\n+ smoke_tests_utils.run_one_test(test)\n+\n+\[email protected]\[email protected]_jobs\n+def test_managed_jobs_ha_kill_starting(generic_cloud: str):\n+ name = smoke_tests_utils.get_cluster_name()\n+ test = _get_ha_kill_test(name, generic_cloud, sky.ManagedJobStatus.STARTING)\n+ smoke_tests_utils.run_one_test(test)\ndiff --git a/tests/smoke_tests/test_sky_serve.py b/tests/smoke_tests/test_sky_serve.py\nindex 5b526b5efc4..269e7171165 100644\n--- a/tests/smoke_tests/test_sky_serve.py\n+++ b/tests/smoke_tests/test_sky_serve.py\n@@ -1113,7 +1113,7 @@ def test_skyserve_ha_kill_after_ready():\n f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; '\n 'curl $endpoint | grep \"Hi, SkyPilot here\"',\n # Kill controller and verify recovery\n- _kill_and_wait_controller(),\n+ smoke_tests_utils.kill_and_wait_controller('serve'),\n # Verify service remains accessible after controller recovery\n _SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=1),\n _check_replica_in_status(name, [(1, False, 'READY')]),\n@@ -1145,7 +1145,7 @@ def test_skyserve_ha_kill_during_provision():\n f' s=$(sky serve status {name}); '\n 'done; echo \"$s\"',\n # Kill controller during provisioning\n- _kill_and_wait_controller(),\n+ smoke_tests_utils.kill_and_wait_controller('serve'),\n # Verify service eventually becomes ready\n _SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=1),\n _check_replica_in_status(name, [(1, False, 'READY')]),\n@@ -1179,7 +1179,7 @@ def test_skyserve_ha_kill_during_pending():\n f'{_SERVE_STATUS_WAIT.format(name=name)}; ',\n _check_replica_in_status(name, [(1, False, 'PENDING')]),\n # Kill controller during pending\n- _kill_and_wait_controller(),\n+ smoke_tests_utils.kill_and_wait_controller('serve'),\n # Verify service eventually becomes ready and accessible\n _SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=1),\n _check_replica_in_status(name, [(1, False, 'READY')]),\n@@ -1227,7 +1227,7 @@ def test_skyserve_ha_kill_during_shutdown():\n f' s=$(sky serve status 
{name}); '\n 'done; echo \"$s\"',\n # Kill controller during shutdown\n- _kill_and_wait_controller(),\n+ smoke_tests_utils.kill_and_wait_controller('serve'),\n # Even after the pod ready, `serve status` may return `Failed to connect to serve controller, please try again later.`\n # So we need to wait for a while before checking the status again.\n 'sleep 10',\n@@ -1243,16 +1243,3 @@ def test_skyserve_ha_kill_during_shutdown():\n timeout=30 * 60,\n env={'SKYPILOT_CONFIG': 'tests/skyserve/high_availability/config.yaml'})\n smoke_tests_utils.run_one_test(test)\n-\n-\n-def _kill_and_wait_controller() -> str:\n- \"\"\"Kill the controller pod and wait for a new one to be ready.\"\"\"\n- return (\n- 'initial_controller_pod=$(kubectl get pods -l \"skypilot-head-node=1\" -o jsonpath=\"{.items[0].metadata.name}\"); '\n- 'echo \"Killing controller pod: $initial_controller_pod\"; '\n- 'kubectl delete pod $initial_controller_pod; '\n- 'until new_controller_pod=$(kubectl get pods -l \"skypilot-head-node=1\" -o jsonpath=\"{.items[0].metadata.name}\") && '\n- '[ \"$new_controller_pod\" != \"$initial_controller_pod\" ] && kubectl get pod $new_controller_pod | grep \"1/1\"; do '\n- ' echo \"Waiting for new controller pod...\"; sleep 5; '\n- 'done; '\n- 'echo \"New controller pod ready: $new_controller_pod\"')\ndiff --git a/tests/test_yamls/managed_jobs_ha_config.yaml b/tests/test_yamls/managed_jobs_ha_config.yaml\nnew file mode 100644\nindex 00000000000..229cc766dc3\n--- /dev/null\n+++ b/tests/test_yamls/managed_jobs_ha_config.yaml\n@@ -0,0 +1,6 @@\n+jobs:\n+ controller:\n+ resources:\n+ infra: kubernetes\n+ cpus: 2\n+ high_availability: true\n"
}
|
[
{
"diff_hunk": "@@ -221,8 +241,12 @@ def _run_one_task(self, task_id: int, task: 'sky.Task') -> bool:\n \n # NOTE: we do not check cluster status first because race condition\n # can occur, i.e. cluster can be down during the job status check.\n- job_status = managed_job_utils.get_job_status(\n- self._backend, cluster_name)\n+ try:\n+ job_status = managed_job_utils.get_job_status(\n+ self._backend, cluster_name)\n+ except (exceptions.FetchClusterInfoError, exceptions.CommandError):",
"line": null,
"original_line": 247,
"original_start_line": null,
"path": "sky/jobs/controller.py",
"start_line": null,
"text": "@user1:\nCommandError should be caught by get_job_status. Is it possible to hit FetchClusterInfoError? If yes, what's the stacktrace?\n\n@author:\nYes; if we terminate the controller on provisioning/terminating the worker cluster, and resume it, it is possible that the cluster info cannot be fetched. I encountered this today and added this except. But forget to record the stacktrace. Will do next time\n\n@user1:\nHmm, I'm pretty sure this should be handled by get_job_status. If not it may be a bug."
},
{
"diff_hunk": "@@ -152,6 +152,20 @@ def _run_one_task(self, task_id: int, task: 'sky.Task') -> bool:\n Other exceptions may be raised depending on the backend.\n \"\"\"\n \n+ latest_task_id, status = managed_job_state.get_latest_task_id_status(\n+ self._job_id)\n+ is_recovery = False",
"line": null,
"original_line": 157,
"original_start_line": null,
"path": "sky/jobs/controller.py",
"start_line": null,
"text": "@user1:\nCan we call this something else? The task may still be running, in which case we don't necessarily need to \"recover\" - that is, recover from spot preemption.\r\nMaybe we can call this `is_resume` or `is_restore`?"
},
{
"diff_hunk": "@@ -171,42 +185,48 @@ def _run_one_task(self, task_id: int, task: 'sky.Task') -> bool:\n return True\n usage_lib.messages.usage.update_task_id(task_id)\n task_id_env_var = task.envs[constants.TASK_ID_ENV_VAR]\n- submitted_at = time.time()\n- if task_id == 0:\n- submitted_at = backend_utils.get_timestamp_from_run_timestamp(\n- self._backend.run_timestamp)\n assert task.name is not None, task\n cluster_name = managed_job_utils.generate_managed_job_cluster_name(\n task.name, self._job_id)\n self._strategy_executor = recovery_strategy.StrategyExecutor.make(\n cluster_name, self._backend, task, self._job_id, task_id)\n- managed_job_state.set_starting(\n- self._job_id,\n- task_id,\n- self._backend.run_timestamp,\n- submitted_at,\n- resources_str=backend_utils.get_task_resources_str(\n- task, is_managed_job=True),\n- specs={\n- 'max_restarts_on_errors':\n- self._strategy_executor.max_restarts_on_errors\n- },\n- callback_func=callback_func)\n- logger.info(\n- f'Submitted managed job {self._job_id} (task: {task_id}, name: '\n- f'{task.name!r}); {constants.TASK_ID_ENV_VAR}: {task_id_env_var}')\n-\n- logger.info('Started monitoring.')\n-\n- remote_job_submitted_at = self._strategy_executor.launch()\n- assert remote_job_submitted_at is not None, remote_job_submitted_at\n+ if not is_recovery:\n+ submitted_at = time.time()\n+ if task_id == 0:\n+ submitted_at = backend_utils.get_timestamp_from_run_timestamp(\n+ self._backend.run_timestamp)\n+ managed_job_state.set_starting(\n+ self._job_id,\n+ task_id,\n+ self._backend.run_timestamp,\n+ submitted_at,\n+ resources_str=backend_utils.get_task_resources_str(\n+ task, is_managed_job=True),\n+ specs={\n+ 'max_restarts_on_errors':\n+ self._strategy_executor.max_restarts_on_errors\n+ },\n+ callback_func=callback_func)\n+ logger.info(f'Submitted managed job {self._job_id} '\n+ f'(task: {task_id}, name: {task.name!r}); '\n+ f'{constants.TASK_ID_ENV_VAR}: {task_id_env_var}')\n+\n+ logger.info('Started monitoring.')\n+\n+ remote_job_submitted_at = self._strategy_executor.launch()\n+ assert remote_job_submitted_at is not None, remote_job_submitted_at\n \n- managed_job_state.set_started(job_id=self._job_id,\n- task_id=task_id,\n- start_time=remote_job_submitted_at,\n- callback_func=callback_func)\n+ managed_job_state.set_started(job_id=self._job_id,\n+ task_id=task_id,\n+ start_time=remote_job_submitted_at,\n+ callback_func=callback_func)\n \n while True:\n+ if is_recovery:\n+ last_status = managed_job_state.get_job_status_with_task_id(\n+ job_id=self._job_id, task_id=task_id)\n+ if last_status is not None and last_status.is_terminal():\n+ return True",
"line": null,
"original_line": 229,
"original_start_line": null,
"path": "sky/jobs/controller.py",
"start_line": null,
"text": "@user1:\nShould return True only if this task is SUCCEEDED, so we don't proceed to later tasks.\r\n```suggestion\r\n return last_status == managed_job_state.SUCCEEDED\r\n```"
},
{
"diff_hunk": "@@ -639,12 +639,14 @@ available_node_types:\n /bin/bash --login -c \"true && export OMP_NUM_THREADS=1 PYTHONWARNINGS='ignore' && {{k8s_high_availability_deployment_setup_script_path}} > /tmp/controller_recovery_setup_commands.log 2>&1\"\n echo \"=== Controller setup commands completed for recovery ===\"\n \n+ touch {{k8s_high_availability_restarting_signal_file}}\n for file in {{k8s_high_availability_deployment_run_script_dir}}/*; do\n # ! Keep this aligned with `CloudVmRayBackend._execute()`\n chmod +x $file\n /bin/bash --login -c \"true && export OMP_NUM_THREADS=1 PYTHONWARNINGS='ignore' && $file > /tmp/task_run_$(basename $file).log 2>&1\"",
"line": 687,
"original_line": 646,
"original_start_line": null,
"path": "sky/templates/kubernetes-ray.yml.j2",
"start_line": null,
"text": "@user1:\nThis logic may run a lot of things if the jobs controller previously had many jobs. We should make sure it will scale well.\n\n@author:\nGood point! Added a todo here."
}
] |
68934c83ca534e18f989107971e481d629b19883
|
diff --git a/docs/repo-images/managed-job-status-diagram.png b/docs/repo-images/managed-job-status-diagram.png
index 98e2a10faa1..ab94a2de917 100644
Binary files a/docs/repo-images/managed-job-status-diagram.png and b/docs/repo-images/managed-job-status-diagram.png differ
diff --git a/sky/backends/cloud_vm_ray_backend.py b/sky/backends/cloud_vm_ray_backend.py
index e0095515825..36127cc351b 100644
--- a/sky/backends/cloud_vm_ray_backend.py
+++ b/sky/backends/cloud_vm_ray_backend.py
@@ -21,6 +21,7 @@
import colorama
import filelock
+import yaml
import sky
from sky import backends
@@ -2302,12 +2303,15 @@ def _update_cluster_info(self):
clouds.ProvisionerVersion.SKYPILOT):
provider_name = str(self.launched_resources.cloud).lower()
config = {}
- if os.path.exists(self.cluster_yaml):
- # It is possible that the cluster yaml is not available when
- # the handle is unpickled for service replicas from the
- # controller with older version.
- config = global_user_state.get_cluster_yaml_dict(
- self.cluster_yaml)
+ # It is possible that the cluster yaml is not available when
+ # the handle is unpickled for service replicas from the
+ # controller with older version.
+ yaml_str = global_user_state.get_cluster_yaml_str(self.cluster_yaml)
+ if yaml_str is None:
+ # If the cluster yaml is not available,
+ # we skip updating the cluster info.
+ return
+ config = yaml.safe_load(yaml_str)
try:
cluster_info = provision_lib.get_cluster_info(
provider_name,
@@ -2500,6 +2504,21 @@ def get_command_runners(self,
'Tried to use cached cluster info, but it\'s missing for '
f'cluster "{self.cluster_name}"')
self._update_cluster_info()
+ # For Kubernetes, `KubernetesCommandRunner` want to get the pod names
+ # to run the command. But for high availability serve controller,
+ # the controller pod is part of a deployment, and once the pod is
+ # killed and a new one is created, the pod name changes, so we need
+ # to manually update the cluster info here.
+ # TODO(andyl): See if we can prevent this refresh. Like pass in
+ # deployment name as identifier for KubernetesCommandRunner. Now this
+ # is required for rsync as using deployment in rsync seems to cause
+ # some unknown issues.
+ # TODO(andyl): Should check through the real cluster info. Same as
+ # the TODO in kubernetes/instance.py:terminate_instances
+ if (isinstance(self.launched_resources.cloud, clouds.Kubernetes) and
+ controller_utils.high_availability_specified(
+ self.cluster_name)):
+ self._update_cluster_info()
assert self.cached_cluster_info is not None, self
runners = provision_lib.get_command_runners(
diff --git a/sky/clouds/kubernetes.py b/sky/clouds/kubernetes.py
index 438f57a81c4..8fd11bb7af6 100644
--- a/sky/clouds/kubernetes.py
+++ b/sky/clouds/kubernetes.py
@@ -646,6 +646,9 @@ def _get_image_id(resources: 'resources_lib.Resources') -> str:
(constants.PERSISTENT_SETUP_SCRIPT_PATH),
'k8s_high_availability_deployment_run_script_dir':
(constants.PERSISTENT_RUN_SCRIPT_DIR),
+ 'k8s_high_availability_restarting_signal_file':
+ (constants.PERSISTENT_RUN_RESTARTING_SIGNAL_FILE),
+ 'sky_python_cmd': constants.SKY_PYTHON_CMD,
'k8s_high_availability_storage_class_name':
(k8s_ha_storage_class_name),
'avoid_label_keys': avoid_label_keys,
diff --git a/sky/jobs/README.md b/sky/jobs/README.md
index b5fbf284554..1e9c4088eec 100644
--- a/sky/jobs/README.md
+++ b/sky/jobs/README.md
@@ -62,6 +62,7 @@ state "All States" as AllStates {
}
InnerLoop -\-> CANCELLING : user cancel request
+ InnerLoop -[dotted]> RECOVERING : HA controller recovery
CANCELLING -> CANCELLED : cluster\ncleaned up
CANCELLING -[dotted]-> Terminal: job could complete\nbefore we can cancel
}
diff --git a/sky/jobs/controller.py b/sky/jobs/controller.py
index 7204fd4c516..7671019849b 100644
--- a/sky/jobs/controller.py
+++ b/sky/jobs/controller.py
@@ -152,6 +152,20 @@ def _run_one_task(self, task_id: int, task: 'sky.Task') -> bool:
Other exceptions may be raised depending on the backend.
"""
+ latest_task_id, last_task_prev_status = (
+ managed_job_state.get_latest_task_id_status(self._job_id))
+ is_resume = False
+ if (latest_task_id is not None and last_task_prev_status !=
+ managed_job_state.ManagedJobStatus.PENDING):
+ assert latest_task_id >= task_id, (latest_task_id, task_id)
+ if latest_task_id > task_id:
+ logger.info(f'Task {task_id} ({task.name}) has already '
+ 'been executed. Skipping...')
+ return True
+ if latest_task_id == task_id:
+ # Start recovery.
+ is_resume = True
+
callback_func = managed_job_utils.event_callback_func(
job_id=self._job_id, task_id=task_id, task=task)
if task.run is None:
@@ -171,42 +185,72 @@ def _run_one_task(self, task_id: int, task: 'sky.Task') -> bool:
return True
usage_lib.messages.usage.update_task_id(task_id)
task_id_env_var = task.envs[constants.TASK_ID_ENV_VAR]
- submitted_at = time.time()
- if task_id == 0:
- submitted_at = backend_utils.get_timestamp_from_run_timestamp(
- self._backend.run_timestamp)
assert task.name is not None, task
cluster_name = managed_job_utils.generate_managed_job_cluster_name(
task.name, self._job_id)
self._strategy_executor = recovery_strategy.StrategyExecutor.make(
cluster_name, self._backend, task, self._job_id, task_id)
- managed_job_state.set_starting(
- self._job_id,
- task_id,
- self._backend.run_timestamp,
- submitted_at,
- resources_str=backend_utils.get_task_resources_str(
- task, is_managed_job=True),
- specs={
- 'max_restarts_on_errors':
- self._strategy_executor.max_restarts_on_errors
- },
- callback_func=callback_func)
- logger.info(
- f'Submitted managed job {self._job_id} (task: {task_id}, name: '
- f'{task.name!r}); {constants.TASK_ID_ENV_VAR}: {task_id_env_var}')
+ if not is_resume:
+ submitted_at = time.time()
+ if task_id == 0:
+ submitted_at = backend_utils.get_timestamp_from_run_timestamp(
+ self._backend.run_timestamp)
+ managed_job_state.set_starting(
+ self._job_id,
+ task_id,
+ self._backend.run_timestamp,
+ submitted_at,
+ resources_str=backend_utils.get_task_resources_str(
+ task, is_managed_job=True),
+ specs={
+ 'max_restarts_on_errors':
+ self._strategy_executor.max_restarts_on_errors
+ },
+ callback_func=callback_func)
+ logger.info(f'Submitted managed job {self._job_id} '
+ f'(task: {task_id}, name: {task.name!r}); '
+ f'{constants.TASK_ID_ENV_VAR}: {task_id_env_var}')
logger.info('Started monitoring.')
- remote_job_submitted_at = self._strategy_executor.launch()
- assert remote_job_submitted_at is not None, remote_job_submitted_at
+ # Only do the initial cluster launch if not resuming from a controller
+ # failure. Otherwise, we will transit to recovering immediately.
+ remote_job_submitted_at = time.time()
+ if not is_resume:
+ remote_job_submitted_at = self._strategy_executor.launch()
+ assert remote_job_submitted_at is not None, remote_job_submitted_at
- managed_job_state.set_started(job_id=self._job_id,
- task_id=task_id,
- start_time=remote_job_submitted_at,
- callback_func=callback_func)
+ if not is_resume:
+ managed_job_state.set_started(job_id=self._job_id,
+ task_id=task_id,
+ start_time=remote_job_submitted_at,
+ callback_func=callback_func)
while True:
+ # NOTE: if we are resuming from a controller failure, we only keep
+ # monitoring if the job is in RUNNING state. For all other cases,
+ # we will directly transit to recovering since we have no idea what
+ # the cluster status is.
+ force_transit_to_recovering = False
+ if is_resume:
+ prev_status = managed_job_state.get_job_status_with_task_id(
+ job_id=self._job_id, task_id=task_id)
+ if prev_status is not None:
+ if prev_status.is_terminal():
+ return (prev_status ==
+ managed_job_state.ManagedJobStatus.SUCCEEDED)
+ if (prev_status ==
+ managed_job_state.ManagedJobStatus.CANCELLING):
+ # If the controller is down when cancelling the job,
+ # we re-raise the error to run the `_cleanup` function
+ # again to clean up any remaining resources.
+ raise exceptions.ManagedJobUserCancelledError(
+ 'Recovering cancel signal.')
+ if prev_status != managed_job_state.ManagedJobStatus.RUNNING:
+ force_transit_to_recovering = True
+ # This resume logic should only be triggered once.
+ is_resume = False
+
time.sleep(managed_job_utils.JOB_STATUS_CHECK_GAP_SECONDS)
# Check the network connection to avoid false alarm for job failure.
@@ -221,8 +265,19 @@ def _run_one_task(self, task_id: int, task: 'sky.Task') -> bool:
# NOTE: we do not check cluster status first because race condition
# can occur, i.e. cluster can be down during the job status check.
- job_status = managed_job_utils.get_job_status(
- self._backend, cluster_name)
+ # NOTE: If fetching the job status fails or we force to transit to
+ # recovering, we will set the job status to None, which will force
+ # enter the recovering logic.
+ job_status = None
+ if not force_transit_to_recovering:
+ try:
+ job_status = managed_job_utils.get_job_status(
+ self._backend, cluster_name)
+ except exceptions.FetchClusterInfoError as fetch_e:
+ logger.info(
+ 'Failed to fetch the job status. Start recovery.\n'
+ f'Exception: {common_utils.format_exception(fetch_e)}\n'
+ f'Traceback: {traceback.format_exc()}')
if job_status == job_lib.JobStatus.SUCCEEDED:
success_end_time = managed_job_utils.try_to_get_job_end_time(
@@ -379,7 +434,17 @@ def _run_one_task(self, task_id: int, task: 'sky.Task') -> bool:
if handle is not None:
resources = handle.launched_resources
assert resources is not None, handle
- if resources.need_cleanup_after_preemption_or_failure():
+ # If we are forcing to transit to recovering, we need to clean
+ # up the cluster as it is possible that we already submitted the
+ # job to the worker cluster, but state is not updated yet. In
+ # this case, it is possible that we will double-submit the job
+ # to the worker cluster. So we always clean up the cluster here.
+ # TODO(tian,cooperc): We can check if there is a running job on
+ # the worker cluster, and if so, we can skip the cleanup.
+ # Challenge: race condition when the worker cluster thought it
+ # does not have a running job yet but later the job is launched.
+ if (resources.need_cleanup_after_preemption_or_failure() or
+ force_transit_to_recovering):
# Some spot resource (e.g., Spot TPU VM) may need to be
# cleaned up after preemption, as running launch again on
# those clusters again may fail.
@@ -389,9 +454,11 @@ def _run_one_task(self, task_id: int, task: 'sky.Task') -> bool:
# Try to recover the managed jobs, when the cluster is preempted or
# failed or the job status is failed to be fetched.
- managed_job_state.set_recovering(job_id=self._job_id,
- task_id=task_id,
- callback_func=callback_func)
+ managed_job_state.set_recovering(
+ job_id=self._job_id,
+ task_id=task_id,
+ force_transit_to_recovering=force_transit_to_recovering,
+ callback_func=callback_func)
recovered_time = self._strategy_executor.recover()
managed_job_state.set_recovered(self._job_id,
task_id,
diff --git a/sky/jobs/scheduler.py b/sky/jobs/scheduler.py
index 72234ecd73e..a4197b75d56 100644
--- a/sky/jobs/scheduler.py
+++ b/sky/jobs/scheduler.py
@@ -84,6 +84,32 @@ def _get_lock_path() -> str:
return path
+def _start_controller(job_id: int, dag_yaml_path: str,
+ env_file_path: str) -> None:
+ activate_python_env_cmd = (f'{constants.ACTIVATE_SKY_REMOTE_PYTHON_ENV};')
+ source_environment_cmd = (f'source {env_file_path};'
+ if env_file_path else '')
+ run_controller_cmd = ('python -u -m sky.jobs.controller '
+ f'{dag_yaml_path} --job-id {job_id};')
+
+ # If the command line here is changed, please also update
+ # utils._controller_process_alive. `--job-id X` should be at
+ # the end.
+ run_cmd = (f'{activate_python_env_cmd}'
+ f'{source_environment_cmd}'
+ f'{run_controller_cmd}')
+
+ logs_dir = os.path.expanduser(
+ managed_job_constants.JOBS_CONTROLLER_LOGS_DIR)
+ os.makedirs(logs_dir, exist_ok=True)
+ log_path = os.path.join(logs_dir, f'{job_id}.log')
+
+ pid = subprocess_utils.launch_new_process_tree(run_cmd, log_output=log_path)
+ state.set_job_controller_pid(job_id, pid)
+
+ logger.debug(f'Job {job_id} started with pid {pid}')
+
+
def maybe_schedule_next_jobs() -> None:
"""Determine if any managed jobs can be scheduled, and if so, schedule them.
@@ -158,32 +184,9 @@ def maybe_schedule_next_jobs() -> None:
job_id = maybe_next_job['job_id']
dag_yaml_path = maybe_next_job['dag_yaml_path']
+ env_file_path = maybe_next_job['env_file_path']
- activate_python_env_cmd = (
- f'{constants.ACTIVATE_SKY_REMOTE_PYTHON_ENV};')
- env_file = maybe_next_job['env_file_path']
- source_environment_cmd = (f'source {env_file};'
- if env_file else '')
- run_controller_cmd = ('python -u -m sky.jobs.controller '
- f'{dag_yaml_path} --job-id {job_id};')
-
- # If the command line here is changed, please also update
- # utils._controller_process_alive. `--job-id X` should be at
- # the end.
- run_cmd = (f'{activate_python_env_cmd}'
- f'{source_environment_cmd}'
- f'{run_controller_cmd}')
-
- logs_dir = os.path.expanduser(
- managed_job_constants.JOBS_CONTROLLER_LOGS_DIR)
- os.makedirs(logs_dir, exist_ok=True)
- log_path = os.path.join(logs_dir, f'{job_id}.log')
-
- pid = subprocess_utils.launch_new_process_tree(
- run_cmd, log_output=log_path)
- state.set_job_controller_pid(job_id, pid)
-
- logger.debug(f'Job {job_id} started with pid {pid}')
+ _start_controller(job_id, dag_yaml_path, env_file_path)
except filelock.Timeout:
# If we can't get the lock, just exit. The process holding the lock
@@ -203,10 +206,15 @@ def submit_job(job_id: int, dag_yaml_path: str, original_user_yaml_path: str,
The user hash should be set (e.g. via SKYPILOT_USER_ID) before calling this.
"""
with filelock.FileLock(_get_lock_path()):
- state.scheduler_set_waiting(job_id, dag_yaml_path,
- original_user_yaml_path, env_file_path,
- common_utils.get_user_hash(), priority)
- maybe_schedule_next_jobs()
+ is_resume = state.scheduler_set_waiting(job_id, dag_yaml_path,
+ original_user_yaml_path,
+ env_file_path,
+ common_utils.get_user_hash(),
+ priority)
+ if is_resume:
+ _start_controller(job_id, dag_yaml_path, env_file_path)
+ else:
+ maybe_schedule_next_jobs()
@contextlib.contextmanager
diff --git a/sky/jobs/state.py b/sky/jobs/state.py
index 10e5d5149cb..9c9a98675f8 100644
--- a/sky/jobs/state.py
+++ b/sky/jobs/state.py
@@ -352,6 +352,16 @@ def failure_statuses(cls) -> List['ManagedJobStatus']:
cls.FAILED_NO_RESOURCE, cls.FAILED_CONTROLLER
]
+ @classmethod
+ def processing_statuses(cls) -> List['ManagedJobStatus']:
+ # Any status that is not terminal and is not CANCELLING.
+ return [
+ cls.PENDING,
+ cls.STARTING,
+ cls.RUNNING,
+ cls.RECOVERING,
+ ]
+
_SPOT_STATUS_TO_COLOR = {
ManagedJobStatus.PENDING: colorama.Fore.BLUE,
@@ -607,21 +617,49 @@ def set_started(job_id: int, task_id: int, start_time: float,
@_init_db
-def set_recovering(job_id: int, task_id: int, callback_func: CallbackType):
+def set_recovering(job_id: int, task_id: int, force_transit_to_recovering: bool,
+ callback_func: CallbackType):
"""Set the task to recovering state, and update the job duration."""
assert _DB_PATH is not None
logger.info('=== Recovering... ===')
+ expected_status: List[str] = [ManagedJobStatus.RUNNING.value]
+ status_str = 'status=(?)'
+ if force_transit_to_recovering:
+ # For the HA job controller, it is possible that the jobs came from any
+ # processing status to recovering. But it should not be any terminal
+ # status as such jobs will not be recovered; and it should not be
+ # CANCELLING as we will directly trigger a cleanup.
+ expected_status = [
+ s.value for s in ManagedJobStatus.processing_statuses()
+ ]
+ question_mark_str = ', '.join(['?'] * len(expected_status))
+ status_str = f'status IN ({question_mark_str})'
+ # NOTE: if we are resuming from a controller failure and the previous status
+ # is STARTING, the initial value of `last_recovered_at` might not be set
+ # yet (default value -1). In this case, we should not add current timestamp.
+ # Otherwise, the job duration will be incorrect (~55 years from 1970).
+ current_time = time.time()
with db_utils.safe_cursor(_DB_PATH) as cursor:
cursor.execute(
- """\
+ f"""\
UPDATE spot SET
- status=(?), job_duration=job_duration+(?)-last_recovered_at
+ status=(?),
+ job_duration=CASE
+ WHEN last_recovered_at >= 0
+ THEN job_duration+(?)-last_recovered_at
+ ELSE job_duration
+ END,
+ last_recovered_at=CASE
+ WHEN last_recovered_at < 0
+ THEN (?)
+ ELSE last_recovered_at
+ END
WHERE spot_job_id=(?) AND
task_id=(?) AND
- status=(?) AND
+ {status_str} AND
end_at IS null""",
- (ManagedJobStatus.RECOVERING.value, time.time(), job_id, task_id,
- ManagedJobStatus.RUNNING.value))
+ (ManagedJobStatus.RECOVERING.value, current_time, current_time,
+ job_id, task_id, *expected_status))
if cursor.rowcount != 1:
raise exceptions.ManagedJobStatusError(
f'Failed to set the task to recovering. '
@@ -996,6 +1034,19 @@ def _get_all_task_ids_statuses(
return [(row[0], ManagedJobStatus(row[1])) for row in id_statuses]
+@_init_db
+def get_job_status_with_task_id(job_id: int,
+ task_id: int) -> Optional[ManagedJobStatus]:
+ assert _DB_PATH is not None
+ with db_utils.safe_cursor(_DB_PATH) as cursor:
+ status = cursor.execute(
+ """\
+ SELECT status FROM spot
+ WHERE spot_job_id=(?) AND task_id=(?)""",
+ (job_id, task_id)).fetchone()
+ return ManagedJobStatus(status[0]) if status else None
+
+
def get_num_tasks(job_id: int) -> int:
return len(_get_all_task_ids_statuses(job_id))
@@ -1156,8 +1207,15 @@ def get_local_log_file(job_id: int, task_id: Optional[int]) -> Optional[str]:
@_init_db
def scheduler_set_waiting(job_id: int, dag_yaml_path: str,
original_user_yaml_path: str, env_file_path: str,
- user_hash: str, priority: int) -> None:
- """Do not call without holding the scheduler lock."""
+ user_hash: str, priority: int) -> bool:
+ """Do not call without holding the scheduler lock.
+
+ Returns: Whether this is a recovery run or not.
+ If this is a recovery run, the job may already be in the WAITING
+ state and the update will not change the schedule_state (hence the
+ updated_count will be 0). In this case, we return True.
+ Otherwise, we return False.
+ """
assert _DB_PATH is not None
with db_utils.safe_cursor(_DB_PATH) as cursor:
updated_count = cursor.execute(
@@ -1169,7 +1227,9 @@ def scheduler_set_waiting(job_id: int, dag_yaml_path: str,
(ManagedJobScheduleState.WAITING.value, dag_yaml_path,
original_user_yaml_path, env_file_path, user_hash, priority,
job_id, ManagedJobScheduleState.INACTIVE.value)).rowcount
- assert updated_count == 1, (job_id, updated_count)
+ # For a recovery run, the job may already be in the WAITING state.
+ assert updated_count <= 1, (job_id, updated_count)
+ return updated_count == 0
@_init_db
diff --git a/sky/jobs/utils.py b/sky/jobs/utils.py
index 4587db68e31..eab88e5d28c 100644
--- a/sky/jobs/utils.py
+++ b/sky/jobs/utils.py
@@ -176,6 +176,17 @@ def update_managed_jobs_statuses(job_id: Optional[int] = None):
Note: we expect that job_id, if provided, refers to a nonterminal job or a
job that has not completed its cleanup (schedule state not DONE).
"""
+ # This signal file suggests that the controller is recovering from a
+ # failure. See sky/templates/kubernetes-ray.yml.j2 for more details.
+ # When restarting the controller processes, we don't want this event to
+ # set the job status to FAILED_CONTROLLER.
+ # TODO(tian): Change this to restart the controller process. For now we
+ # disabled it when recovering because we want to avoid caveats of infinite
+ # restart of last controller process that fully occupied the controller VM.
+ if os.path.exists(
+ os.path.expanduser(
+ constants.PERSISTENT_RUN_RESTARTING_SIGNAL_FILE)):
+ return
def _cleanup_job_clusters(job_id: int) -> Optional[str]:
"""Clean up clusters for a job. Returns error message if any.
diff --git a/sky/skylet/constants.py b/sky/skylet/constants.py
index 25878255f59..cc8f07f3e7e 100644
--- a/sky/skylet/constants.py
+++ b/sky/skylet/constants.py
@@ -396,6 +396,10 @@
# persistent through PVC. See kubernetes-ray.yml.j2.
PERSISTENT_SETUP_SCRIPT_PATH = '~/.sky/.controller_recovery_setup_commands.sh'
PERSISTENT_RUN_SCRIPT_DIR = '~/.sky/.controller_recovery_task_run'
+# Signal file to indicate that the controller is recovering from a failure.
+# See sky/jobs/utils.py::update_managed_jobs_statuses for more details.
+PERSISTENT_RUN_RESTARTING_SIGNAL_FILE = (
+ '~/.sky/.controller_recovery_restarting_signal')
# The placeholder for the local skypilot config path in file mounts for
# controllers.
diff --git a/sky/skylet/job_lib.py b/sky/skylet/job_lib.py
index 9f6ab335add..57959fa2285 100644
--- a/sky/skylet/job_lib.py
+++ b/sky/skylet/job_lib.py
@@ -758,6 +758,14 @@ def fail_all_jobs_in_progress() -> None:
def update_status() -> None:
+ # This signal file suggests that the controller is recovering from a
+ # failure. See sky/jobs/utils.py::update_managed_jobs_statuses for more
+ # details. When recovering, we should not update the job status to failed
+ # driver as they will be recovered later.
+ if os.path.exists(
+ os.path.expanduser(
+ constants.PERSISTENT_RUN_RESTARTING_SIGNAL_FILE)):
+ return
# This will be called periodically by the skylet to update the status
# of the jobs in the database, to avoid stale job status.
nonterminal_jobs = _get_jobs(user_hash=None,
diff --git a/sky/templates/kubernetes-ray.yml.j2 b/sky/templates/kubernetes-ray.yml.j2
index 13ee38e6ea2..baf9c940269 100644
--- a/sky/templates/kubernetes-ray.yml.j2
+++ b/sky/templates/kubernetes-ray.yml.j2
@@ -632,19 +632,66 @@ available_node_types:
{% if high_availability %}
mkdir -p {{k8s_high_availability_deployment_run_script_dir}}
if [ -f {{k8s_high_availability_deployment_volume_mount_path}}/k8s_container_ready ]; then
+ SKYPILOT_HA_RECOVERY_LOG="/tmp/ha_recovery.log"
+ echo "Starting HA recovery at $(date)" >> $SKYPILOT_HA_RECOVERY_LOG
+ start_time=$SECONDS
+ retry_count=0
+
+ # Wait for Ray to be ready, as the following commands is depending on Ray.
+ GET_RAY_STATUS_CMD=$({{sky_python_cmd}} -c 'from sky.provision import instance_setup; print(instance_setup.RAY_STATUS_WITH_SKY_RAY_PORT_COMMAND)')
+ while true; do
+ retry_count=$((retry_count + 1))
+ current_duration=$(( SECONDS - start_time ))
+ echo "Attempt $retry_count to get Ray status after $current_duration seconds..." >> $SKYPILOT_HA_RECOVERY_LOG
+
+ bash --login -c "$GET_RAY_STATUS_CMD"
+ if [ $? -eq 0 ]; then
+ wait_duration=$(( SECONDS - start_time ))
+ echo "Ray ready after waiting $wait_duration seconds (took $retry_count attempts)" >> $SKYPILOT_HA_RECOVERY_LOG
+ break
+ fi
+ echo "Waiting for Ray to be ready..." >> $SKYPILOT_HA_RECOVERY_LOG
+ sleep 2
+ done
+
# ! Keep this aligned with `CloudVmRayBackend._setup()`
- # Suppose all `task.setup` are the same for skyserve controller task.
+ # Suppose all `task.setup` are the same for sky serve / managed jobs controller task.
# So be careful for compatibility issue once you change it.
chmod +x {{k8s_high_availability_deployment_setup_script_path}}
/bin/bash --login -c "true && export OMP_NUM_THREADS=1 PYTHONWARNINGS='ignore' && {{k8s_high_availability_deployment_setup_script_path}} > /tmp/controller_recovery_setup_commands.log 2>&1"
- echo "=== Controller setup commands completed for recovery ==="
-
+ echo "=== Controller setup commands completed for recovery at $(date) ===" >> $SKYPILOT_HA_RECOVERY_LOG
+
+ touch {{k8s_high_availability_restarting_signal_file}}
+ # Get all in-progress jobs from managed jobs controller. We skip any jobs that are already done.
+ # Also, skip the jobs that are waiting to be scheduled as those does not have a controller process running.
+ # For SkyServe, this will be None and every service will be recovered. This is because SkyServe
+ # will delete the service from the database after it is terminated so everything in the database is running.
+ ALL_IN_PROGRESS_JOBS=$({{sky_python_cmd}} -c "from sky.jobs import state; jobs = state.get_managed_jobs(); print(' '.join({str(job['job_id']) for job in jobs if job['schedule_state'] not in [state.ManagedJobScheduleState.DONE, state.ManagedJobScheduleState.WAITING]}) if jobs else None)")
+ if [ "$ALL_IN_PROGRESS_JOBS" != "None" ]; then
+ read -ra ALL_IN_PROGRESS_JOBS_SEQ <<< "$ALL_IN_PROGRESS_JOBS"
+ fi
for file in {{k8s_high_availability_deployment_run_script_dir}}/*; do
+ # This is the cluster job id on managed jobs controller, but it is guaranteed to be the same as the managed job id,
+ # so we directly use it here. See `CloudVmRayBackend._exec_code_on_head::_dump_code_to_file` for more details.
+ JOB_ID=$(basename $file | sed 's/sky_job_//')
+ # If the list of in-progress jobs is not None (meaning this is a managed job HA controller) and job is not in-progress, skip.
+ if [ "$ALL_IN_PROGRESS_JOBS" != "None" ]; then
+ if [[ ! " ${ALL_IN_PROGRESS_JOBS_SEQ[@]} " =~ " ${JOB_ID} " ]]; then
+ continue
+ fi
+ fi
# ! Keep this aligned with `CloudVmRayBackend._execute()`
chmod +x $file
+ # TODO(tian): This logic may run a lot of things if the jobs controller previously had many jobs.
+ # We should do more tests and make sure it will scale well.
/bin/bash --login -c "true && export OMP_NUM_THREADS=1 PYTHONWARNINGS='ignore' && $file > /tmp/task_run_$(basename $file).log 2>&1"
- echo "=== Controller task run for service (file: $file) completed for recovery ==="
+ echo "=== Controller task run for service / job (file: $file) completed for recovery at $(date) ===" >> $SKYPILOT_HA_RECOVERY_LOG
done
+ rm {{k8s_high_availability_restarting_signal_file}}
+
+ duration=$(( SECONDS - start_time ))
+ echo "HA recovery completed at $(date)" >> $SKYPILOT_HA_RECOVERY_LOG
+ echo "Total recovery time: $duration seconds" >> $SKYPILOT_HA_RECOVERY_LOG
fi
touch {{k8s_high_availability_deployment_volume_mount_path}}/k8s_container_ready
diff --git a/sky/utils/controller_utils.py b/sky/utils/controller_utils.py
index 91681a096ef..6473ff0edb0 100644
--- a/sky/utils/controller_utils.py
+++ b/sky/utils/controller_utils.py
@@ -422,7 +422,7 @@ def download_and_stream_latest_job_log(
return None
log_dir = list(log_dirs.values())[0]
- log_file = os.path.join(log_dir, 'run.log')
+ log_file = os.path.expanduser(os.path.join(log_dir, 'run.log'))
# Print the logs to the console.
# TODO(zhwu): refactor this into log_utils, along with the refactoring for
diff --git a/tests/smoke_tests/smoke_tests_utils.py b/tests/smoke_tests/smoke_tests_utils.py
index 84aa2482205..853da93414c 100644
--- a/tests/smoke_tests/smoke_tests_utils.py
+++ b/tests/smoke_tests/smoke_tests_utils.py
@@ -778,3 +778,27 @@ def get_response_from_request_id(request_id: str) -> Any:
return request_task.get_return_value()
raise RuntimeError(f'Failed to get request {request_id}: '
f'{response.status_code} {response.text}')
+
+
+def _get_controller_pod_name(controller_name: str) -> str:
+ return (
+ 'kubectl get pods -l app -o custom-columns=NAME:.metadata.name,'
+ 'APP:.metadata.labels.app --no-headers | '
+ f'awk \'$2 ~ /sky-{controller_name}-controller/ {{print $1; exit}}\'')
+
+
+def kill_and_wait_controller(controller_name: str) -> str:
+ """Kill the controller pod and wait for a new one to be ready."""
+ assert controller_name in ['serve', 'jobs'
+ ], (f'Invalid controller name: {controller_name}')
+ return (
+ f'initial_controller_pod=$({_get_controller_pod_name(controller_name)}); '
+ f'echo "Killing {controller_name} controller pod: $initial_controller_pod"; '
+ 'kubectl delete pod $initial_controller_pod; '
+ f'until new_controller_pod=$({_get_controller_pod_name(controller_name)}) && '
+ '[ "$new_controller_pod" != "$initial_controller_pod" ] && '
+ 'kubectl get pod $new_controller_pod | grep "1/1"; do '
+ f' echo "Waiting for new {controller_name} controller pod..."; sleep 5; '
+ 'done; '
+ f'echo "New {controller_name} controller pod ready: $new_controller_pod"'
+ )
diff --git a/tests/smoke_tests/test_managed_job.py b/tests/smoke_tests/test_managed_job.py
index e5f74752e4e..1b7fd037120 100644
--- a/tests/smoke_tests/test_managed_job.py
+++ b/tests/smoke_tests/test_managed_job.py
@@ -1173,3 +1173,59 @@ def test_managed_jobs_logs_sync_down(generic_cloud: str):
timeout=20 * 60,
)
smoke_tests_utils.run_one_test(test)
+
+
+def _get_ha_kill_test(name: str, generic_cloud: str,
+ status: sky.ManagedJobStatus, first_timeout: int,
+ second_timeout: int) -> smoke_tests_utils.Test:
+ return smoke_tests_utils.Test(
+ f'test-managed-jobs-ha-kill-{status.value.lower()}',
+ [
+ f'sky jobs launch -n {name} --infra {generic_cloud} '
+ f'{smoke_tests_utils.LOW_RESOURCE_ARG} -y examples/managed_job.yaml -d',
+ smoke_tests_utils.
+ get_cmd_wait_until_managed_job_status_contains_matching_job_name(
+ job_name=f'{name}', job_status=[status], timeout=first_timeout),
+ smoke_tests_utils.kill_and_wait_controller('jobs'),
+ smoke_tests_utils.
+ get_cmd_wait_until_managed_job_status_contains_matching_job_name(
+ job_name=f'{name}',
+ job_status=[sky.ManagedJobStatus.SUCCEEDED],
+ timeout=second_timeout),
+ f's=$(sky jobs logs --controller -n {name} --no-follow); echo "$s"; echo "$s" | grep "Job succeeded."',
+ rf'{smoke_tests_utils.GET_JOB_QUEUE} | grep {name} | head -n1 | grep "SUCCEEDED"',
+ ],
+ f'sky jobs cancel -y -n {name}',
+ env={
+ skypilot_config.ENV_VAR_SKYPILOT_CONFIG: 'tests/test_yamls/managed_jobs_ha_config.yaml'
+ },
+ timeout=20 * 60,
+ )
+
+
[email protected]
[email protected]_jobs
+def test_managed_jobs_ha_kill_running(generic_cloud: str):
+ name = smoke_tests_utils.get_cluster_name()
+ test = _get_ha_kill_test(
+ name,
+ generic_cloud,
+ sky.ManagedJobStatus.RUNNING,
+ first_timeout=200,
+ second_timeout=335,
+ )
+ smoke_tests_utils.run_one_test(test)
+
+
[email protected]
[email protected]_jobs
+def test_managed_jobs_ha_kill_starting(generic_cloud: str):
+ name = smoke_tests_utils.get_cluster_name()
+ test = _get_ha_kill_test(
+ name,
+ generic_cloud,
+ sky.ManagedJobStatus.STARTING,
+ first_timeout=95,
+ second_timeout=600,
+ )
+ smoke_tests_utils.run_one_test(test)
diff --git a/tests/smoke_tests/test_sky_serve.py b/tests/smoke_tests/test_sky_serve.py
index 6b876da3863..ac883dbd10b 100644
--- a/tests/smoke_tests/test_sky_serve.py
+++ b/tests/smoke_tests/test_sky_serve.py
@@ -1130,7 +1130,7 @@ def test_skyserve_ha_kill_after_ready():
f'{_SERVE_ENDPOINT_WAIT.format(name=name)}; '
'curl $endpoint | grep "Hi, SkyPilot here"',
# Kill controller and verify recovery
- _kill_and_wait_controller(),
+ smoke_tests_utils.kill_and_wait_controller('serve'),
# Verify service remains accessible after controller recovery
_SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=1),
_check_replica_in_status(name, [(1, False, 'READY')]),
@@ -1164,7 +1164,7 @@ def test_skyserve_ha_kill_during_provision():
f' s=$(sky serve status {name}); '
'done; echo "$s"',
# Kill controller during provisioning
- _kill_and_wait_controller(),
+ smoke_tests_utils.kill_and_wait_controller('serve'),
# Verify service eventually becomes ready
_SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=1),
_check_replica_in_status(name, [(1, False, 'READY')]),
@@ -1200,7 +1200,7 @@ def test_skyserve_ha_kill_during_pending():
f'{_SERVE_STATUS_WAIT.format(name=name)}; ',
_check_replica_in_status(name, [(1, False, 'PENDING')]),
# Kill controller during pending
- _kill_and_wait_controller(),
+ smoke_tests_utils.kill_and_wait_controller('serve'),
# Verify service eventually becomes ready and accessible
_SERVE_WAIT_UNTIL_READY.format(name=name, replica_num=1),
_check_replica_in_status(name, [(1, False, 'READY')]),
@@ -1250,7 +1250,7 @@ def test_skyserve_ha_kill_during_shutdown():
f' s=$(sky serve status {name}); '
'done; echo "$s"',
# Kill controller during shutdown
- _kill_and_wait_controller(),
+ smoke_tests_utils.kill_and_wait_controller('serve'),
# Even after the pod ready, `serve status` may return `Failed to connect to serve controller, please try again later.`
# So we need to wait for a while before checking the status again.
'sleep 10',
@@ -1268,16 +1268,3 @@ def test_skyserve_ha_kill_during_shutdown():
skypilot_config.ENV_VAR_GLOBAL_CONFIG: 'tests/skyserve/high_availability/config.yaml'
})
smoke_tests_utils.run_one_test(test)
-
-
-def _kill_and_wait_controller() -> str:
- """Kill the controller pod and wait for a new one to be ready."""
- return (
- 'initial_controller_pod=$(kubectl get pods -l "skypilot-head-node=1" -o jsonpath="{.items[0].metadata.name}"); '
- 'echo "Killing controller pod: $initial_controller_pod"; '
- 'kubectl delete pod $initial_controller_pod; '
- 'until new_controller_pod=$(kubectl get pods -l "skypilot-head-node=1" -o jsonpath="{.items[0].metadata.name}") && '
- '[ "$new_controller_pod" != "$initial_controller_pod" ] && kubectl get pod $new_controller_pod | grep "1/1"; do '
- ' echo "Waiting for new controller pod..."; sleep 5; '
- 'done; '
- 'echo "New controller pod ready: $new_controller_pod"')
diff --git a/tests/test_yamls/managed_jobs_ha_config.yaml b/tests/test_yamls/managed_jobs_ha_config.yaml
new file mode 100644
index 00000000000..229cc766dc3
--- /dev/null
+++ b/tests/test_yamls/managed_jobs_ha_config.yaml
@@ -0,0 +1,6 @@
+jobs:
+ controller:
+ resources:
+ infra: kubernetes
+ cpus: 2
+ high_availability: true
|
{
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Code Refactoring / Architectural Improvement"
}
|